nep-ecm New Economics Papers
on Econometrics
Issue of 2013‒05‒24
nineteen papers chosen by
Sune Karlsson
Örebro University

  1. Common Correlated Effects Estimation of Heterogeneous Dynamic Panel Data Models with Weakly Exogenous Regressors By Pesaran, Hashem; Chudik, Alexander
  2. Specification Tests for Lattice Processes By Javier Hidalgo; Myung Hwan Seo
  3. Binary Choice Models with Discrete Regressors: Identification and Misspecification By Tatiana Komarova
  4. Likelihood Inference in Some Finite Mixture Models By Xiaohong Chen; Maria Ponomareva; Elie Tamer
  5. Bayesian Model Averaging for Generalized Linear Models with Missing Covariates By Valentino Dardanoni; Giuseppe De Luca; Salvatore Modica; Franco Peracchi
  6. Bayesian multivariate Bernstein polynomial density estimation By Yanyun Zhao; Concepción Ausín; Michael P. Wiper
  7. A Bayesian non-parametric approach to asymmetric dynamic conditional correlation model with application to portfolio selection By Audrone Virbickaite; Concepción Ausín; Pedro Galeano
  8. One for all : nesting asymmetric stochastic volatility models By Xiuping Mao; Esther Ruiz; Helena Veiga
  9. Time-varying structural vector autoregressions and monetary policy: a corrigendum By Marco Del Negro; Giorgio Primiceri
  10. The Calculation of Some Limiting Distributions Arising in Near-Integrated Models with GLS Detrending By Marcus J. Chambers
  11. Optimal estimation of a large-dimensional covariance matrix under Stein’s loss By Olivier Ledoit; Michael Wolf
  12. Assessing Relative Volatility/Intermittency/Energy Dissipation By Ole E. Barndorff-Nielsen; Mikko S. Pakkanen; Jürgen Schmiegel
  13. A new distance for data sets (and probability measures) in a RKHS context By Gabriel Martos
  14. The Mahalanobis distance for functional data with applications to classification By Esdras Joseph; Pedro Galeano; Rosa E. Lillo
  15. A Regime Switching Skew-normal Model for Measuring Financial Crisis and Contagion By Joshua C.C. Chan; Cody Yu-Ling Hsiao; Renée A. Fry-McKibbin
  16. Lasso variable selection in functional regression By Nicola Mingotti; Rosa E. Lillo; Juan Romo
  17. The Number of Traded Shares: A Time Series Modelling Approach By Brännäs, Kurt
  18. Simultaneous-equations Analysis in Regional Science and Economic Geography By Mitze, Timo; Stephan, Andreas
  19. Testability of Complementarity in Economic Models with Multiple Equilibria By Taisuke Otsu; Yoshiyasu Rai

  1. By: Pesaran, Hashem; Chudik, Alexander
    Abstract: This paper extends the Common Correlated Effects (CCE) approach developed by Pesaran (2006) to heterogeneous panel data models with lagged dependent variable and/or weakly exogenous regressors. We show that the CCE mean group estimator continues to be valid but the following two conditions must be satisfied to deal with the dynamics: a sufficient number of lags of cross section averages must be included in individual equations of the panel, and the number of cross section averages must be at least as large as the number of unobserved common factors. We establish consistency rates, derive the asymptotic distribution, suggest using covariates to deal with the effects of multiple unobserved common factors, and consider jackknife and recursive de-meaning bias correction procedures to mitigate the small sample time series bias. Theoretical findings are accompanied by extensive Monte Carlo experiments, which show that the proposed estimators perform well so long as the time series dimension of the panel is sufficiently large.
    Keywords: Large panels, lagged dependent variable, cross sectional dependence, coefficient heterogeneity, estimation and inference, common correlated effects, unobserved common factors.
    JEL: C31 C33
    Date: 2013–05–17
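The abstract above builds on the idea that cross-section averages can proxy for unobserved common factors. As a rough illustration, here is a static simplification of the CCE mean group estimator of Pesaran (2006) on simulated data (not the authors' dynamic estimator; all variable names and parameter values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 200                               # cross-section and time dimensions
f = rng.standard_normal(T)                   # unobserved common factor
beta = 1.0 + 0.2 * rng.standard_normal(N)    # heterogeneous slopes

# Panel with cross-sectional dependence: y_it = beta_i x_it + gamma_i f_t + e_it
x = rng.standard_normal((N, T)) + 0.5 * f    # regressor also loads on the factor
gamma = rng.standard_normal(N)
y = beta[:, None] * x + gamma[:, None] * f + 0.3 * rng.standard_normal((N, T))

# Cross-section averages proxy for the unobserved factor
xbar, ybar = x.mean(axis=0), y.mean(axis=0)

def cce_slope(yi, xi):
    """OLS of y_i on x_i augmented with the cross-section averages."""
    Z = np.column_stack([xi, xbar, ybar, np.ones(T)])
    return np.linalg.lstsq(Z, yi, rcond=None)[0][0]

b_mg = np.mean([cce_slope(y[i], x[i]) for i in range(N)])   # mean group estimate
```

Without the `xbar`/`ybar` columns, each unit regression would suffer omitted-factor bias; the augmentation absorbs it.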
  2. By: Javier Hidalgo; Myung Hwan Seo
    Abstract: We consider an omnibus test for the correct specification of the dynamics of a sequence {x(t), t ∈ Z^d} on a lattice. As happens with causal models and d = 1, its asymptotic distribution is not pivotal and depends on the estimator of the unknown parameters of the model under the null hypothesis. One of the main goals of the paper is to provide a transformation to obtain an asymptotic distribution that is free of nuisance parameters. Second, we propose a bootstrap analogue of the transformation and show its validity. Third, we discuss the results when {x(t), t ∈ Z^d} are the errors of a parametric regression model. As a by-product, we also discuss the asymptotic normality of the least squares estimators under very mild conditions. Finally, we present a small Monte Carlo experiment to shed some light on the finite sample behaviour of our test.
    Keywords: Specification test, Spatial processes, Lattice, Spectral domain, CUSUM, Bootstrap.
    JEL: C21 C23
    Date: 2013–05
  3. By: Tatiana Komarova
    Abstract: In semiparametric binary response models, support conditions on the regressors are required to guarantee point identification of the parameter of interest. For example, one regressor is usually assumed to have continuous support conditional on the other regressors. In some instances, such conditions have precluded the use of these models; in others, practitioners have failed to consider whether the conditions are satisfied in their data. This paper explores the inferential question in these semiparametric models when the continuous support condition is not satisfied and all regressors have discrete support. I suggest a recursive procedure that finds sharp bounds on the components of the parameter of interest and outline several applications, focusing mainly on the models under the conditional median restriction, as in Manski (1985). After deriving closed-form bounds on the components of the parameter, I show how these formulas can help analyze cases where one regressor's support becomes increasingly dense. Furthermore, I investigate asymptotic properties of estimators of the identification set. I describe a relation between the maximum score estimation and support vector machines and also propose several approaches to address the problem of empty identification sets when a model is misspecified. Finally, I present a Monte Carlo experiment and an empirical illustration to compare several estimation techniques.
    Keywords: Binary response models, Discrete regressors, Partial identification, Misspecification, Support vector machines
    JEL: C2 C10 C14 C25
    Date: 2012–05
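The partial identification issue described above is easy to see numerically: with purely discrete regressors, Manski's maximum score objective is a step function whose maximizer is typically a set rather than a point. A small illustrative sketch (the data-generating process and grid are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x1 = rng.integers(-2, 3, n).astype(float)    # discrete regressor, support {-2,...,2}
x2 = rng.integers(0, 2, n).astype(float)     # binary regressor
e = rng.logistic(0.0, 1.0, n)                # median-zero error
y = (x1 + 0.5 * x2 + e >= 0).astype(float)   # true coefficients (1, 0.5)

def score(b2):
    """Manski's maximum score objective, coefficient on x1 normalized to 1."""
    pred = (x1 + b2 * x2 >= 0)
    return np.mean((2 * y - 1) * (2 * pred - 1))

grid = np.linspace(-2.0, 2.0, 401)
vals = np.array([score(b) for b in grid])
# with discrete regressors the maximizer is typically an interval, not a point
argmax_set = grid[vals >= vals.max() - 1e-12]
```

The flat regions of `vals` are exactly the source of set (rather than point) identification.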
  4. By: Xiaohong Chen (Cowles Foundation, Yale University); Maria Ponomareva (Dept. of Economics, Northern Illinois University); Elie Tamer (Dept. of Economics, Northwestern University)
    Abstract: Parametric mixture models are commonly used in applied work, especially empirical economics, where these models are often employed to learn, for example, about the proportions of various types in a given population. This paper examines the inference question on the proportions (mixing probability) in a simple mixture model in the presence of nuisance parameters when sample size is large. It is well known that likelihood inference in mixture models is complicated due to 1) lack of point identification, and 2) parameters (for example, mixing probabilities) whose true value may lie on the boundary of the parameter space. These issues cause the profiled likelihood ratio (PLR) statistic to admit asymptotic limits that differ discontinuously depending on how the true density of the data approaches the regions of singularities where there is lack of point identification. This lack of uniformity in the asymptotic distribution suggests that confidence intervals based on pointwise asymptotic approximations might lead to faulty inferences. This paper examines this problem in detail in a finite mixture model and provides possible fixes based on the parametric bootstrap. We examine the performance of this parametric bootstrap in Monte Carlo experiments and apply it to data from Beauty Contest experiments. We also examine small sample inferences and projection methods.
    Keywords: Finite mixtures, Parametric bootstrap, Profiled likelihood ratio statistic, Partial identification, Parameter on the boundary
    Date: 2013–05
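To make the boundary problem concrete: when testing a mixing probability of zero, the PLR statistic is not chi-squared, and a parametric bootstrap can approximate its null distribution instead. A minimal sketch with both component densities known (a deliberate simplification of the paper's setting; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def norm_pdf(x, mu):
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

def loglik(p, x, mu=2.0):
    # two-component mixture (1-p) N(0,1) + p N(mu,1), components known
    return np.sum(np.log((1 - p) * norm_pdf(x, 0.0) + p * norm_pdf(x, mu)))

def plr_stat(x):
    """Profiled LR statistic for H0: mixing probability p = 0 (a boundary point)."""
    grid = np.linspace(0.0, 1.0, 201)
    ll = np.array([loglik(p, x) for p in grid])
    return 2.0 * (ll.max() - loglik(0.0, x))

x = rng.standard_normal(300)      # data generated under the null
stat = plr_stat(x)

# parametric bootstrap: simulate from the fitted null model, recompute the statistic
boot = np.array([plr_stat(rng.standard_normal(300)) for _ in range(200)])
reject = stat > np.quantile(boot, 0.95)
```

The bootstrap critical value replaces the unreliable pointwise asymptotic approximation.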
  5. By: Valentino Dardanoni (University of Palermo); Giuseppe De Luca (University of Palermo); Salvatore Modica (University of Palermo); Franco Peracchi (University of Rome "Tor Vergata" and EIEF)
    Abstract: We address the problem of estimating generalized linear models (GLMs) when the outcome of interest is always observed, the values of some covariates are missing for some observations, but imputations are available to fill in the missing values. Under certain conditions on the missing-data mechanism and the imputation model, this situation generates a trade-off between bias and precision in the estimation of the parameters of interest. The complete cases are often too few, so precision is lost, but just filling in the missing values with the imputations may lead to bias when the imputation model is either incorrectly specified or uncongenial. Following the generalized missing-indicator approach originally proposed by Dardanoni et al. (2011) for linear regression models, we characterize this bias-precision trade-off in terms of model uncertainty regarding which covariates should be dropped from an augmented GLM for the full sample of observed and imputed data. This formulation is attractive because model uncertainty can then be handled very naturally through Bayesian model averaging (BMA). In addition to applying the generalized missing-indicator method to the wider class of GLMs, we make two extensions. First, we propose a block-BMA strategy that incorporates information on the available missing-data patterns and has the advantage of being computationally simple. Second, we allow the observed outcome to be multivariate, thus covering the case of seemingly unrelated regression equations models, and ordered, multinomial or conditional logit and probit models. Our approach is illustrated through an empirical application using the first wave of the Survey on Health, Aging and Retirement in Europe (SHARE).
    Date: 2013
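The core mechanic of BMA over "which covariates to keep" can be sketched with BIC-based approximate posterior weights (a generic textbook device, not the authors' block-BMA procedure; the model list and data are hypothetical):

```python
import numpy as np

def bic_weights(X, y, models):
    """Approximate posterior model weights: exp(-BIC/2), normalized across models.
    Each model is a tuple of covariate indices kept in the regression."""
    n = len(y)
    bics = []
    for cols in models:
        Z = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
        resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
        bics.append(n * np.log(resid @ resid / n) + Z.shape[1] * np.log(n))
    w = np.exp(-0.5 * (np.array(bics) - min(bics)))
    return w / w.sum()

rng = np.random.default_rng(3)
n = 500
X = rng.standard_normal((n, 3))
y = 1.0 + 2.0 * X[:, 0] + rng.standard_normal(n)    # only covariate 0 matters

models = [(), (0,), (1,), (0, 1), (0, 2), (0, 1, 2)]
w = bic_weights(X, y, models)
best = models[int(np.argmax(w))]
```

Posterior-weighted averages of the coefficient across `models` would then give the BMA estimate.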
  6. By: Yanyun Zhao; Concepción Ausín; Michael P. Wiper
    Abstract: This paper introduces a new approach to Bayesian nonparametric inference for densities on the hypercube, based on the use of a multivariate Bernstein polynomial prior. Posterior convergence rates under the proposed prior are obtained. Furthermore, a novel sampling scheme, based on the use of slice sampling techniques, is proposed for estimation of the posterior predictive density. The approach is illustrated with both simulated and real data examples.
    Keywords: Bayesian nonparametrics, Bernstein polynomials, Dirichlet process
    Date: 2013–06
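The building block behind such priors is the Bernstein polynomial density: a mixture of Beta densities whose weights come from a distribution function. A one-dimensional frequentist sketch using empirical-CDF weights (the Bayesian version in the paper puts a prior on the weights; degree and data here are illustrative):

```python
import numpy as np
from math import lgamma

def beta_pdf(x, a, b):
    logc = lgamma(a + b) - lgamma(a) - lgamma(b)
    return np.exp(logc + (a - 1) * np.log(x) + (b - 1) * np.log(1 - x))

def bernstein_density(x, data, m=20):
    """Mixture of Beta(k, m-k+1) densities with weights from the empirical CDF."""
    data = np.sort(data)
    Fn = lambda t: np.searchsorted(data, t, side="right") / data.size
    return sum((Fn(k / m) - Fn((k - 1) / m)) * beta_pdf(x, k, m - k + 1)
               for k in range(1, m + 1))

rng = np.random.default_rng(4)
data = rng.beta(2.0, 5.0, 1000)               # smooth density on [0, 1]
xs = np.linspace(0.01, 0.99, 99)
fhat = bernstein_density(xs, data)
area = fhat.sum() * (xs[1] - xs[0])           # crude check: should be close to 1
```

The degree `m` plays the role of a smoothing parameter; the multivariate version in the paper uses products of such Beta kernels on the hypercube.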
  7. By: Audrone Virbickaite; Concepción Ausín; Pedro Galeano
    Abstract: We use an asymmetric dynamic conditional correlation (ADCC) GJR-GARCH model to estimate the time-varying volatilities of financial returns. The ADCC-GJR-GARCH model takes into consideration the asymmetries in individual assets volatilities, as well as in the correlations. The errors are modeled using a flexible location-scale mixture of infinite Gaussian distributions and inference and estimation are carried out by relying on Bayesian non-parametrics. Finally, we carry out a simulation study to illustrate the flexibility of the new method and present a financial application using Apple and NASDAQ Industrial index data to solve a portfolio allocation problem.
    Keywords: Asymmetric dynamic conditional correlation, Bayesian non-parametrics, Dirichlet process mixtures, Portfolio allocation
    Date: 2013–05
  8. By: Xiuping Mao; Esther Ruiz; Helena Veiga
    Abstract: This paper proposes a new stochastic volatility model to represent the dynamic evolution of conditionally heteroscedastic series with leverage effect. Although there are already several models proposed in the literature with the same purpose, our main justification for a further new model is that it nests some of the most popular stochastic volatility specifications usually implemented to real time series of financial returns. We derive closed-form expressions of its statistical properties and, consequently, of those of the nested specifications. Some of these properties were previously unknown in the literature although the restricted models are often fitted by empirical researchers. By comparing the properties of the restricted models, we are able to establish the advantages and limitations of each of them. Finally, we analyze the performance of a MCMC estimator of the parameters and volatilities of the new proposed model and show that it has appropriate finite sample properties. Furthermore, estimating the new model using the MCMC estimator, one can correctly identify the restricted specifications. All the results are illustrated by estimating the parameters and volatilities of simulated time series and of a series of daily S&P500 returns.
    Keywords: EGARCH, Leverage effect, MCMC estimator, Stochastic News Impact Surface, Threshold Stochastic Volatility, WinBUGS
    Date: 2013–05
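The leverage effect that such nested specifications capture is easy to simulate with a basic asymmetric stochastic volatility model: an AR(1) log-volatility whose innovations correlate negatively with the return shocks. This is a generic textbook SV model, not the paper's nesting specification; all parameter values are illustrative:

```python
import numpy as np

def simulate_asym_sv(n, mu=-1.0, phi=0.95, sig_eta=0.2, rho=-0.5, seed=5):
    """Asymmetric SV: r_t = exp(h_t/2) eps_t, with log-volatility
    h_t = mu + phi (h_{t-1} - mu) + sig_eta eta_{t-1}, corr(eps, eta) = rho."""
    rng = np.random.default_rng(seed)
    e = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    h = np.empty(n)
    h[0] = mu
    for t in range(1, n):
        h[t] = mu + phi * (h[t - 1] - mu) + sig_eta * e[t - 1, 1]
    return np.exp(h / 2) * e[:, 0], h

r, h = simulate_asym_sv(50_000)
# leverage: today's return correlates negatively with tomorrow's absolute return
lev = np.corrcoef(r[:-1], np.abs(r[1:]))[0, 1]
```

With `rho < 0`, negative returns today predict higher volatility tomorrow, the asymmetry the nested models parameterize in different ways.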
  9. By: Marco Del Negro; Giorgio Primiceri
    Abstract: This note corrects a mistake in the estimation algorithm of the time-varying structural vector autoregression model of Primiceri (2005) and proposes a new algorithm that correctly applies the procedure proposed by Kim, Shephard, and Chib (1998) to the estimation of VAR or DSGE models with stochastic volatility. Relative to Primiceri (2005), the correct algorithm involves a different ordering of the various Markov Chain Monte Carlo steps.
    Keywords: Markov processes ; Regression analysis ; Econometric models
    Date: 2013
  10. By: Marcus J. Chambers
    Abstract: Many unit root test statistics are nowadays constructed using detrended data, with the method of GLS detrending being popular in the setting of a near-integrated model. This paper determines the properties of some associated limiting distributions when the GLS detrending is based on a linear time trend. A fundamental result for the moment generating function of two key functionals of the relevant stochastic process is provided and used to compute probability density functions and cumulative distribution functions, as well as means and variances, of the limiting distributions of some statistics of interest. Some further applications, including a comparison of limiting power functions and the consideration of a more complicated statistic, are also provided.
    Date: 2013–05–21
  11. By: Olivier Ledoit; Michael Wolf
    Abstract: This paper revisits the methodology of Stein (1975, 1986) for estimating a covariance matrix in the setting where the number of variables can be of the same magnitude as the sample size. Stein proposed to keep the eigenvectors of the sample covariance matrix but to shrink the eigenvalues. By minimizing an unbiased estimator of risk, Stein derived an ‘optimal’ shrinkage transformation. Unfortunately, the resulting estimator has two pitfalls: the shrinkage transformation can change the ordering of the eigenvalues and even make some of them negative. Stein suggested an ad hoc isotonizing algorithm that post-processes the transformed eigenvalues and thereby fixes these problems. We offer an alternative solution by minimizing the limiting expression of the unbiased estimator of risk under large-dimensional asymptotics, rather than the finite-sample expression. Compared to the isotonized version of Stein’s estimator, our solution is theoretically more elegant and also delivers improved performance, as evidenced by Monte Carlo simulations.
    Keywords: Large-dimensional asymptotics, nonlinear shrinkage estimation, random matrix theory, rotation equivariance, Stein’s loss
    JEL: C13
    Date: 2013–05
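The abstract's starting point, keeping the sample eigenvectors while shrinking the eigenvalues, can be illustrated with a simple linear shrinkage toward the mean eigenvalue (a stand-in for Stein's transformation and the authors' nonlinear method, shown only to convey the mechanic; note it preserves the eigenvalue ordering and positivity, the two pitfalls mentioned above):

```python
import numpy as np

def shrink_eigenvalues(X, rho=0.8):
    """Keep the sample eigenvectors, pull eigenvalues toward their mean.
    Illustrative linear shrinkage only, not Stein's or Ledoit-Wolf's rule."""
    S = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(S)
    shrunk = (1 - rho) * vals + rho * vals.mean()   # order-preserving, positive
    return vecs @ np.diag(shrunk) @ vecs.T

rng = np.random.default_rng(6)
p, n = 50, 60                        # dimension comparable to sample size
X = rng.standard_normal((n, p))      # true covariance = identity
Sigma_hat = shrink_eigenvalues(X)

# with p of the same order as n, shrinkage moves the estimate toward the truth
err_sample = np.linalg.norm(np.cov(X, rowvar=False) - np.eye(p))
err_shrunk = np.linalg.norm(Sigma_hat - np.eye(p))
```

In this large-dimensional regime the sample eigenvalues are badly dispersed around 1, which is why even crude shrinkage helps substantially.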
  12. By: Ole E. Barndorff-Nielsen (Aarhus University, THIELE Center and CREATES); Mikko S. Pakkanen (Aarhus University and CREATES); Jürgen Schmiegel (Aarhus University)
    Abstract: We introduce the notion of relative volatility/intermittency and demonstrate how relative volatility statistics can be used to estimate consistently the temporal variation of volatility/intermittency even when the data of interest are generated by a non-semimartingale, or a Brownian semistationary process in particular. While this estimation method is motivated by the assessment of relative energy dissipation in empirical data of turbulence, we apply it also to energy price data. Moreover, we develop a probabilistic asymptotic theory for relative power variations of Brownian semistationary processes and Ito semimartingales and discuss how it can be used for inference on relative volatility/intermittency.
    Keywords: Brownian semistationary process, energy dissipation, intermittency, power variation, turbulence, volatility.
    JEL: C10 C14
    Date: 2013–07–05
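The notion of relative volatility can be illustrated with a relative power variation statistic: the sum of absolute increments raised to a power over a subinterval, divided by the same sum over the whole sample. A sketch on a simulated path whose volatility doubles midway (parameter choices are illustrative, not from the paper):

```python
import numpy as np

def relative_power_variation(x, t0, t1, p=2):
    """Power variation of the increments on [t0, t1) relative to the whole path."""
    inc = np.abs(np.diff(x)) ** p
    return inc[t0:t1].sum() / inc.sum()

rng = np.random.default_rng(7)
n = 10_000
sigma = np.where(np.arange(n) < n // 2, 1.0, 2.0)   # volatility doubles midway
x = np.cumsum(sigma * rng.standard_normal(n))

r = relative_power_variation(x, 0, n // 2)   # share of "energy" in the first half
```

With p = 2 the statistic estimates the share of quadratic variation, here 1/(1+4) = 0.2, which is how the temporal variation of volatility/intermittency is assessed without estimating its level.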
  13. By: Gabriel Martos
    Abstract: In this paper we define distance functions for data sets (and distributions) in a RKHS context. To this aim we introduce kernels for data sets that provide a metrization of the set of point sets (the power set). An interesting point in the proposed kernel distance is that it takes into account the underlying (data) generating probability distributions. In particular, we propose kernel distances that rely on the estimation of density level sets of the underlying distribution, and can be extended from data sets to probability measures. The performance of the proposed distances is tested on a variety of simulated distributions plus a couple of real pattern recognition problems.
    Keywords: Probability measures, Kernel, Level sets, Distances for data sets
    Date: 2013–05
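A standard baseline for RKHS distances between data sets is the maximum mean discrepancy (MMD): the distance between the empirical kernel mean embeddings of the two samples. The paper's level-set-based distances differ, but this sketch conveys the basic RKHS construction (kernel bandwidth and data are illustrative):

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd(X, Y, gamma=0.5):
    """Kernel (RKHS) distance between data sets: distance between their
    empirical kernel mean embeddings (biased V-statistic version)."""
    return float(np.sqrt(rbf(X, X, gamma).mean() + rbf(Y, Y, gamma).mean()
                         - 2 * rbf(X, Y, gamma).mean()))

rng = np.random.default_rng(8)
X = rng.standard_normal((200, 2))
Y = rng.standard_normal((200, 2)) + 1.5   # shifted distribution
Z = rng.standard_normal((200, 2))         # same distribution as X

d_far, d_near = mmd(X, Y), mmd(X, Z)
```

Because the embedding depends on the whole sample, the distance is sensitive to the generating distributions, the property the abstract emphasizes.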
  14. By: Esdras Joseph; Pedro Galeano; Rosa E. Lillo
    Abstract: This paper presents a general notion of Mahalanobis distance for functional data that extends the classical multivariate concept to situations where the observed data are points belonging to curves generated by a stochastic process. More precisely, a new semi-distance for functional observations that generalizes the usual Mahalanobis distance for multivariate datasets is introduced. For that, the development uses a regularized square root inverse operator in Hilbert spaces. Some of the main characteristics of the functional Mahalanobis semi-distance are shown. Afterwards, new versions of several well known functional classification procedures are developed using the Mahalanobis distance for functional data as a measure of proximity between functional observations. The performance of several well known functional classification procedures is compared with those methods used in conjunction with the Mahalanobis distance for functional data, with positive results, through a Monte Carlo study and the analysis of two real data examples.
    Keywords: Classification methods Functional data analysis, Functional Mahalanobis semi-distance, Functional principal components
    Date: 2013–05
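The regularization described above amounts, in practice, to truncating a functional principal component expansion: project the difference of two curves onto the leading components and weight the squared scores by inverse eigenvalues. A discretized sketch (basis, truncation level and data are hypothetical choices, not the paper's):

```python
import numpy as np

def functional_mahalanobis(f, g, curves, k=2):
    """Regularized Mahalanobis semi-distance between two curves: project onto
    the leading k principal components of the sample of curves and weight
    squared score differences by the inverse eigenvalues."""
    C = np.cov(curves, rowvar=False)
    vals, vecs = np.linalg.eigh(C)
    vals, vecs = vals[::-1][:k], vecs[:, ::-1][:, :k]    # top-k eigenpairs
    diff = (f - g) @ vecs
    return float(np.sqrt(np.sum(diff ** 2 / vals)))

rng = np.random.default_rng(9)
t = np.linspace(0.0, 1.0, 50)
basis = np.vstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
scores = rng.standard_normal((100, 2)) * np.array([2.0, 1.0])
curves = scores @ basis + 0.1 * rng.standard_normal((100, 50))

d = functional_mahalanobis(curves[0], curves[1], curves)
```

Truncating at `k` components is the discrete analogue of the regularized square root inverse operator: it avoids dividing by near-zero eigenvalues.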
  15. By: Joshua C.C. Chan; Cody Yu-Ling Hsiao; Renée A. Fry-McKibbin
    Abstract: A regime switching skew-normal model for financial crisis and contagion is proposed in which we develop a new class of multiple-channel crisis and contagion tests. Crisis channels are measured through changes in ‘own’ moments of the mean, variance and skewness, while contagion is through changes in the covariance and co-skewness of the joint distribution of asset returns. In this framework: i) linear and non-linear dependence is allowed; ii) transmission channels are simultaneously examined; iii) crisis and contagion are distinguished and individually modeled; iv) the market in which a crisis originates is endogenous; and v) the timing of a crisis is endogenous. In an empirical application, we apply the proposed model to equity markets during the Great Recession using Bayesian model comparison techniques to assess the multiple channels of crisis and contagion. The results generally show that crisis and contagion are pervasive across Europe and the US. The second moment channels of crisis and contagion are systematically more evident than the first and third moment channels.
    Keywords: Great Recession, Crisis tests, Contagion tests, Co-skewness, Regime switching skew-normal model, Gibbs sampling, Bayesian model comparison
    JEL: C11 C34 G15
    Date: 2013–03
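The co-skewness channel referenced above is the third cross-moment of standardized returns. A minimal sketch of the sample statistic on simulated returns with asymmetric dependence (the data-generating process is purely illustrative):

```python
import numpy as np

def coskewness(x, y):
    """Sample co-skewness E[z_x^2 * z_y]: how squared (volatility-like) moves
    in one market co-move with returns in another."""
    zx = (x - x.mean()) / x.std()
    zy = (y - y.mean()) / y.std()
    return float(np.mean(zx ** 2 * zy))

rng = np.random.default_rng(10)
n = 10_000
common = rng.standard_normal(n)
x = common + rng.standard_normal(n)
# y crashes disproportionately when the common factor is low
y = common - 0.5 * np.maximum(-common, 0.0) ** 2 + rng.standard_normal(n)

cs = coskewness(x, y)                                        # markedly negative
indep = coskewness(rng.standard_normal(n), rng.standard_normal(n))   # near zero
```

Contagion tests of this type compare such cross-moments across crisis and non-crisis regimes.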
  16. By: Nicola Mingotti; Rosa E. Lillo; Juan Romo
    Abstract: Functional regression has been an active subject of research in the last two decades but still lacks a secure variable selection methodology. Lasso is a well known effective technique for parameter shrinkage and variable selection in regression problems. In this work we generalize the Lasso technique to select variables in the functional regression framework and show it performs well. In particular, we focus on the case of functional regression with scalar regressors and functional response. We reduce the associated functional optimization problem to a convex optimization on scalars, find its solutions, and stress their interpretability. We apply the technique to simulated data sets as well as to a new real data set: car velocity functions in low speed car accidents, a frequent cause of whiplash injuries. By “Functional Lasso” we discover which car characteristics most influence car speed and which can be considered not relevant.
    Keywords: Norm one penalization, Variable selection, Algebraic reduction, Convex optimization, Computer algebra
    Date: 2013–05
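The scalar-regressors/functional-response setup can be sketched by discretizing the response curves and applying an L1 penalty to the whole coefficient surface, solved by proximal gradient descent (ISTA). This is a generic illustration of L1 selection in that setting, not the paper's algebraic reduction; dimensions and penalty are hypothetical:

```python
import numpy as np

def lasso_functional(X, Y, lam=0.05, steps=2000):
    """Functional-response lasso via ISTA: Y (n x T discretized curves) on
    scalar covariates X (n x p), L1 penalty on the coefficient surface B (p x T)."""
    n, p = X.shape
    B = np.zeros((p, Y.shape[1]))
    lr = n / np.linalg.norm(X, 2) ** 2        # 1 / Lipschitz constant of the gradient
    for _ in range(steps):
        B = B - lr * (X.T @ (X @ B - Y) / n)  # gradient step on the squared loss
        B = np.sign(B) * np.maximum(np.abs(B) - lr * lam, 0.0)   # soft-threshold
    return B

rng = np.random.default_rng(11)
n, p, T = 200, 5, 30
t = np.linspace(0.0, 1.0, T)
X = rng.standard_normal((n, p))
B_true = np.zeros((p, T))
B_true[0], B_true[2] = np.sin(np.pi * t), t   # only covariates 0 and 2 matter
Y = X @ B_true + 0.1 * rng.standard_normal((n, T))

B_hat = lasso_functional(X, Y)
selected = set(np.where(np.abs(B_hat).max(axis=1) > 1e-3)[0].tolist())
```

Rows of `B_hat` belonging to irrelevant covariates are driven exactly to zero, which is the variable-selection property the abstract exploits.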
  17. By: Brännäs, Kurt (Department of Economics, Umeå School of Business and Economics)
    Abstract: This short paper proposes a characterization for the number of traded shares or trading volume in terms of its data generating process. Share ownership plays a vital role. An empirical illustration based on the Nokia stock is included. Long memory in trading volume is linked to the long memory feature of ownership.
    Keywords: Integer-valued; INAR; Ownership; Trading volume; Stock; Nokia
    JEL: C22 C51 C58 G10 G32
    Date: 2013–05–20
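The INAR framework named in the keywords models counts such as traded shares through binomial thinning. A minimal simulation of a Poisson INAR(1) process (parameter values are illustrative, not estimates for the Nokia data):

```python
import numpy as np

def simulate_inar1(n, alpha=0.6, lam=3.0, seed=12):
    """INAR(1): x_t = alpha ∘ x_{t-1} + eps_t, where ∘ is binomial thinning
    (each of the x_{t-1} units survives with probability alpha) and the
    arrivals eps_t are Poisson(lam)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n, dtype=int)
    x[0] = rng.poisson(lam / (1 - alpha))    # start near the stationary mean
    for t in range(1, n):
        x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
    return x

x = simulate_inar1(20_000)
mean_emp = x.mean()                           # stationary mean lam/(1-alpha) = 7.5
acf1 = np.corrcoef(x[:-1], x[1:])[0, 1]       # lag-1 autocorrelation ≈ alpha
```

The thinning operator keeps the process integer-valued, unlike a Gaussian AR(1), which is the point of the integer-valued time series approach.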
  18. By: Mitze, Timo (University of Southern Denmark); Stephan, Andreas (Jönköping International Business School)
    Abstract: This paper provides an overview of simultaneous equation models (SEM) in the context of analyses based on regional data. We describe various modelling approaches, highlight the close link of SEMs to theory, and also comment on the advantages and disadvantages of SEMs. We present selected empirical works using simultaneous-equations analysis in regional science and economic geography in order to show the wide scope for applications. We thereby classify the empirical contributions as either being structural model presentations or vector autoregressive (VAR) models. Finally, we provide the reader with some details on how the various models can be estimated with available software packages such as STATA, LIMDEP or Gauss.
    Keywords: Structural Equation Models; Regional Science and Economics; Empirical Applications; Software
    JEL: C33 C87
    Date: 2013–05–14
  19. By: Taisuke Otsu; Yoshiyasu Rai
    Abstract: This paper revisits testability of complementarity in economic models with multiple equilibria studied by Echenique and Komunjer (2009). We find that Echenique and Komunjer's (2009) testable implications on extreme quantiles can be implied by a weaker version of their tail condition without complementarity, and on the other hand, a slightly stronger version of complementarity implies their testable implications without the tail condition.
    Keywords: Complementarity, Testability, Quantile.
    JEL: C10
    Date: 2013–02

This nep-ecm issue is ©2013 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.