nep-ecm New Economics Papers
on Econometrics
Issue of 2017‒02‒12
seventeen papers chosen by
Sune Karlsson
Örebro universitet

  1. A Note on Optimal Inference in the Linear IV Model By Donald W.K. Andrews; Vadim Marmer; Zhengfei Yu
  2. Confidence Intervals for Projections of Partially Identified Parameters By Stoye, Joerg; Kaido, Hiroaki; Molinari, Francesca
  3. Representation, Estimation and Forecasting of the Multivariate Index-Augmented Autoregressive Model By Gianluca Cubadda; Barbara Guardabascio
  4. Multiplicative Conditional Correlation Models for Realized Covariance Matrices By BAUWENS, Luc; BRAIONE, Manuela; STORTI, Giuseppe
  5. Alternative moment conditions and an efficient GMM estimator for dynamic panel data models By Federico Zincenko
  6. Bayesian Semiparametric Forecasts of Real Interest Rate Data By DESCHAMPS, Philippe J.
  7. Estimation of a noisy subordinated Brownian Motion via two-scales power variations By Jose E. Figueroa-Lopez; K. Lee
  8. Affine Variance Swap Curve Models By Damir Filipović
  9. Mixture Normal Conditional Correlation Models By Maria Putintseva
  10. Spatial differentiation for sample selection models By Alex Klein; Guy Tchuente
  11. The Spatial Efficiency Multiplier and Random Effects in Spatial Stochastic Frontier Models By Glass, Anthony J.; Kenjegalieva, Karligash; Sickles, Robin C.; Weyman-Jones, Thomas
  12. A note on the identification and transmission of energy demand and supply shocks By Michelle, Gilmartin
  13. Measuring firm size distribution with semi-nonparametric densities By Lina Cortés; Andrés Mora-Valencia; Javier Perote
  14. A novel multivariate risk measure: the Kendall VaR By Matthieu Garcin; Dominique Guegan; Bertrand Hassani
  15. Solving Heterogeneous Estimating Equations with Gradient Forests By Athey, Susan; Tibshirani, Julie; Wager, Stefan
  16. Tail event driven networks of SIFIs By Cathy Yi-Hsuan Chen; Wolfgang Karl Härdle; Yarema Okhrin
  17. Negative binomial quasi-likelihood inference for general integer-valued time series models By Aknouche, Abdelhakim; Bendjeddou, Sara

  1. By: Donald W.K. Andrews (Cowles Foundation, Yale University); Vadim Marmer (University of British Columbia); Zhengfei Yu (University of Tsukuba)
    Abstract: This paper considers tests and confidence sets (CSs) concerning the coefficient on the endogenous variable in the linear IV regression model with homoskedastic normal errors and one right-hand side endogenous variable. The paper derives a finite-sample lower bound function for the probability that a CS constructed using a two-sided invariant similar test has infinite length and shows numerically that the conditional likelihood ratio (CLR) CS of Moreira (2003) is not always very close to this lower bound function. This implies that the CLR test is not always very close to the two-sided asymptotically-efficient (AE) power envelope for invariant similar tests of Andrews, Moreira, and Stock (2006) (AMS). On the other hand, the paper establishes the finite-sample optimality of the CLR test when the correlation between the structural and reduced-form errors, or between the two reduced-form errors, goes to 1 or -1 and other parameters are held constant, where optimality means achievement of the two-sided AE power envelope of AMS. These results cover the full range of (non-zero) IV strength. The paper investigates in detail scenarios in which the CLR test is not on the two-sided AE power envelope of AMS. Also, the paper shows via theory and numerical work that the CLR test is close to having greatest average power, where the average is over a grid of concentration parameter values and over pairs of alternative hypothesis values of the parameter of interest, uniformly over pairs of alternative hypothesis values and uniformly over the correlation between the structural and reduced-form errors.
    Keywords: Conditional likelihood ratio test, Confidence interval, Infinite length, Linear instrumental variables, Optimal test, Weighted average power, Similar test
    JEL: C12 C36
    Date: 2017–01
  2. By: Stoye, Joerg; Kaido, Hiroaki; Molinari, Francesca
    Abstract: This paper proposes a bootstrap-based procedure to build confidence intervals for single components of a partially identified parameter vector, and for smooth functions of such components, in moment (in)equality models. The extreme points of our confidence interval are obtained by maximizing/minimizing the value of the component (or function) of interest subject to the sample analog of the moment (in)equality conditions properly relaxed. The novelty is that the amount of relaxation, or critical level, is computed so that the component (or function) of θ, instead of θ itself, is uniformly asymptotically covered with prespecified probability. Calibration of the critical level is based on repeatedly checking feasibility of linear programming problems, rendering it computationally attractive. Computation of the extreme points of the confidence interval is based on a novel application of the response surface method for global optimization, which may prove of independent interest also for applications of other methods of inference in the moment (in)equalities literature. The critical level is by construction smaller (in finite sample) than the one used if projecting confidence regions designed to cover the entire parameter vector θ. Hence, our confidence interval is weakly shorter than the projection of established confidence sets (Andrews and Soares, 2010), if one holds the choice of tuning parameters constant. We provide simple conditions under which the comparison is strict. Our inference method controls asymptotic coverage uniformly over a large class of data generating processes. Our assumptions and those used in the leading alternative approach (a profiling based method) are not nested. We explain why we employ some restrictions that are not required by other methods and provide examples of models for which our method is uniformly valid but profiling based methods are not.
    JEL: C10 C14 C15
    Date: 2016
  3. By: Gianluca Cubadda (DEF and CEIS, University of Rome "Tor Vergata"); Barbara Guardabascio (ISTAT)
    Abstract: We examine the conditions under which each individual series that is generated by a vector autoregressive model can be represented as an autoregressive model that is augmented with the lags of few linear combinations of all the variables in the system. We call this modelling Multivariate Index-Augmented Autoregression (MIAAR). We show that the parameters of the MIAAR can be estimated by a switching algorithm that increases the Gaussian likelihood at each iteration. Since maximum likelihood estimation may perform poorly when the number of parameters gets larger, we propose a regularized version of our algorithm to handle a medium-large number of time series. We illustrate the usefulness of the MIAAR modelling both by empirical applications and simulations.
    Keywords: Multivariate index autoregressive models, reduced rank regression, dimension reduction, shrinkage estimation, macroeconomic forecasting
    JEL: C32
    Date: 2017–02–07
  4. By: BAUWENS, Luc (Université catholique de Louvain, CORE, Belgium); BRAIONE, Manuela (Université catholique de Louvain, CORE, Belgium); STORTI, Giuseppe (Università di Salerno)
    Abstract: We introduce a class of multiplicative dynamic models for realized covariance matrices assumed to be conditionally Wishart distributed. The multiplicative structure enables consistent three-step estimation of the parameters, starting by covariance targeting of a scale matrix. The dynamics of conditional variances and correlations are inspired by specifications akin to the consistent dynamic conditional correlation model of the multivariate GARCH literature, and estimation is performed by quasi maximum likelihood. Simulations show that in finite samples the three-step estimator has smaller bias and root mean squared error than the full estimator when the cross-sectional dimension increases. An empirical application illustrates the flexibility of these models in a low-dimensional setting, and another one illustrates their effectiveness and practical usefulness in high dimensional portfolio allocation strategies.
    Keywords: Dynamic conditional correlations, Wishart distribution, Multiplicative models, Realized covariances
    Date: 2016–11–24
  5. By: Federico Zincenko
    Abstract: This paper proposes a set of moment conditions for the estimation of linear dynamic panel data models. In the spirit of Chamberlain's (1982, 1984) approach, these conditions arise from parameterizing the relationship between covariates and unobserved time-invariant effects. A GMM framework is used to derive an optimal estimator, with no efficiency loss compared to classic alternatives like Arellano and Bond (1991) and Ahn and Schmidt (1995, 1997). Still, Monte Carlo results suggest that the new procedure performs better than these alternatives when covariates are non-stationary. The framework also leads to a very simple test for unobserved effects.
    Date: 2017–01
  6. By: DESCHAMPS, Philippe J. (Université catholique de Louvain, CORE, Belgium)
    Abstract: The non-hierarchical Dirichlet process prior has been mainly used for parameters of innovation distributions. It is, however, easy to apply to all the parameters (coefficients of covariates and innovation variance) of more general regression models. This paper investigates the predictive performance of a simple (non-hierarchical) Dirichlet process mixture of Gaussian autoregressions for forecasting monthly US real interest rate data. The results suggest that the number of mixture components increases sharply over time, and the predictive marginal likelihoods strongly dominate those of a benchmark autoregressive model. Unconditional predictive coverage is vastly improved in the mixture model.
    Keywords: Dirichlet process mixture, Bayesian nonparametrics, structural change, real interest rate
    JEL: C11 C14 C22 C53
    Date: 2016–11–01
  7. By: Jose E. Figueroa-Lopez; K. Lee
    Abstract: High frequency based estimation methods for a semiparametric pure-jump subordinated Brownian motion exposed to a small additive microstructure noise are developed building on the two-scales realized variations approach originally developed by Zhang et al. (2005) for the estimation of the integrated variance of a continuous Ito process. The proposed estimators are shown to be robust against the noise and, surprisingly, to attain better rates of convergence than their precursors, method of moment estimators, even in the absence of microstructure noise. Our main results give approximate optimal values for the number K of regular sparse subsamples to be used, which is an important tuning parameter of the method. Finally, a data-driven plug-in procedure is devised to implement the proposed estimators with the optimal K-value. The developed estimators exhibit superior performance as illustrated by Monte Carlo simulations and a real high-frequency data application.
    Date: 2017–02
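The two-scales construction this abstract builds on (Zhang et al., 2005) is simple to state: average the realized variances computed on K regular sparse subgrids, then subtract a bias correction proportional to the noise-dominated full-grid realized variance. A minimal illustrative sketch — the function name `tsrv` and the simulated settings below are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def tsrv(log_prices, K):
    """Two-scales realized variance estimator (Zhang et al., 2005 flavor).

    Averages realized variances over K regular sparse subgrids, then
    subtracts a correction based on the full-grid RV, whose finest-scale
    increments are dominated by microstructure noise."""
    n = len(log_prices) - 1
    rv_full = np.sum(np.diff(log_prices) ** 2)       # noise-dominated fine-scale RV
    rv_sub = np.mean([np.sum(np.diff(log_prices[k::K]) ** 2) for k in range(K)])
    n_bar = (n - K + 1) / K                          # average subsample size
    return rv_sub - (n_bar / n) * rv_full
```

In the paper's setting the choice of K is the key tuning parameter; the abstract reports approximately optimal values together with a data-driven plug-in rule for selecting it.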
  8. By: Damir Filipović (Ecole Polytechnique Fédérale de Lausanne and Swiss Finance Institute)
    Abstract: This paper provides a brief overview of the stochastic modeling of variance swap curves. Focus is on affine factor models. We propose a novel drift parametrization which assures that the components of the state process can be matched with any pre-specified points on the variance swap curve. This should facilitate the empirical estimation for such stochastic models. Moreover, sufficient and yet flexible conditions that guarantee positivity of the rates are readily available. We finally discuss the relation and differences to affine yield-factor models introduced by Duffie and Kan. It turns out that, in contrast to variance swap models, their yield factor representation requires imposing constraints on systems of nonlinear equations that are often not solvable in closed form.
    Keywords: Affine variance swap rate factor models, Variance swaps, VIX
    JEL: G13 C51
  9. By: Maria Putintseva (University of Zurich, Ecole Polytechnique Fédérale de Lausanne, and Swiss Finance Institute)
    Abstract: I propose a class of hybrid models to describe and predict the dynamics of a multivariate stationary random vector, e.g. a vector of stock returns. These models combine essential features of the multivariate mixture normal distribution and the conditional correlation models. I describe in detail the expectation-maximization algorithm, which makes the parameter estimation feasible and fast for virtually any random vector length. I fit the suggested models to five data sets, consisting of vectors of stock returns, with the maximal vector length of fifteen stocks. The predictive ability of this model class is compared to other widely used multivariate models, and it turns out that my models provide the best forecasts, both on average and for extreme negative returns. All necessary formulas to apply these models for important financial objectives are also provided.
    Keywords: Finite Mixtures, Dynamic Conditional Correlation, Forecasting, Multivariate Modelling, Predictive Ability
    JEL: C51 C53 G17
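The expectation-maximization machinery the abstract refers to can be illustrated on a static univariate normal mixture; the dynamic conditional correlation layer of the actual models is omitted here, and the function name `em_mixture` is a placeholder, not the paper's code:

```python
import numpy as np

def em_mixture(x, n_comp=2, n_iter=200):
    """Plain EM for a univariate Gaussian mixture (illustrative sketch)."""
    n = len(x)
    w = np.full(n_comp, 1.0 / n_comp)                            # mixture weights
    mu = np.quantile(x, (np.arange(n_comp) + 0.5) / n_comp)      # spread-out starting means
    var = np.full(n_comp, np.var(x))
    for _ in range(n_iter):
        # E-step: posterior responsibilities of each component
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted moment updates
        nk = resp.sum(axis=0)
        w, mu = nk / n, (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var
```

Each iteration increases the mixture likelihood, which is what keeps the estimation "feasible and fast" even as the number of components or the vector length grows in the full model.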
  10. By: Alex Klein; Guy Tchuente
    Abstract: This paper uses spatial differencing to estimate parameters in sample selection models with unobserved heterogeneity. We show that under the assumption of smooth changes across the space of unobserved site-specific heterogeneity and selection probability, key parameters of a sample selection model are identified. A simple estimation procedure is proposed and the formula for the estimator of the standard error is derived.
    Keywords: Sample selection; spatial difference; instrumental variable
    JEL: C13 C31
    Date: 2017–01
  11. By: Glass, Anthony J. (Loughborough University); Kenjegalieva, Karligash (Loughborough University); Sickles, Robin C. (Rice University and Loughborough University); Weyman-Jones, Thomas (Loughborough University)
    Abstract: We extend the emerging literature on spatial frontier models in three respects. Firstly, we account for latent heterogeneity by developing a maximum likelihood random effects spatial autoregressive (SAR) stochastic frontier model. Secondly, to analyze the finite sample properties of a spatial stochastic frontier model we develop a Monte Carlo experimental methodology which we then apply. Thirdly, we introduce the concept of the spatial efficiency multiplier and show that the efficiency benchmark for a productive unit from the structural form of a spatial stochastic frontier model differs from the efficiency benchmark from the reduced form of the model.
    JEL: C23 C51 D24 Q10
    Date: 2016–11
  12. By: Michelle, Gilmartin
    Abstract: This paper proposes and implements a novel structural VAR approach for identifying oil demand and supply shocks. In this approach, we search for two shocks, in the context of a VAR model, which explain the majority of the k-step-ahead prediction error variances of oil prices. Finally, we compare our approach with alternative identification schemes based on sign restrictions, and we show that the proposed method is a useful tool for decomposing oil shocks.
    Keywords: oil price shocks; demand and supply; Bayesian VAR; MCMC
    JEL: B4 E32 N10 Q43
    Date: 2016–01–09
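For a single shock and a single target variable, searching for the shock that explains the largest share of the k-step-ahead prediction error variance reduces to a principal-eigenvector problem. A hedged sketch of that reduction — the function name, array layout, and single-shock simplification are assumptions, not the paper's implementation:

```python
import numpy as np

def max_share_shock(theta, target, horizon):
    """Pick the orthogonal shock explaining the largest share of the
    horizon-step forecast-error variance of one target variable.

    `theta` holds orthogonalized impulse responses with shape (H, n, n),
    theta[h][i, j] = response of variable i to shock j at lag h.
    The FEV contribution of a rotated shock q is the quadratic form q'Sq,
    so the best q is the principal eigenvector of S."""
    S = sum(np.outer(theta[h][target], theta[h][target])
            for h in range(horizon + 1))
    eigval, eigvec = np.linalg.eigh(S)       # ascending eigenvalues
    q = eigvec[:, -1]                        # principal eigenvector
    share = eigval[-1] / np.trace(S)         # share of FEV explained
    return q, share
```

Identifying two such shocks, as the paper does, amounts to taking the top two eigenvectors of the same quadratic form; the sketch above shows only the one-shock case.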
  13. By: Lina Cortés; Andrés Mora-Valencia; Javier Perote
    Abstract: In this article, we propose a new methodology based on a (log) semi-nonparametric (log- SNP) distribution that nests the lognormal and enables better fits in the upper tail of the distribution through the introduction of new parameters. We test the performance of the lognormal and log-SNP distributions capturing firm size, measured through a sample of US firms in 2004-2015. Taking different levels of aggregation by type of economic activity, our study shows that the log-SNP provides a better fit of the firm size distribution. We also formally introduce the multivariate log-SNP distribution, which encompasses the multivariate lognormal, to analyze the estimation of the joint distribution of the value of the firm’s assets and sales. The results suggest that sales are a better firm size measure, as indicated by other studies in the literature.
    Keywords: Firm size distribution; Heavy tail distributions; Semi-nonparametric modeling; Bivariate distributions.
    JEL: C14 C53 L11
    Date: 2017–01–16
  14. By: Matthieu Garcin (Natixis Asset Management and LabEx ReFi); Dominique Guegan (Centre d'Economie de la Sorbonne and LabEx ReFi); Bertrand Hassani (Grupo Santander and Centre d'Economie de la Sorbonne and LabEx ReFi)
    Abstract: The definition of multivariate Value at Risk is a challenging problem, whose most common solutions are given by the lower- and upper-orthant VaRs, which are based on copulas: the lower-orthant VaR is the quantile of the multivariate distribution function, whereas the upper-orthant VaR is the quantile of the multivariate survival function. In this paper we introduce a total-order multivariate Value at Risk, referred to as the Kendall Value at Risk, which links the copula approach to an alternative definition of multivariate quantiles, known as the quantile surface, which is not used in finance, to our knowledge. More precisely, we transform the notion of orthant VaR using the Kendall function so as to obtain a multivariate VaR with some advantageous properties compared to the standard orthant VaR: it is based on a total order and, for a non-atomic density function supported on R^d, there is no longer any distinction between the d-dimensional VaRs based on the distribution function or on the survival function. We quantify the differences between this new Kendall VaR and the orthant VaRs. In particular, we show that the Kendall VaR is less (respectively more) conservative than the lower-orthant (resp. upper-orthant) VaR. The definition and properties of the Kendall VaR are illustrated using Gumbel and Clayton copulas with lognormal marginal distributions and several levels of risk.
    Keywords: Value at Risk; multivariate quantile; risk measure; Kendall function; copula; total order
    JEL: C1 C6
    Date: 2017–01
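For Archimedean copulas such as the Clayton copula used in the paper's illustrations, the Kendall function K(t) = P(C(U) ≤ t) has a closed form, so translating an orthant VaR level into a Kendall VaR level is a one-dimensional inversion. An illustrative sketch — function names are placeholders, and the paper's full construction also involves the lognormal marginals, which are omitted here:

```python
import numpy as np

def clayton_kendall(t, theta):
    """Kendall function of a Clayton copula: K(t) = t + t*(1 - t**theta)/theta,
    obtained from the Archimedean formula K(t) = t - phi(t)/phi'(t)."""
    t = np.asarray(t, dtype=float)
    return t + t * (1.0 - t ** theta) / theta

def kendall_level(alpha, theta, tol=1e-10):
    """Invert K by bisection: the copula level t* with K(t*) = alpha.
    The Kendall VaR sits on the quantile surface C(u) = t*, whereas the
    lower-orthant VaR uses C(u) = alpha directly."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if clayton_kendall(mid, theta) < alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Since K(t) ≥ t, the inverted level t* lies below alpha, which is one way to see the abstract's claim that the Kendall VaR is less conservative than the lower-orthant VaR.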
  15. By: Athey, Susan (Stanford University); Tibshirani, Julie (Palantir Technologies); Wager, Stefan (Columbia University and Stanford University)
    Abstract: Forest-based methods are being used in an increasing variety of statistical tasks, including causal inference, survival analysis, and quantile regression. Extending forest-based methods to these new statistical settings requires specifying tree-growing algorithms that are targeted to the task at hand, and the ad-hoc design of such algorithms can require considerable effort. In this paper, we develop a unified framework for the design of fast tree-growing procedures for tasks that can be characterized by heterogeneous estimating equations. The resulting gradient forest consists of trees grown by recursively applying a pre-processing step where we label each observation with gradient-based pseudo-outcomes, followed by a regression step that runs a standard CART regression split on these pseudo-outcomes. We apply our framework to two important statistical problems, non-parametric quantile regression and heterogeneous treatment effect estimation via instrumental variables, and we show that the resulting procedures considerably outperform baseline forests whose splitting rules do not take into account the statistical question at hand. Finally, we prove the consistency of gradient forests, and establish a central limit theorem. Our method will be available as an R-package, gradientForest, which draws from the ranger package for random forests.
    Date: 2016–10
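The two-step recipe in the abstract — label observations with gradient-based pseudo-outcomes, then run a standard CART regression split on them — can be sketched for the quantile-regression case. The function names and the single-split simplification below are assumptions for illustration, not the gradientForest implementation:

```python
import numpy as np

def quantile_pseudo_outcomes(y, q):
    """Pseudo-outcomes for quantile regression at level q: relabel each
    observation by the score q - 1{y_i <= parent-node quantile}, so a
    regression split on these labels targets heterogeneity in the
    conditional quantile."""
    theta = np.quantile(y, q)                      # parent-node estimate
    return q - (y <= theta).astype(float)

def best_split(x, pseudo):
    """One CART-style split on a single feature: maximize the
    between-child separation sum_c n_c * mean_c^2 of the pseudo-outcomes
    (equivalent to minimizing within-child squared error)."""
    order = np.argsort(x)
    xs, ps = x[order], pseudo[order]
    csum, total = np.cumsum(ps), ps.sum()
    best_gain, cut = -np.inf, None
    for i in range(1, len(xs)):
        gain = csum[i - 1] ** 2 / i + (total - csum[i - 1]) ** 2 / (len(xs) - i)
        if gain > best_gain:
            best_gain, cut = gain, 0.5 * (xs[i - 1] + xs[i])
    return cut
```

The full method grows entire forests this way, cycling the pseudo-outcome labeling and splitting recursively over features; the sketch isolates one labeling pass and one split.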
  16. By: Cathy Yi-Hsuan Chen; Wolfgang Karl Härdle; Yarema Okhrin
    Abstract: The interdependence, dynamics and riskiness of financial institutions are the key features frequently tackled in financial econometrics. We propose a Tail Event driven Network Quantile Regression (TENQR) model which addresses these three aspects. More precisely, our framework captures the risk propagation and dynamics in terms of a quantile (or expectile) autoregression involving network effects quantified through an adjacency matrix. To reflect the nature and risk content of systemic risk, the construction of the adjacency matrix is suggested to include tail event covariates. The model is evaluated using the SIFIs (systemically important financial institutions) identified by the Financial Stability Board (FSB) as main players in the global financial system. Its risk decomposition analysis identifies the systemic importance of SIFIs and thus provides measures for the required level of additional loss absorbency. It is discovered that the network effect, as a function of the tail probability, becomes more pronounced in stress situations and affects SIFIs located in different geographic regions differently.
    Keywords: systemic risk; network analysis; network autoregression
    JEL: C01 C14 C58 C45 G01 G15 G31
    Date: 2017–01
  17. By: Aknouche, Abdelhakim; Bendjeddou, Sara
    Abstract: Two negative binomial quasi-maximum likelihood estimates (NB-QMLEs) for a general class of count time series models are proposed. The first one is the profile NB-QMLE calculated while arbitrarily fixing the dispersion parameter of the negative binomial likelihood. The second one, termed two-stage NB-QMLE, consists of four stages estimating both conditional mean and dispersion parameters. It is shown that the two estimates are consistent and asymptotically Gaussian under mild conditions. Moreover, the two-stage NB-QMLE enjoys a certain asymptotic efficiency property provided that a negative binomial link function relating the conditional mean and conditional variance is specified. The proposed NB-QMLEs are compared with the Poisson QMLE asymptotically and in finite samples for various well-known particular classes of count time series models such as the (Poisson and negative binomial) Integer GARCH model and the INAR(1) model. Applications to two real datasets are given.
    Keywords: Integer-valued time series models, Integer GARCH, Integer AR, Generalized Linear Models, Quasi-likelihood, Geometric QMLE, Negative Binomial QMLE, Poisson QMLE, consistency and asymptotic normality.
    JEL: C01 C13 C18 C51
    Date: 2016–12–06
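The profile NB-QMLE can be sketched for an Integer GARCH(1,1) conditional mean: fix the negative binomial dispersion r arbitrarily and maximize the resulting quasi-likelihood over the mean parameters only. An illustrative sketch assuming numpy and scipy are available — function names, starting values, and bounds are placeholders, not the paper's code:

```python
import numpy as np
from scipy.optimize import minimize

def nb_quasi_loglik(params, x, r=2.0):
    """Profile negative binomial quasi-log-likelihood for an INGARCH(1,1)
    conditional mean lam_t = w + a*x_{t-1} + b*lam_{t-1}, with the NB
    dispersion r fixed arbitrarily (terms constant in the parameters
    are dropped)."""
    w, a, b = params
    lam = np.empty_like(x, dtype=float)
    lam[0] = np.mean(x)
    for t in range(1, len(x)):
        lam[t] = w + a * x[t - 1] + b * lam[t - 1]
    return np.sum(x * np.log(lam / (lam + r)) + r * np.log(r / (lam + r)))

def fit_profile_nbqmle(x, r=2.0):
    """Maximize the profile NB quasi-likelihood over the mean parameters."""
    res = minimize(lambda p: -nb_quasi_loglik(p, x, r),
                   x0=[np.mean(x) * 0.2, 0.3, 0.3],
                   bounds=[(1e-4, None), (0.0, 0.99), (0.0, 0.99)])
    return res.x
```

Because r only rescales the quasi-score, the resulting estimator of the mean parameters stays consistent even when r is misspecified, which is the point of the profile construction.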

This nep-ecm issue is ©2017 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.