nep-ecm New Economics Papers
on Econometrics
Issue of 2014‒12‒19
ten papers chosen by
Sune Karlsson
Örebro universitet

  1. By Gabriele Fiorentini; Enrique Sentana
  2. A Model Validation Procedure By Julia Polak; Maxwell L. King; Xibin Zhang
  3. Bayesian Inference for a Semi-Parametric Copula-based Markov Chain By Azam, Kazim; Pitt, Michael
  4. "Unified Improvements in Estimation of a Normal Covariance Matrix in High and Low Dimensions" By Hisayuki Tsukuma; Tatsuya Kubokawa
  5. "On Improved Shrinkage Estimators for Concave Loss" By Tatsuya Kubokawa; Éric Marchand; William E. Strawderman
  6. A note on the estimation of a Gamma-Variance process: Learning from a failure By Gian P. Cervellera; Marco P. Tucci
  7. Testing for Selection Bias By Joo, Joonhwi; LaLonde, Robert J.
  8. Generalized Dynamic Factor Models and Volatilities. Recovering the Market Volatility Shocks By Matteo Barigozzi; Marc Hallin
  9. Resurgence of instrument variable estimation and fallacy of endogeneity By Qin, Duo
  10. Variable Selection in Predictive MIDAS Models By C. Marsilli

  1. By: Gabriele Fiorentini (Università di Firenze); Enrique Sentana (CEMFI, Centro de Estudios Monetarios y Financieros)
    Abstract: We derive computationally simple and intuitive score tests of neglected serial correlation in unobserved component univariate models using frequency domain techniques. In some common situations in which the information matrix is singular under the null we derive extremum tests that are asymptotically equivalent to likelihood ratio tests, which become one-sided, and explain how to compute reliable Wald tests. We also explicitly relate the incidence of those problems to the model identification conditions and compare our tests with tests based on the reduced form prediction errors. Our Monte Carlo exercises assess the finite sample reliability and power of our proposed tests.
    Keywords: Extremum tests, Kalman filter, LM tests, singular information matrix, spectral maximum likelihood, Wiener-Kolmogorov filter.
    JEL: C22 C52 C12
    Date: 2014–10
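As a much-simplified illustration of the LM-test idea behind this paper (not the authors' frequency-domain procedure for unobserved component models), the following sketch tests residuals for neglected first-order serial correlation via the asymptotic result that n·r₁² is χ²(1) under the null; all parameter values are our own.

```python
import numpy as np
from scipy import stats

def lm_serial_corr_test(resid):
    """Score/LM-type test for first-order serial correlation:
    under H0 (no correlation), n * r1^2 is asymptotically chi2(1)."""
    e = np.asarray(resid) - np.mean(resid)
    n = len(e)
    r1 = np.dot(e[1:], e[:-1]) / np.dot(e, e)  # lag-1 autocorrelation
    stat = n * r1**2
    pval = stats.chi2.sf(stat, df=1)
    return stat, pval

rng = np.random.default_rng(0)
white = rng.standard_normal(500)      # H0 true: i.i.d. noise
stat0, p0 = lm_serial_corr_test(white)

ar = np.zeros(500)                    # H1: AR(1) with coefficient 0.5
for t in range(1, 500):
    ar[t] = 0.5 * ar[t-1] + white[t]
stat1, p1 = lm_serial_corr_test(ar)
```

The test rejects for the autocorrelated series but not for the white noise.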
  2. By: Julia Polak; Maxwell L. King; Xibin Zhang
    Abstract: Statistical models can play a crucial role in decision making. Traditional model validation tests typically make restrictive parametric assumptions about the model under the null and the alternative hypotheses. The majority of these tests examine one type of change at a time. This paper presents a method for determining whether new data continues to support the chosen model. We suggest using simulation and the kernel density estimator instead of assuming a parametric distribution for the data under the null hypothesis. This leads to a more versatile testing procedure, one that can be applied to test different types of models and look for a variety of divergences from the null hypothesis. Such a flexible testing procedure, in some cases, can also replace a range of tests that each test against particular alternative hypotheses. The procedure’s ability to recognize a change in the underlying model is demonstrated through AR(1) and linear models. We examine the power of our procedure to detect changes in the variance of the error term and the AR coefficient in the AR(1) model. In the linear model, we examine the performance of the procedure when there are changes in the error variance and error distribution, and when an economic cycle is introduced into the model. We find that the procedure has correct empirical size and high power to recognize the changes in the data generating process after 10 to 15 new observations, depending on the type and extent of the change.
    Keywords: Chow test, model validation, p-value, multivariate kernel density estimation, structural break
    JEL: C12 C14 C52 C53
    Date: 2014
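A minimal toy version of the simulate-then-smooth idea (our own sketch, not the paper's procedure): simulate the null distribution of a summary statistic under the fitted model, estimate its density with a kernel, and read a p-value off the estimated density for the new data. The AR(1) parameters, the sample-variance statistic, and the 15-observation window are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

def sim_ar1(phi, sigma, n, rng):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t-1] + sigma * rng.standard_normal()
    return x

phi0, sigma0 = 0.5, 1.0   # the "fitted" null model
n_new = 15                # number of new observations to validate

# Null distribution of a summary statistic (sample variance) of new data,
# smoothed with a kernel density estimator instead of a parametric form.
stats_null = np.array([sim_ar1(phi0, sigma0, n_new, rng).var()
                       for _ in range(2000)])
kde = gaussian_kde(stats_null)

def kde_pvalue(observed):
    # one-sided p-value: estimated null mass above the observed statistic
    return kde.integrate_box_1d(observed, np.inf)

# New data actually generated with a doubled error standard deviation
new = sim_ar1(phi0, 2.0, n_new, rng)
p = kde_pvalue(new.var())
```

A small p indicates the new observations no longer support the fitted model.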
  3. By: Azam, Kazim (Vrije Universiteit, Amsterdam); Pitt, Michael (Department of Economics, University of Warwick)
    Abstract: This paper presents a method for specifying a strictly stationary univariate time series model with particular emphasis on the marginal characteristics (fat-tailedness, skewness, etc.). It is the first time a non-parametric specification has been used in time series models with a specified marginal distribution. Through a copula distribution, the marginal aspects are separated out, and the information contained within the order statistics allows a discretely-valued time series to be modelled efficiently. Estimation is carried out by Bayesian methods. The method is invariant to the choice of copula family and to the level of heterogeneity in the random variable. Using a count time series of weekly firearm homicides in Cape Town, South Africa, we show that our method efficiently estimates the copula parameter representing the first-order Markov chain transition density.
    Keywords: Bayesian copula, discrete data, order statistics, semi-parametric, time series
    JEL: C11 C14 C20
    Date: 2014
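To make the copula-based Markov chain construction concrete, here is a simulation sketch using a Gaussian copula and a Poisson marginal as stand-ins (the paper's approach is semi-parametric and Bayesian; copula family, marginal, and parameter values here are our assumptions): a latent Gaussian AR(1) carries the copula dependence, and the discrete marginal is layered on top through the probability integral transform.

```python
import numpy as np
from scipy.stats import norm, poisson

rng = np.random.default_rng(2)
rho, n = 0.6, 3000

# Latent Gaussian AR(1): its lag-1 correlation rho is the copula parameter
# governing the first-order Markov transition density.
z = np.zeros(n)
for t in range(1, n):
    z[t] = rho * z[t-1] + np.sqrt(1 - rho**2) * rng.standard_normal()

u = norm.cdf(z)                    # uniforms linked by a Gaussian copula
counts = poisson.ppf(u, mu=2.0)    # discrete marginal (Poisson(2), illustrative)

# The copula dependence survives the discrete transform: the count series
# inherits positive serial dependence from the latent chain.
rho_latent = np.corrcoef(z[1:], z[:-1])[0, 1]
acf1_counts = np.corrcoef(counts[1:], counts[:-1])[0, 1]
```

Swapping the Poisson for any other marginal leaves the dependence structure, and hence the copula parameter, unchanged.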
  4. By: Hisayuki Tsukuma (Faculty of Medicine, Toho University); Tatsuya Kubokawa (Faculty of Economics, The University of Tokyo)
    Abstract: The problem of estimating a covariance matrix in multivariate linear regression models is addressed in a decision-theoretic framework. Although the Stein loss is a standard choice of loss function, it is not available in the high-dimensional case. In this paper, a new type of quadratic loss function, called the intrinsic loss, is suggested, and unified dominance results are derived under this loss, irrespective of the ordering of the dimension, the sample size and the rank of the regression coefficient matrix. In particular, using the Stein-Haff identity, we develop a key inequality that is useful for constructing a truncated and improved estimator based on the information contained in the sample means or the ordinary least squares estimator of the regression coefficients.
    Date: 2014–08
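The paper's decision-theoretic estimators are more refined than the following, but a generic linear-shrinkage sketch (our own, with an arbitrary weight w = 0.3) shows why high dimensions force some form of improvement over the sample covariance: with p > n the sample covariance is singular, while a convex combination with a scaled identity target is well-conditioned.

```python
import numpy as np

rng = np.random.default_rng(8)
n, p = 20, 50                    # high-dimensional case: p > n
X = rng.standard_normal((n, p))
S = X.T @ X / n                  # sample covariance, rank-deficient when p > n

def shrink_to_identity(S, w):
    """Linear shrinkage toward a scaled identity target."""
    p = S.shape[0]
    target = np.trace(S) / p * np.eye(p)
    return (1 - w) * S + w * target

S_shrunk = shrink_to_identity(S, w=0.3)

eig_S = np.linalg.eigvalsh(S)            # smallest eigenvalues are (numerically) zero
eig_shrunk = np.linalg.eigvalsh(S_shrunk)  # bounded away from zero
```

The shrunk matrix is invertible, so downstream quantities such as precision matrices remain computable.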
  5. By: Tatsuya Kubokawa (Faculty of Economics, The University of Tokyo); Éric Marchand (Université de Sherbrooke, Departement de mathématiques); William E. Strawderman (Rutgers University, Department of Statistics and Biostatistics,)
    Abstract: We consider minimax shrinkage estimation of a location vector of a spherically symmetric distribution under a loss function which is a concave function of the usual squared error loss. In particular for distributions which are scale mixtures of normals (and somewhat more generally), and for concave loss functions whose derivatives are completely monotone (and somewhat more generally), we give classes of minimax shrinkage estimators where the shrinkage constants are larger than those currently in the literature.
    Date: 2014–07
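The baseline the paper generalizes is James-Stein shrinkage under squared error loss. As a hedged sketch (the simulation settings are ours; the paper's concave-loss estimators use larger shrinkage constants), the positive-part James-Stein estimator already dominates the maximum likelihood estimator in mean squared error for p ≥ 3:

```python
import numpy as np

def james_stein(x, sigma2=1.0):
    """Positive-part James-Stein estimator of a p-dimensional normal mean
    (p >= 3): shrinks the observation toward the origin."""
    p = len(x)
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / np.dot(x, x))
    return shrink * x

rng = np.random.default_rng(3)
p, reps = 20, 2000
theta = np.full(p, 0.3)     # true mean, moderately close to the origin
mse_mle = mse_js = 0.0
for _ in range(reps):
    x = theta + rng.standard_normal(p)   # x ~ N(theta, I)
    mse_mle += np.sum((x - theta)**2)    # MLE is x itself
    mse_js += np.sum((james_stein(x) - theta)**2)
```

The simulated risk of the shrinkage estimator is well below that of the MLE in this near-origin configuration.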
  6. By: Gian P. Cervellera; Marco P. Tucci
    Abstract: This paper confirms that, as originally reported in Seneta (2004, p. 183), it is impossible to replicate Madan et al.'s (1998) results using log daily returns on the S&P 500 Index from January 1992 to September 1994. This failure leads to a close investigation of the computational problems associated with finding maximum likelihood estimates of the parameters of the popular VG model. Both standard econometric software, such as R, and non-standard optimization software, such as Ezgrad described in Tucci (2002), are used. The complexity of the log-likelihood function is studied. It turns out to be highly irregular, with many local optima, and can be extremely sensitive to very small changes in the sample used. Adding or removing a single observation may cause huge changes both in the maximum of the log-likelihood function and in the estimated parameter values.
    Keywords: Variance-Gamma, log stock returns, maximum likelihood estimation, globally optimizing procedures
    JEL: C58 C61 C63
    Date: 2014–10
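The kind of sensitivity check described above can be sketched as follows. The density is our reading of the Madan-Carr-Chang (σ, ν, θ) parameterization and should be treated as an assumption, as should all simulation settings; the point is only the mechanics of refitting after dropping one observation and comparing estimates.

```python
import numpy as np
from scipy.special import kv, gammaln
from scipy.optimize import minimize

def vg_negloglik(params, x):
    """Negative log-likelihood of the VG density in the (sigma, nu, theta)
    parameterization (assumed form, following Madan-Carr-Chang)."""
    sigma, nu, theta = params
    if sigma <= 0 or nu <= 0:
        return np.inf
    a = np.sqrt(2 * sigma**2 / nu + theta**2)
    z = np.abs(x) * a / sigma**2
    order = 1.0 / nu - 0.5
    logf = (np.log(2.0) + theta * x / sigma**2
            - np.log(nu) / nu - 0.5 * np.log(2 * np.pi)
            - np.log(sigma) - gammaln(1.0 / nu)
            + (1.0 / (2 * nu) - 0.25) * np.log(x**2 / a**2)
            + np.log(kv(order, z)))           # modified Bessel function K
    return -np.sum(logf)

# Simulate VG returns via the gamma time-change representation
rng = np.random.default_rng(4)
n, sigma, nu, theta = 700, 0.01, 0.5, 0.001
g = rng.gamma(shape=1.0 / nu, scale=nu, size=n)   # gamma subordinator increments
x = theta * g + sigma * np.sqrt(g) * rng.standard_normal(n)

# Refit after removing a single observation to probe sensitivity
fit_full = minimize(vg_negloglik, x0=[0.02, 1.0, 0.0], args=(x,),
                    method="Nelder-Mead")
fit_drop = minimize(vg_negloglik, x0=[0.02, 1.0, 0.0], args=(x[:-1],),
                    method="Nelder-Mead")
```

Restarting the optimizer from different initial points is the natural next step for probing the local optima the paper documents.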
  7. By: Joo, Joonhwi (University of Chicago); LaLonde, Robert J. (Harris School, University of Chicago)
    Abstract: This paper uses the control function to develop a framework for testing for selection bias. The idea behind our framework is that if the usual assumptions hold for matching or IV estimators, the control function identifies the presence and magnitude of potential selection bias. Averaging this correction term with respect to appropriate weights yields the degree of selection bias for alternative treatment effects of interest. One advantage of our framework is that it indicates when it is appropriate to use more efficient estimators of treatment effects, such as those based on least squares or matching. Another advantage of our approach is that it provides an estimate of the magnitude of the selection bias. We also show how this estimate can help when trying to infer program impacts for program participants not covered by LATE estimates.
    Keywords: selection bias, program evaluation, average treatment effects
    JEL: C21 C26 D04
    Date: 2014–09
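A classic special case of the control-function idea (a Heckman-style sketch with our own numbers, with the first-stage index assumed known rather than estimated) shows how the correction term both reveals and measures selection bias: adding the inverse Mills ratio to the selected-sample regression yields a coefficient that estimates the bias magnitude.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 40000
x = rng.standard_normal(n)
z = rng.standard_normal(n)          # drives selection, excluded from the outcome
u = rng.standard_normal(n)          # selection-equation error
e = 0.7 * u + np.sqrt(1 - 0.7**2) * rng.standard_normal(n)  # corr(e, u) = 0.7

d = z + u > 0                        # selection rule (first stage known here)
y = 1.0 + 2.0 * x + e                # outcome, observed only for selected units

# Naive OLS on the selected sample: the intercept absorbs the selection bias
X0 = np.column_stack([np.ones(d.sum()), x[d]])
b_naive = np.linalg.lstsq(X0, y[d], rcond=None)[0]

# Control function: add the inverse Mills ratio lambda = phi/Phi of the
# selection index; its coefficient estimates the selection-bias term.
lam = norm.pdf(z[d]) / norm.cdf(z[d])
X_cf = np.column_stack([np.ones(d.sum()), x[d], lam])
b_cf = np.linalg.lstsq(X_cf, y[d], rcond=None)[0]
```

A control-function coefficient near zero would have indicated that the more efficient uncorrected estimator is safe to use; here it is far from zero.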
  8. By: Matteo Barigozzi; Marc Hallin
    Keywords: volatility; dynamic factor models; block structure
    JEL: C32
    Date: 2014–11
  9. By: Qin, Duo
    Abstract: This paper investigates the nature of the IV method for tackling endogeneity. By tracing the rise and fall of the method in macroeconometrics and its subsequent revival in microeconometrics, it pins the method down to an implicit model respecification device: the circular causality of a simultaneous relation is broken by redefining it as an asymmetric one, conditioned on a non-optimal conditional expectation of the assumed endogenous explanatory variable, thereby rejecting that variable as a valid conditioning variable. This revealed nature explains why the IV route is popular for models where endogeneity is superfluous whereas measurement errors are the key concern.
    Keywords: endogeneity, instrumental variables, simultaneity, omitted variable bias, multicollinearity
    JEL: B23 C13 C18 C50
    Date: 2014
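For reference, the textbook setup the paper interrogates can be sketched in a few lines (our own numbers): an endogenous regressor makes OLS inconsistent, and two-stage least squares with a valid instrument recovers the structural coefficient.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 50000
z = rng.standard_normal(n)                        # instrument
u = rng.standard_normal(n)                        # unobserved confounder
x = 0.8 * z + u + 0.3 * rng.standard_normal(n)    # endogenous regressor
y = 1.5 * x + u + rng.standard_normal(n)          # true effect 1.5

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

# OLS is biased upward because u enters both x and y
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# 2SLS: replace x by its projection on the instrument, then regress
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
b_iv = np.linalg.lstsq(np.column_stack([np.ones(n), x_hat]), y, rcond=None)[0]
```

In the paper's reading, this correction amounts to a respecification: the regressor is replaced by a conditional expectation given the instrument.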
  10. By: C. Marsilli
    Abstract: In short-term forecasting, it is essential to take into account all available information on the current state of economic activity. Yet the fact that various time series are sampled at different frequencies prevents an efficient use of available data. In this respect, the Mixed-Data Sampling (MIDAS) model has proved to outperform existing tools by combining data series of different frequencies. However, major issues remain regarding the choice of explanatory variables. The paper first addresses this point by developing MIDAS-based dimension-reduction techniques and by introducing two novel approaches based on either penalized variable selection or Bayesian stochastic search variable selection. These features integrate a cross-validation procedure that allows automatic in-sample selection based on recent forecasting performance. The developed techniques are then assessed with regard to their power to forecast US economic growth over the period 2000-2013 using daily and monthly data jointly. Our model succeeds in identifying leading indicators and constructing an objective variable selection with broad applicability.
    Keywords: Forecasting, Mixed frequency data, MIDAS, Variable selection, GDP.
    JEL: C53 E37
    Date: 2014
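The penalized variable-selection idea can be illustrated with a generic lasso on a panel of high-frequency lags (our own toy setup with a hand-rolled proximal-gradient solver; the paper's MIDAS-specific penalization and Bayesian stochastic search are more elaborate): with many candidate lags and few low-frequency observations, the L1 penalty zeroes out irrelevant regressors.

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 120, 60                 # 120 low-frequency obs, 60 high-frequency lags
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[[0, 1, 2]] = [0.8, 0.5, 0.3]   # only the most recent lags matter
y = X @ beta_true + 0.5 * rng.standard_normal(n)

def lasso_ista(X, y, lam, n_iter=5000):
    """Plain proximal-gradient (ISTA) solver for the lasso:
    min over b of 0.5/n * ||y - Xb||^2 + lam * ||b||_1."""
    n_obs = len(y)
    L = np.linalg.norm(X, 2)**2 / n_obs      # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n_obs
        b = b - grad / L                                        # gradient step
        b = np.sign(b) * np.maximum(np.abs(b) - lam / L, 0.0)   # soft-threshold
    return b

b_hat = lasso_ista(X, y, lam=0.1)
selected = np.flatnonzero(np.abs(b_hat) > 1e-8)
```

The selected index set plays the role of the automatically chosen explanatory variables; in practice the penalty level would itself be tuned by cross-validation, as the paper does.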

This nep-ecm issue is ©2014 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.