
on Econometrics 
By:  Gabriele Fiorentini (Università di Firenze); Enrique Sentana (CEMFI, Centro de Estudios Monetarios y Financieros) 
Abstract:  We derive computationally simple and intuitive score tests of neglected serial correlation in unobserved component univariate models using frequency domain techniques. In some common situations in which the information matrix is singular under the null, we derive extremum tests that are asymptotically equivalent to likelihood ratio tests, which become one-sided, and explain how to compute reliable Wald tests. We also explicitly relate the incidence of those problems to the model identification conditions and compare our tests with tests based on the reduced form prediction errors. Our Monte Carlo exercises assess the finite sample reliability and power of our proposed tests. 
Keywords:  Extremum tests, Kalman filter, LM tests, singular information matrix, spectral maximum likelihood, Wiener-Kolmogorov filter. 
JEL:  C22 C52 C12 
Date:  2014–10 
URL:  http://d.repec.org/n?u=RePEc:cmf:wpaper:wp2014_1406&r=ecm 
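The "spectral maximum likelihood" in the keywords refers to the Whittle approximation, which evaluates the Gaussian likelihood in the frequency domain through the periodogram. A minimal stand-alone sketch for an AR(1) follows; this is an illustrative stand-in under our own choice of names and example, not the authors' code:

```python
import cmath
import math
import random

def periodogram(x):
    """Naive DFT periodogram I(w_j) at the Fourier frequencies w_j = 2*pi*j/n."""
    n = len(x)
    out = []
    for j in range(1, (n - 1) // 2 + 1):
        d = sum(x[t] * cmath.exp(-2j * math.pi * j * t / n) for t in range(n))
        out.append((2 * math.pi * j / n, abs(d) ** 2 / (2 * math.pi * n)))
    return out

def ar1_spectrum(w, phi, sigma2):
    """AR(1) spectral density f(w) = sigma2 / (2*pi*|1 - phi*exp(-iw)|^2)."""
    return sigma2 / (2 * math.pi * abs(1 - phi * cmath.exp(-1j * w)) ** 2)

def whittle_loglik(x, phi, sigma2):
    """Whittle (spectral) log-likelihood: -sum_j [log f(w_j) + I(w_j)/f(w_j)]."""
    total = 0.0
    for w, I in periodogram(x):
        f = ar1_spectrum(w, phi, sigma2)
        total += math.log(f) + I / f
    return -total

# simulate an AR(1) with phi = 0.6 and compare the spectral likelihood
# at the true parameter against a white-noise specification
rng = random.Random(3)
x, z = [], 0.0
for _ in range(200):
    z = 0.6 * z + rng.gauss(0.0, 1.0)
    x.append(z)
```

A score (LM) test against neglected serial correlation would differentiate this criterion with respect to the extra dependence parameters at the null; the sketch only shows the objective function itself.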
By:  Julia Polak; Maxwell L. King; Xibin Zhang 
Abstract:  Statistical models can play a crucial role in decision making. Traditional model validation tests typically make restrictive parametric assumptions about the model under the null and the alternative hypotheses. The majority of these tests examine one type of change at a time. This paper presents a method for determining whether new data continue to support the chosen model. We suggest using simulation and the kernel density estimator instead of assuming a parametric distribution for the data under the null hypothesis. This leads to a more versatile testing procedure, one that can be applied to test different types of models and look for a variety of different types of divergences from the null hypothesis. Such a flexible testing procedure, in some cases, can also replace a range of tests that each test against particular alternative hypotheses. The procedure’s ability to recognize a change in the underlying model is demonstrated through AR(1) and linear models. We examine the power of our procedure to detect changes in the variance of the error term and the AR coefficient in the AR(1) model. In the linear model, we examine the performance of the procedure when there are changes in the error variance and error distribution, and when an economic cycle is introduced into the model. We find that the procedure has correct empirical size and high power to recognize changes in the data generating process after 10 to 15 new observations, depending on the type and extent of the change. 
Keywords:  Chow test, model validation, p-value, multivariate kernel density estimation, structural break 
JEL:  C12 C14 C52 C53 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:201421&r=ecm 
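The core of the procedure described above, simulating the model under the null, estimating the null distribution of a statistic with a kernel density estimator, and reading off a p-value for the new data, can be sketched in a simplified one-dimensional form. The choice of test statistic (the sample variance of the new observations) and all tuning values are ours, purely for illustration:

```python
import math
import random

def simulate_ar1_stat(phi, sigma, n, rng):
    """Simulate n new observations from an AR(1) under the null and return a
    summary statistic (here simply the sample variance)."""
    x, vals = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, sigma)
        vals.append(x)
    m = sum(vals) / n
    return sum((v - m) ** 2 for v in vals) / (n - 1)

def kde_pvalue(stat_obs, sims, h):
    """Two-sided p-value from a Gaussian-kernel estimate of the statistic's
    null distribution (the KDE's CDF is an average of normal CDFs)."""
    def kde_cdf(t):
        return sum(0.5 * (1.0 + math.erf((t - s) / (h * math.sqrt(2.0))))
                   for s in sims) / len(sims)
    lower = kde_cdf(stat_obs)
    return 2.0 * min(lower, 1.0 - lower)

rng = random.Random(0)
# null distribution of the statistic for batches of 15 new observations
sims = [simulate_ar1_stat(0.5, 1.0, 15, rng) for _ in range(2000)]
mean = sum(sims) / len(sims)
sd = (sum((s - mean) ** 2 for s in sims) / (len(sims) - 1)) ** 0.5
h = 1.06 * sd * len(sims) ** -0.2        # Silverman's rule-of-thumb bandwidth
# "new" data generated with a doubled error s.d.: the p-value indicates
# whether the null model still supports the incoming observations
stat_new = simulate_ar1_stat(0.5, 2.0, 15, rng)
p = kde_pvalue(stat_new, sims, h)
```

The paper's procedure is multivariate and considers several statistics jointly; the same simulate-then-smooth logic applies.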
By:  Azam, Kazim (Vrije Universiteit, Amsterdam); Pitt, Michael (Department of Economics, University of Warwick) 
Abstract:  This paper presents a method to specify a strictly stationary univariate time series model with particular emphasis on the marginal characteristics (fat-tailedness, skewness, etc.). It is the first time that a nonparametric specification has been used in time series models with a specified marginal distribution. Through a copula distribution, the marginal aspects are separated out, and the information contained within the order statistics allows a discrete-valued time series to be modelled efficiently. Estimation is carried out by Bayesian methods. The method is invariant to the copula family chosen and to the level of heterogeneity in the random variable. Using a count time series of weekly firearm homicides in Cape Town, South Africa, we show that our method efficiently estimates the copula parameter representing the first-order Markov chain transition density. 
Keywords:  Bayesian copula, discrete data, order statistics, semiparametric, time series 
JEL:  C11 C14 C20 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:wrk:warwec:1051&r=ecm 
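To illustrate how a copula separates the marginal distribution from the first-order Markov dependence, here is a hypothetical stand-in using a Gaussian copula with Poisson marginals. The paper's method is Bayesian and semiparametric, which this sketch is not; it only shows the construction being estimated:

```python
import math
import random

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def poisson_ppf(u, lam):
    """Invert the Poisson CDF by accumulating probabilities."""
    u = min(u, 1.0 - 1e-12)          # guard against u rounding up to 1.0
    k, p = 0, math.exp(-lam)
    cdf = p
    while cdf < u:
        k += 1
        p *= lam / k
        cdf += p
    return k

def gaussian_copula_markov(n, rho, lam, rng):
    """First-order Markov chain whose serial dependence is a Gaussian copula
    with parameter rho and whose marginal is Poisson(lam): a latent AR(1)
    with unit stationary variance is pushed through norm_cdf and then the
    Poisson quantile function."""
    z = rng.gauss(0.0, 1.0)
    out = []
    for _ in range(n):
        out.append(poisson_ppf(norm_cdf(z), lam))
        z = rho * z + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
    return out

series = gaussian_copula_markov(5000, 0.6, 3.0, random.Random(7))
```

Replacing `poisson_ppf` by any other quantile function changes the marginal without touching the dependence parameter `rho`, which is the separation the copula approach exploits.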
By:  Hisayuki Tsukuma (Faculty of Medicine, Toho University); Tatsuya Kubokawa (Faculty of Economics, The University of Tokyo) 
Abstract:  The problem of estimating a covariance matrix in multivariate linear regression models is addressed in a decision-theoretic framework. Although the Stein loss is a standard loss function, it is not available in the high-dimensional case. In this paper, a new type of quadratic loss function, called the intrinsic loss, is suggested, and unified dominance results are derived under this loss, irrespective of the ordering among the dimension, the sample size and the rank of the regression coefficient matrix. In particular, using the Stein-Haff identity, we develop a key inequality which is useful for constructing a truncated and improved estimator based on the information contained in the sample means or the ordinary least squares estimator of the regression coefficients. 
Date:  2014–08 
URL:  http://d.repec.org/n?u=RePEc:tky:fseres:2014cf937&r=ecm 
By:  Tatsuya Kubokawa (Faculty of Economics, The University of Tokyo); Éric Marchand (Université de Sherbrooke, Departement de mathématiques); William E. Strawderman (Rutgers University, Department of Statistics and Biostatistics,) 
Abstract:  We consider minimax shrinkage estimation of a location vector of a spherically symmetric distribution under a loss function which is a concave function of the usual squared error loss. In particular for distributions which are scale mixtures of normals (and somewhat more generally), and for concave loss functions whose derivatives are completely monotone (and somewhat more generally), we give classes of minimax shrinkage estimators where the shrinkage constants are larger than those currently in the literature. 
Date:  2014–07 
URL:  http://d.repec.org/n?u=RePEc:tky:fseres:2014cf936&r=ecm 
By:  Gian P. Cervellera; Marco P. Tucci 
Abstract:  This paper confirms that, as originally reported in Seneta (2004, p. 183), it is impossible to replicate Madan et al.'s (1998) results using log daily returns on the S&P 500 Index from January 1992 to September 1994. This failure leads to a close investigation of the computational problems associated with finding maximum likelihood estimates of the parameters of the popular VG model. Both standard econometric software, such as R, and nonstandard optimization software, such as Ezgrad described in Tucci (2002), are used. The complexity of the log-likelihood function is studied. It is shown to be very complicated, with many local optima, and incredibly sensitive to very small changes in the sample used: adding or removing a single observation may cause huge changes both in the maximum of the log-likelihood function and in the estimated parameter values. 
Keywords:  Variance-Gamma, log stock returns, maximum likelihood estimation, globally optimizing procedures 
JEL:  C58 C61 C63 
Date:  2014–10 
URL:  http://d.repec.org/n?u=RePEc:usi:wpaper:702&r=ecm 
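The pathologies reported above, many local optima and large shifts in the MLE when a single observation is deleted, are easy to reproduce in miniature. The sketch below uses a Cauchy location likelihood on two clusters of points as a toy stand-in for the VG likelihood (the data and model are ours, chosen only to exhibit the phenomenon):

```python
import math

# Two clusters of observations; the Cauchy location log-likelihood over
# this sample is bimodal, with one local optimum near each cluster.
y = [-3.0, -2.8, -3.2, 3.0, 2.9, 3.1]

def loglik(mu, data):
    # Cauchy log-likelihood in the location mu, up to an additive constant
    return -sum(math.log(1.0 + (v - mu) ** 2) for v in data)

grid = [i / 100.0 for i in range(-500, 501)]

def argmax(data):
    return max(grid, key=lambda m: loglik(m, data))

vals = [loglik(m, y) for m in grid]
local_maxima = [grid[i] for i in range(1, len(grid) - 1)
                if vals[i] > vals[i - 1] and vals[i] > vals[i + 1]]

mu_full = argmax(y)        # MLE on the full sample
mu_drop = argmax(y[:-1])   # MLE after deleting one observation
```

Here `local_maxima` contains an optimum near each cluster, and removing the single point `3.1` moves the global maximizer from one mode to the other, mirroring the extreme sample sensitivity the paper documents for the VG model.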
By:  Joo, Joonhwi (University of Chicago); LaLonde, Robert J. (Harris School, University of Chicago) 
Abstract:  This paper uses the control function to develop a framework for testing for selection bias. The idea behind our framework is that if the usual assumptions hold for matching or IV estimators, the control function identifies the presence and magnitude of potential selection bias. Averaging this correction term with respect to appropriate weights yields the degree of selection bias for alternative treatment effects of interest. One advantage of our framework is that it motivates when it is appropriate to use more efficient estimators of treatment effects, such as those based on least squares or matching. Another advantage of our approach is that it provides an estimate of the magnitude of the selection bias. We also show how this estimate can help when trying to infer program impacts for program participants not covered by LATE estimates. 
Keywords:  selection bias, program evaluation, average treatment effects 
JEL:  C21 C26 D04 
Date:  2014–09 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp8455&r=ecm 
By:  Matteo Barigozzi; Marc Hallin 
Keywords:  volatility; dynamic factor models; block structure 
JEL:  C32 
Date:  2014–11 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2013/177444&r=ecm 
By:  Qin, Duo 
Abstract:  This paper investigates the nature of the IV method for tackling endogeneity. By tracing the rise and fall of the method in macroeconometrics and its subsequent revival in microeconometrics, it pins the method down to an implicit model respecification device: breaking the circular causality of simultaneous relations by redefining it as an asymmetric one, conditioning on a non-optimal conditional expectation of the assumed endogenous explanatory variable, and thus rejecting that variable as a valid conditioning variable. The revealed nature explains why the IV route is popular for models where endogeneity is superfluous whereas measurement errors are the key concern. 
Keywords:  endogeneity, instrumental variables, simultaneity, omitted variable bias, multicollinearity 
JEL:  B23 C13 C18 C50 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:zbw:ifwedp:201442&r=ecm 
By:  C. Marsilli 
Abstract:  In short-term forecasting, it is essential to take into account all available information on the current state of economic activity. Yet the fact that various time series are sampled at different frequencies prevents an efficient use of available data. In this respect, the Mixed-Data Sampling (MIDAS) model has proved to outperform existing tools by combining data series of different frequencies. However, major issues remain regarding the choice of explanatory variables. The paper first addresses this point by developing MIDAS-based dimension reduction techniques and by introducing two novel approaches based on either a method of penalized variable selection or Bayesian stochastic search variable selection. These features integrate a cross-validation procedure that allows automatic in-sample selection based on recent forecasting performance. The developed techniques are then assessed with regard to their power to forecast US economic growth over the period 2000-2013 using daily and monthly data jointly. Our model succeeds in identifying leading indicators and constructing an objective variable selection with broad applicability. 
Keywords:  Forecasting, Mixed frequency data, MIDAS, Variable selection, GDP. 
JEL:  C53 E37 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:bfr:banfra:520&r=ecm 
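A bare-bones illustration of the MIDAS idea: high-frequency (monthly) lags are collapsed into a single low-frequency regressor through exponential Almon lag weights before a quarterly regression. The seed, parameter values and the use of plain OLS are ours, purely for illustration; the paper's models add variable selection on top of this building block:

```python
import math
import random

def exp_almon_weights(k, theta1, theta2):
    """Exponential Almon lag polynomial, normalised to sum to one -- the
    standard MIDAS weighting scheme."""
    raw = [math.exp(theta1 * j + theta2 * j * j) for j in range(k)]
    s = sum(raw)
    return [r / s for r in raw]

rng = random.Random(1)
T, k = 80, 12                        # quarters, monthly lags per quarter
w = exp_almon_weights(k, -0.1, -0.05)
monthly = [rng.gauss(0.0, 1.0) for _ in range(3 * T + k)]
X, y = [], []
for t in range(T):
    lags = monthly[3 * t : 3 * t + k][::-1]      # most recent month first
    x = sum(wi * li for wi, li in zip(w, lags))  # weighted HF regressor
    X.append(x)
    y.append(2.0 * x + rng.gauss(0.0, 0.1))      # true slope = 2
# simple OLS slope of y on the aggregated regressor
mx, my = sum(X) / T, sum(y) / T
beta = sum((a - mx) * (b - my) for a, b in zip(X, y)) / sum((a - mx) ** 2 for a in X)
```

In a full MIDAS model the weight parameters (`theta1`, `theta2`) are estimated jointly with the slope rather than fixed, and the selection step chooses among many candidate high-frequency series.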