
New Economics Papers on Econometrics 
By:  Doko Tchatoka, Firmin; Dufour, Jean-Marie 
Abstract:  We provide a generalization of the Anderson-Rubin (AR) procedure for inference on parameters which represent the dependence between possibly endogenous explanatory variables and disturbances in a linear structural equation (endogeneity parameters). We focus on second-order dependence and stress the distinction between regression and covariance endogeneity parameters. Such parameters have intrinsic interest (because they measure the effect of "common factors" which induce simultaneity) and play a central role in selecting an estimation method (because they determine "simultaneity biases" associated with least-squares methods). We observe that endogeneity parameters may not be identifiable and we give the relevant identification conditions. We develop identification-robust finite-sample tests for joint hypotheses involving structural and regression endogeneity parameters, as well as marginal hypotheses on regression endogeneity parameters. For Gaussian errors, we provide tests and confidence sets based on standard-type Fisher critical values. For a wide class of parametric non-Gaussian errors (possibly heavy-tailed), we also show that exact Monte Carlo procedures can be applied using the statistics considered. As a special case, this result also holds for usual AR-type tests on structural coefficients. For covariance endogeneity parameters, we supply an asymptotic (identification-robust) distributional theory. Tests for partial exogeneity hypotheses (for individual potentially endogenous explanatory variables) are covered as instances of the class of proposed procedures. The proposed procedures are applied to two empirical examples: the relation between trade and economic growth, and the widely studied problem of returns to education. 
Keywords:  Identification-robust confidence sets; endogeneity; AR-type statistic; projection-based techniques; partial exogeneity test 
JEL:  C3 C52 C12 C15 
Date:  2012–08–16 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:40695&r=ecm 
By:  LeYu Chen (Institute for Fiscal Studies and Academia Sinica); Jerzy Szroeter 
Abstract:  This paper proposes a class of origin-smooth approximators of indicators underlying the sum-of-negative-part statistic for testing multiple inequalities. The need for simulation or bootstrap to obtain test critical values is thereby obviated, enabling a simple procedure using fixed critical values. The test is shown to have correct asymptotic size in the uniform sense that the supremum of the finite-sample rejection probability over null-restricted data distributions tends asymptotically to the nominal significance level. This applies under weak assumptions allowing for estimator covariance singularity. The test is unbiased for a wide class of local alternatives. A new theorem establishes directions in which the test is locally most powerful. The proposed procedure is compared with predominant existing tests in structure, theory and simulation. This paper is a revised version of CWP13/09. 
Keywords:  Test, Multiple inequalities, One-sided hypothesis, Composite null, Binding constraints, Asymptotic exactness, Covariance singularity, Indicator smoothing 
JEL:  C1 C4 
Date:  2012–07 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:16/12&r=ecm 
By:  Delgado, Miguel A.; Escanciano, Juan Carlos 
Abstract:  This article proposes a nonparametric test of monotonicity for conditional distributions and their moments. Unlike previous proposals, our method does not require smooth estimation of the derivatives of nonparametric curves. Distinguishing features of our approach are that critical values are pivotal under the null in finite samples and that the test is invariant to any monotonic continuous transformation of the explanatory variable. The test statistic is the sup-norm of the difference between the empirical copula function and its least concave majorant with respect to the explanatory-variable coordinate. The resulting test is able to detect local alternatives converging to the null at the parametric rate n^(-1/2), with n the sample size. The finite-sample performance of the test is examined by means of a Monte Carlo experiment and an application to testing intergenerational income mobility. 
Keywords:  Stochastic monotonicity; Conditional moments; Least concave majorant; Copula process; Distribution-free in finite samples; Tests invariant to monotone transforms 
JEL:  C14 C15 
Date:  2012–02–13 
URL:  http://d.repec.org/n?u=RePEc:ner:carlos:info:hdl:10016/15031&r=ecm 
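The least concave majorant that drives the test statistic above can be computed with a single upper-convex-hull pass over the points; the following is a minimal sketch (not the authors' implementation), assuming sorted, strictly increasing x-values:

```python
import numpy as np

def least_concave_majorant(x, y):
    """Least concave majorant of the points (x_i, y_i), evaluated at x.
    Computed as the upper convex hull via a monotone-chain stack;
    x is assumed sorted and strictly increasing."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    hull = []  # indices of the current hull vertices
    for i in range(len(x)):
        # drop the last vertex while it lies on or below the chord
        # from the second-to-last vertex to the new point
        while len(hull) >= 2:
            i1, i2 = hull[-2], hull[-1]
            cross = (x[i2] - x[i1]) * (y[i] - y[i1]) \
                  - (x[i] - x[i1]) * (y[i2] - y[i1])
            if cross >= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    # the majorant is piecewise linear between the remaining vertices
    return np.interp(x, x[hull], y[hull])
```

In the test itself this operation is applied to the empirical copula along the explanatory-variable coordinate; the sketch only shows the majorant step.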
By:  Peter Malec; Melanie Schienle 
Abstract:  Standard fixed symmetric kernel-type density estimators are known to encounter problems for positive random variables with a large probability mass close to zero. We show that in such settings, asymmetric gamma kernel estimators are superior alternatives, but that they also differ in asymptotic and finite-sample performance depending on the shape of the density near zero and the exact form of the chosen kernel. We therefore suggest a refined version of the gamma kernel with an additional tuning parameter chosen according to the shape of the density close to the boundary. We also provide a data-driven method for the appropriate choice of the modified gamma kernel estimator. In an extensive simulation study we compare the performance of this refined estimator to standard gamma kernel estimates and to standard boundary-corrected and adjusted fixed kernels. We find that the finite-sample performance of the proposed new estimator is superior in all settings. Two empirical applications based on high-frequency stock trading volumes and realized volatility forecasts demonstrate the usefulness of the proposed methodology in practice. 
Keywords:  Kernel density estimation; boundary correction; asymmetric kernel 
JEL:  C14 C51 
Date:  2012–08 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2012047&r=ecm 
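The baseline in the entry above is the standard gamma kernel estimator of Chen (1999); a minimal sketch of that baseline (not the paper's refined estimator, which adds a further boundary tuning parameter) is:

```python
import numpy as np
from scipy.stats import gamma

def gamma_kernel_density(x_grid, data, b):
    """Chen's (1999) standard gamma kernel density estimate on [0, inf):
    f_hat(x) = (1/n) * sum_i Gamma(shape = x/b + 1, scale = b).pdf(X_i).
    The asymmetric kernel adapts its shape near the boundary at zero,
    avoiding the boundary bias of fixed symmetric kernels."""
    out = np.empty(len(x_grid))
    for j, x in enumerate(np.asarray(x_grid, dtype=float)):
        out[j] = gamma.pdf(data, a=x / b + 1.0, scale=b).mean()
    return out
```

The bandwidth `b` plays the role of the smoothing parameter; the paper's contribution is precisely how to modify the kernel shape near zero and choose it in a data-driven way.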
By:  Yong Li (Hanqing Advanced Institute of Economics and Finance, Renmin University of China); Tao Zeng (School of Economics and Sim Kee Boon Institute for Financial Economics, Singapore Management University); Jun Yu (Sim Kee Boon Institute for Financial Economics, School of Economics and Lee Kong Chian School of Business) 
Abstract:  It is shown in this paper that the data augmentation technique undermines the theoretical underpinnings of the deviance information criterion (DIC), a widely used information criterion for Bayesian model comparison, although it facilitates parameter estimation for latent variable models via Markov chain Monte Carlo (MCMC) simulation. Data augmentation makes the likelihood function non-regular and hence invalidates the standard asymptotic arguments. A new information criterion, robust DIC (RDIC), is proposed for Bayesian comparison of latent variable models. RDIC is shown to be a good approximation to DIC without data augmentation. While the latter quantity is difficult to compute, the expectation-maximization (EM) algorithm facilitates the computation of RDIC when the MCMC output is available. Moreover, RDIC is robust to nonlinear transformations of latent variables and distributional representations of model specification. The proposed approach is illustrated using several popular models in economics and finance. 
Keywords:  AIC; DIC; EM Algorithm; Latent variable models; Markov Chain Monte Carlo. 
JEL:  C11 C12 G12 
Date:  2012–08 
URL:  http://d.repec.org/n?u=RePEc:siu:wpaper:302012&r=ecm 
By:  Delgado, Miguel A.; Velasco, Carlos 
Abstract:  We propose an asymptotically distribution-free transform of the sample autocorrelations of residuals in general parametric time series models, possibly nonlinear in variables. The residual autocorrelation function is the basic model-checking tool in time series analysis, but it is not useful when its distribution is incorrectly approximated because the effects of parameter estimation and/or higher-order serial dependence have not been taken into account. The limiting distribution of the residual sample autocorrelations may be difficult to derive, particularly when the underlying innovations are uncorrelated but not independent. In contrast, our proposal is easily implemented in fairly general contexts, and the resulting transformed sample autocorrelations are asymptotically distributed as independent standard normals when innovations are uncorrelated, providing a useful and intuitive device for time series model checking in the presence of estimated parameters. We also discuss in detail alternatives to the classical Box-Pierce test, showing that our transform entails no efficiency loss under Gaussianity in the direction of MA and AR departures from the white noise hypothesis, as well as alternatives to Bartlett's Tp-process test. The finite-sample performance of the procedures is examined in a Monte Carlo experiment for the new goodness-of-fit tests discussed in the article. The proposed methodology is applied to modeling the autocovariance structure of the well-known chemical process temperature reading data already used for the illustration of other statistical procedures. Additional technical details are included in a supplemental material online. 
Keywords:  Higher-order serial dependence; Local alternatives; Long memory; Model checking; Nonlinear-in-variables models; Recursive residuals 
Date:  2012–01–24 
URL:  http://d.repec.org/n?u=RePEc:ner:carlos:info:hdl:10016/15032&r=ecm 
By:  Micha Mandel; Yosef Rinott 
Abstract:  A population that can be joined at a known sequence of discrete times is sampled cross-sectionally, and the sojourn times of individuals in the sample are observed. It is well known that cross-sectioning leads to length bias, but it is less well known that it may also result in dependence among the observations, which is often ignored. It is therefore important to understand and to account for this dependence when estimating the distribution of sojourn times in the population. In this paper, we study conditions under which observed sojourn times are independent and conditions under which treating observations as independent, using the product of marginals in spite of dependence, results in proper inference. The latter is known as the Composite Likelihood approach. We study parametric and nonparametric inference based on Composite Likelihood, and provide conditions for consistency and further asymptotic properties, including normal and non-normal distributional limits of estimators. We show that Composite Likelihood leads to good estimators under certain conditions, and illustrate that it may fail without them. The theoretical study is supported by simulations. We apply the proposed methods to two data sets collected by cross-sectional designs: data on hospitalization time after bowel and hernia surgeries, and data on service times at our university. 
Keywords:  Discrete entrance process, Length bias, Poisson cohort distribution, Truncation 
Date:  2012–07–09 
URL:  http://d.repec.org/n?u=RePEc:huj:dispap:dp614&r=ecm 
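The core phenomenon in the entry above, length bias under cross-sectional sampling, can be illustrated with a toy simulation (not the authors' estimator): a stay is intercepted by a cross-section with probability proportional to its length, so for an exponential population with mean m the observed mean is 2m.

```python
import numpy as np

rng = np.random.default_rng(0)
# population sojourn times (e.g. hospitalization lengths), exponential with mean 2
pop = rng.exponential(2.0, size=100_000)
# a cross-section intercepts a stay with probability proportional to its
# length, so resample with weights proportional to the sojourn time
observed = rng.choice(pop, size=30_000, p=pop / pop.sum())
# for an exponential population with mean m, the length-biased mean is
# E[X^2]/E[X] = 2m, i.e. 4 here
print(pop.mean(), observed.mean())
```

The paper's further point is that cross-sectioning can also induce dependence among the observed sojourn times, which this marginal illustration does not show.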
By:  Tatsuya Kubokawa (Faculty of Economics, University of Tokyo); Akira Inoue (Graduate School of Economics, University of Tokyo) 
Abstract:  The problem of estimating covariance and precision matrices of multivariate normal distributions is addressed when both the sample size and the dimension of the variables are large. Estimation of the precision matrix is important in various statistical inference problems, including Fisher linear discriminant analysis and confidence regions based on the Mahalanobis distance. A standard estimator is the inverse of the sample covariance matrix, but it may be unstable or undefined in high dimensions. (Adaptive) ridge-type estimators are useful and stable alternatives for large dimensions, but questions remain about how to choose the ridge parameters and their estimators, and how to set up the asymptotic order of the ridge functions in high-dimensional cases. In this paper, we consider general types of ridge estimators for covariance and precision matrices, and derive asymptotic expansions of their risk functions. We then suggest ridge functions for which the second-order terms of the risks of the ridge estimators are smaller than those of the standard estimators. 
Date:  2012–07 
URL:  http://d.repec.org/n?u=RePEc:tky:fseres:2012cf855&r=ecm 
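A generic ridge-type precision estimator of the kind discussed above can be sketched as follows; this is a minimal illustration with a user-chosen constant `lam`, whereas the paper's contribution is the choice and asymptotic order of the ridge function itself:

```python
import numpy as np

def ridge_precision(X, lam):
    """Generic ridge-type precision-matrix estimate inv(S + lam * I).
    S is the (MLE-scale) sample covariance of the n x p data matrix X;
    adding lam * I keeps the inverse well defined even when p is close
    to or exceeds n, where S itself is singular."""
    n, p = X.shape
    S = np.cov(X, rowvar=False, bias=True)
    return np.linalg.inv(S + lam * np.eye(p))
```

When p > n the plain inverse of S does not exist, while the ridge version above always does; the open question the paper addresses is how lam should scale with n and p.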
By:  Rosen Azad Chowdhury; Bill Russell 
Abstract:  The effects of structural breaks in dynamic panels are more complicated than in time series models, as the bias can be either negative or positive. This paper focuses on the effects of mean shifts in otherwise stationary processes within an instrumental variable panel estimation framework. We show the sources of the bias, and a Monte Carlo analysis calibrated on United States bank lending data demonstrates the size of the bias for a range of autoregressive parameters. We also propose additional moment conditions that can be used to reduce the biases caused by shifts in the mean of the data. 
Keywords:  Dynamic panel estimators, mean shifts/structural breaks, Monte Carlo Simulation 
JEL:  C23 C22 C26 
Date:  2012–06 
URL:  http://d.repec.org/n?u=RePEc:dun:dpaper:268&r=ecm 
By:  Tirthankar Chakravarty (Department of Economics, UC San Diego) 
Abstract:  IV estimators of parameters in single-equation structural models, like 2SLS and LIML, are among the most commonly used econometric estimators. Hausman-type tests are commonly used to choose between OLS and IV estimators. However, recent research has revealed troublesome size properties of Wald tests based on these pretest estimators. These problems can be circumvented by the use of shrinkage estimators, particularly James-Stein estimators. We introduce the ivshrink command, which encompasses nearly 20 distinct variants of the shrinkage-type estimators proposed in the econometrics literature on the basis of optimal risk properties, including fixed (k-class estimators are a special case) and data-dependent shrinkage estimators (random convex combinations of OLS and IV estimators, for example). Analytical standard errors, to be used in Wald-type tests, are provided where appropriate, and bootstrap standard errors are reported otherwise. Where the variance-covariance matrices of the resulting estimators are expected to be degenerate, options for matrix-norm regularization are also provided. We illustrate the techniques using a widely used dataset in the econometric literature. 
Date:  2012–08–01 
URL:  http://d.repec.org/n?u=RePEc:boc:scon12:22&r=ecm 
By:  Zhao, Yunfei; Marsh, Thomas L.; Li, Huixin 
Abstract:  This study makes an empirical comparison of estimators for censored equations using Monte Carlo simulation. The underlying data generation process is rarely known in practice. From the viewpoint of regression, both the ordinary censoring rule and the sample selection rule are plausible censoring mechanisms, and a mixed censoring rule may also govern the underlying data generation process. It is therefore valuable to examine whether estimators are robust to variations in the assumed censoring rule. Five estimators are examined: for ordinary censoring rules, the method of simulated scores, Bayesian estimation, and expectation maximization; for sample selection rules, the multivariate Heckman two-step method and the Shonkwiler-Yen two-step method. We generally find a substantial difference in the performance of the estimators, so the choice of estimator appears to be important. Despite these differences in performance, estimates from all procedures are reasonably close to the true parameter values. 
Keywords:  Monte Carlo Simulation, Method of Simulated Scores, Bayesian Estimation, Expectation Maximization, Two-Step Estimation, Consumer/Household Economics, Demand and Price Analysis, Research Methods/Statistical Methods 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:ags:aaea12:129166&r=ecm 
By:  Tatiana Komarova 
Abstract:  In semiparametric binary response models, support conditions on the regressors are required to guarantee point identification of the parameter of interest. For example, one regressor is usually assumed to have continuous support conditional on the other regressors. In some instances, such conditions have precluded the use of these models; in others, practitioners have failed to consider whether the conditions are satisfied in their data. This paper explores the inferential question in these semiparametric models when the continuous support condition is not satisfied and all regressors have discrete support. I suggest a recursive procedure that finds sharp bounds on the components of the parameter of interest and outline several applications, focusing mainly on the models under the conditional median restriction, as in Manski (1985). After deriving closed-form bounds on the components of the parameter, I show how these formulas can help analyze cases where one regressor's support becomes increasingly dense. Furthermore, I investigate asymptotic properties of estimators of the identification set. I describe a relation between the maximum score estimation and support vector machines and also propose several approaches to address the problem of empty identification sets when a model is misspecified. Finally, I present a Monte Carlo experiment and an empirical illustration to compare several estimation techniques. 
Keywords:  Binary response models, Discrete regressors, Partial identification, Misspecification, Support vector machines 
JEL:  C2 C10 C14 C25 
Date:  2012–05 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:em/2012/559&r=ecm 
By:  Jakob Söhl; Mathias Trabs 
Abstract:  We obtain a uniform central limit theorem with root-n rate under the assumption that the smoothness of the functionals is larger than the ill-posedness of the problem, which is given by the polynomial decay rate of the characteristic function of the error. The limit distribution is a generalized Brownian bridge with a covariance structure that depends on the characteristic function of the error and on the functionals. The proposed estimators are optimal in the sense of semiparametric efficiency. The class of linear functionals is wide enough to incorporate the estimation of distribution functions. The proofs are based on smoothed empirical processes and mapping properties of the deconvolution operator. 
Keywords:  Deconvolution, Donsker theorem, Efficiency, Distribution function, Smoothed empirical processes, Fourier multiplier 
JEL:  C14 
Date:  2012–07 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2012046&r=ecm 
By:  Geert Mesters (Netherlands Institute for the Study of Crime and Law Enforcement, and VU University Amsterdam); Siem Jan Koopman (VU University Amsterdam) 
Abstract:  An exact maximum likelihood method is developed for the estimation of parameters in a nonlinear non-Gaussian dynamic panel data model with unobserved random individual-specific and time-varying effects. We propose an estimation procedure based on the importance sampling technique. In particular, a sequence of conditional importance densities is derived which integrates out all random effects from the joint distribution of the endogenous variables. We disentangle the integration over both the cross-section and the time series dimensions. The estimation method facilitates the flexible modeling of large panels in both dimensions. We evaluate the method in a Monte Carlo study for dynamic panel data models with observations from the Student's t distribution. We finally present an extensive empirical study of the interrelationships between the economic growth figures of countries listed in the Penn World Tables. It is shown that our dynamic panel data model can provide an insightful analysis of common and heterogeneous features in worldwide economic growth. 
Keywords:  Panel data; NonGaussian; Importance sampling; Random effects; Student's t; Economic growth 
JEL:  C33 C51 F44 
Date:  2012–02–06 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20120009&r=ecm 
By:  Joshua C.C. Chan; Justin L. Tobias 
Abstract:  Estimation in models with endogeneity concerns typically begins by searching for instruments. This search is inherently subjective and identification is generally achieved upon imposing the researcher's strong prior belief that such variables have no conditional impacts on the outcome. Results obtained from such analyses are necessarily conditioned upon the untestable opinions of the researcher, and such beliefs may not be widely shared. In this paper we, like several studies in the recent literature, employ a Bayesian approach to estimation and inference in models with endogeneity concerns by imposing weaker prior assumptions than complete excludability. When allowing for instrument imperfection of this type, the model is only partially identified, and as a consequence, standard estimates obtained from the Gibbs simulations can be unacceptably imprecise. We thus describe a substantially improved "semi-analytic" method for calculating parameter marginal posteriors of interest that only requires use of the well-mixing simulations associated with the identifiable model parameters and the form of the conditional prior. Our methods are also applied in an illustrative application involving the impact of Body Mass Index (BMI) on earnings. 
JEL:  C11 I10 J11 
Date:  2012–08 
URL:  http://d.repec.org/n?u=RePEc:acb:cbeeco:2012580&r=ecm 
By:  Joel Horowitz (Institute for Fiscal Studies and Northwestern University); Jian Huang 
Abstract:  We consider estimation of a linear or nonparametric additive model in which a few coefficients or additive components are "large" and may be objects of substantive interest, whereas others are "small" but not necessarily zero. The number of small coefficients or additive components may exceed the sample size. It is not known which coefficients or components are large and which are small. The large coefficients or additive components can be estimated with a smaller mean-square error or integrated mean-square error if the small ones can be identified and the covariates associated with them dropped from the model. We give conditions under which several penalized least squares procedures distinguish correctly between large and small coefficients or additive components with probability approaching 1 as the sample size increases. The results of Monte Carlo experiments and an empirical example illustrate the benefits of our methods. 
Date:  2012–07 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:17/12&r=ecm 
By:  Enno Mammen; Byeong U. Park; Melanie Schienle 
Abstract:  We give an overview of smooth backfitting type estimators in additive models. Moreover, we illustrate their wide applicability in models closely related to additive models, such as nonparametric regression with dependent error variables where the errors can be transformed to white noise by a linear transformation, nonparametric regression with repeatedly measured data, nonparametric panels with fixed effects, simultaneous nonparametric equation models, and non- and semiparametric autoregression and GARCH models. We also discuss extensions to varying coefficient models, additive models with missing observations, and the case of nonstationary covariates. 
Keywords:  smooth backfitting, additive models 
JEL:  C14 C30 
Date:  2012–07 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2012045&r=ecm 
By:  Karlsson, Sune (Department of Business, Economics, Statistics and Informatics) 
Abstract:  Prepared for the Handbook of Economic Forecasting, vol. 2. This chapter reviews Bayesian methods for inference and forecasting with VAR models. Bayesian inference and, by extension, forecasting depend on numerical methods for simulating from the posterior distribution of the parameters, and special attention is given to the implementation of the simulation algorithm. 
Keywords:  Markov chain Monte Carlo; Structural VAR; Cointegration; Conditional forecasts; Time-varying parameters; Stochastic volatility; Model selection; Large VAR 
JEL:  C11 C32 C53 
Date:  2012–08–04 
URL:  http://d.repec.org/n?u=RePEc:hhs:oruesi:2012_012&r=ecm 
By:  Philip Hans Franses (Erasmus University Rotterdam); Rianne Legerstee (Erasmus University Rotterdam); Richard Paap (Erasmus University Rotterdam) 
Abstract:  We propose a new and simple methodology to estimate the loss function associated with experts' forecasts. Under the assumption of conditional normality of the data and the forecast distribution, the asymmetry parameter of the lin-lin and linex loss functions can easily be estimated using a linear regression. This regression also provides an estimate of potential systematic bias in the forecasts of the expert. The residuals of the regression are the input for a test of the validity of the normality assumption. We apply our approach to a large data set of SKU-level sales forecasts made by experts, and we compare the outcomes with those for statistical model-based forecasts of the same sales data. We find substantial evidence for asymmetry in the loss functions of the experts, with underprediction penalized more than overprediction. 
Keywords:  model forecasts; expert forecasts; loss functions; asymmetry; econometric models 
JEL:  C50 C53 
Date:  2011–12–16 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20110177&r=ecm 
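One stylized reading of the regression idea above (a sketch under assumptions added here, not the authors' exact specification): under lin-lin loss with asymmetry tau and conditional normality, the optimal forecast is the conditional tau-quantile mu + sigma * z_tau, so regressing forecast errors on an estimate of the conditional standard deviation recovers z_tau as the slope, while the intercept picks up systematic bias.

```python
import numpy as np
from scipy.stats import norm

def estimate_linlin_asymmetry(forecasts, actuals, sigma_hat):
    """Stylized sketch: under lin-lin loss with asymmetry tau and
    conditional normality, errors = forecast - actual satisfy
    E[errors | sigma] = bias + sigma * z_tau with z_tau = Phi^{-1}(tau).
    OLS of errors on [1, sigma_hat] therefore yields the bias and z_tau;
    tau = Phi(z_tau). Variable names here are illustrative."""
    errors = np.asarray(forecasts) - np.asarray(actuals)
    X = np.column_stack([np.ones_like(sigma_hat), sigma_hat])
    bias, z_tau = np.linalg.lstsq(X, errors, rcond=None)[0]
    return norm.cdf(z_tau), bias
```

This requires an external estimate of the conditional standard deviation, which in the paper's setting would come from a model for the data.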
By:  Matteo Barigozzi (London School of Economics); Roxana Halbleib (University of Konstanz); David Veredas (Université Libre de Bruxelles) 
Abstract:  The asymptotic efficiency of indirect estimation methods, such as the efficient method of moments and indirect inference, depends on the choice of the auxiliary model. To date, this choice has been somewhat ad hoc and based on an educated guess. In this article we introduce a class of information criteria that helps the user to optimize the choice between nested and non-nested auxiliary models. They are the indirect analogues of the widely used Akaike-type criteria. A thorough Monte Carlo study based on two simple and illustrative models shows the usefulness of the criteria. 
Keywords:  Indirect inference, efficient method of moments, auxiliary model, information criteria, asymptotic efficiency 
JEL:  C13 C52 
Date:  2012–08 
URL:  http://d.repec.org/n?u=RePEc:bde:wpaper:1229&r=ecm 
By:  Siem Jan Koopman (VU University Amsterdam); Thuy Minh Nguyen (Deutsche Bank, London) 
Abstract:  We show that efficient importance sampling for nonlinear non-Gaussian state space models can be implemented by computationally efficient Kalman filter and smoothing methods. The result provides some new insights, but it primarily leads to a simple and fast method for efficient importance sampling. A simulation study and an empirical illustration provide some evidence of the computational gains. 
Keywords:  Kalman filter; Monte Carlo maximum likelihood; Simulation smoothing 
JEL:  C32 C51 
Date:  2012–01–12 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20120008&r=ecm 
By:  Edward C. Norton 
Abstract:  Although independent unobserved heterogeneity (variables that affect the dependent variable but are independent of the other explanatory variables of interest) does not affect the point estimates or marginal effects in least squares regression, it does affect point estimates in nonlinear models such as logit and probit models. In these nonlinear models, independent unobserved heterogeneity changes the arbitrary normalization of the coefficients through the error variance. Therefore, any statistics derived from the estimated coefficients change when additional, seemingly irrelevant, variables are added to the model. There is no one odds ratio; each odds ratio estimated in a multivariate model is conditional on the data and model, in a way that makes comparisons with other results difficult or impossible. This paper provides new Monte Carlo and graphical insights into why this is true, and new understanding of how to interpret fixed effects models, including case-control studies. Marginal effects are largely unaffected by unobserved heterogeneity in both linear regression and nonlinear models, including logit and probit and their multinomial and ordered extensions. 
JEL:  C25 I19 
Date:  2012–07 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:18252&r=ecm 
By:  Zhu, Ke 
Abstract:  This paper investigates the joint limiting distribution of the residual autocorrelation functions and the absolute residual autocorrelation functions of the ARMA-GARCH model. This leads to a mixed portmanteau test for diagnostic checking of ARMA-GARCH models fitted by the quasi-maximum exponential likelihood estimation approach of Zhu and Ling (2011). Simulation studies are carried out to examine our asymptotic theory and to assess the performance of this mixed test and of two other portmanteau tests in Li and Li (2008). A real example is given. 
Keywords:  ARMA-GARCH model; LAD estimator; mixed portmanteau test; model diagnostics; quasi-maximum exponential likelihood estimator 
JEL:  C10 C52 
Date:  2012–07–31 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:40382&r=ecm 
By:  Patrick E. McCabe; Marco Cipriani; Michael Holscher; Antoine Martin 
Abstract:  This paper advances the theory and methodology of signal extraction by introducing asymptotic and finite-sample formulas for optimal estimators of signals in nonstationary multivariate time series. Previous literature has considered only univariate or stationary models. However, in current practice and research, econometricians, macroeconomists, and policymakers often combine related series, which may have stochastic trends, to attain more informed assessments of basic signals like underlying inflation and business cycle components. Here, we use a very general model structure, of widespread relevance for time series econometrics, including flexible kinds of nonstationarity and correlation patterns and specific relationships like cointegration and other common factor forms. First, we develop and prove the generalization of the well-known Wiener-Kolmogorov formula that maps signal-noise dynamics into optimal estimators for bi-infinite series. Second, this paper gives the first explicit treatment of finite-length multivariate time series, providing a new method for computing signal vectors at any time point, unrelated to Kalman filter techniques; this opens the door to systematic study of near-endpoint estimators/filters, by revealing how they jointly depend on a function of signal location and parameters. As an illustration, we present econometric measures of the trend in total inflation that make optimal use of the signal content in core inflation. 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:fip:fedgfe:201247&r=ecm 
By:  Bartolucci, Francesco; Lupparelli, Monia 
Abstract:  In the context of multilevel longitudinal data, where sample units are collected in clusters, an important aspect that should be accounted for is the unobserved heterogeneity between sample units and between clusters. To this end, we propose an approach based on nested hidden (latent) Markov chains, which are associated with each sample unit and each cluster. The approach allows us to account for these forms of unobserved heterogeneity in a dynamic fashion; it also allows us to account for the correlation that may arise between the responses provided by units belonging to the same cluster. Given the complexity of computing the manifest distribution of the response variables, we make inference on the proposed model through a composite likelihood function based on all possible pairs of subjects within every cluster. The proposed approach is illustrated through an application to a dataset concerning a sample of Italian workers, in which a binary response variable indicating whether the worker received an illness benefit was repeatedly observed. 
Keywords:  composite likelihood; EM algorithm; latent Markov model; pairwise likelihood 
JEL:  C10 C33 
Date:  2012–08–09 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:40588&r=ecm 
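The pairwise composite-likelihood idea above can be sketched in a few lines: instead of the intractable full manifest distribution, one sums a log-likelihood contribution over every pair of units within each cluster. This is a generic illustration of the mechanics, not the authors' latent Markov implementation; `toy_pair_loglik` is a hypothetical bivariate-Gaussian-style contribution invented for the example.

```python
from itertools import combinations

def composite_loglik(clusters, pair_loglik):
    """Composite (pairwise) log-likelihood: sum a bivariate contribution
    over every pair of units within each cluster."""
    total = 0.0
    for cluster in clusters:
        for y_i, y_j in combinations(cluster, 2):
            total += pair_loglik(y_i, y_j)
    return total

def toy_pair_loglik(y_i, y_j, rho=0.5):
    """Hypothetical pair contribution (bivariate-Gaussian kernel shape),
    rewarding similar responses within a cluster; for illustration only."""
    return -0.5 * (y_i**2 + y_j**2 - 2.0 * rho * y_i * y_j) / (1.0 - rho**2)

# Two clusters of sizes 3 and 2 -> 3 + 1 = 4 pair contributions in total.
clusters = [[0.1, -0.2, 0.4], [1.0, 0.9]]
value = composite_loglik(clusters, toy_pair_loglik)
```

In practice each pair term would be the marginal likelihood of that pair of responses under the nested hidden Markov model, and the sum is maximized (e.g. by EM) in place of the full likelihood.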
By:  Adam E Clements (QUT); Mark Doolan (QUT); Stan Hurn (QUT); Ralf Becker (University of Manchester) 
Abstract:  Techniques for evaluating and selecting multivariate volatility forecasts are not yet as well understood as their univariate counterparts. This paper considers the ability of different loss functions to discriminate between a competing set of forecasting models which are subsequently applied in a portfolio allocation context. It is found that a likelihood-based loss function outperforms its competitors, including those based on the given portfolio application. This result indicates that the particular application of forecasts is not necessarily the most effective criterion by which to select models. 
Keywords:  Multivariate volatility, portfolio allocation, forecast evaluation, model selection, model confidence set 
JEL:  C22 G00 
Date:  2012–08–09 
URL:  http://d.repec.org/n?u=RePEc:qut:auncer:2012_8&r=ecm 
By:  Javier Hualde (Departamento de EconomíaUPNA) 
Abstract:  This paper proposes an estimator of the cointegrating rank of a potentially cointegrated multivariate fractional process. Our setting is very flexible, allowing the individual observable processes to have different integration orders. The proposed method is automatic and can also be employed to infer the dimensions of possible cointegrating subspaces, which are characterized by special directions in the cointegrating space that generate cointegrating errors with smaller integration orders, increasing the “achievement” of the cointegration analysis. A Monte Carlo experiment of finite sample performance and an empirical analysis are included. 
Keywords:  fractional integration, cointegrating rank, cointegrating space and subspaces. 
JEL:  C32 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:nav:ecupna:1205&r=ecm 
By:  Falk Brauning (VU University Amsterdam); Siem Jan Koopman (VU University Amsterdam) 
Abstract:  We explore a new approach to the forecasting of macroeconomic variables based on a dynamic factor state space analysis. Key economic variables are modeled jointly with principal components from a large time series panel of macroeconomic indicators using a multivariate unobserved components time series model. When the key economic variables are observed at a low frequency and the panel of macroeconomic variables is at a high frequency, we can use our approach for both nowcasting and forecasting purposes. Given a dynamic factor model as the data generation process, we provide Monte Carlo evidence for the finite-sample justification of our parsimonious and feasible approach. We also provide empirical evidence for a U.S. macroeconomic dataset. The unbalanced panel contains quarterly and monthly variables. The forecasting accuracy is measured against a set of benchmark models. We conclude that our dynamic factor state space analysis can lead to higher forecasting precision when the panel size and time series dimensions are moderate. 
Keywords:  Kalman filter; Mixed frequency; Nowcasting; Principal components; State space model; Unobserved Components Time Series Model 
JEL:  C33 C53 E17 
Date:  2012–04–20 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20120042&r=ecm 
By:  Joachim Freyberger 
Abstract:  This paper develops asymptotic theory for estimated parameters in differentiated product demand systems with a fixed number of products, as the number of markets T increases, taking into account that the market shares are approximated by Monte Carlo integration. It is shown that the estimated parameters are √T-consistent and asymptotically normal as long as the number of simulations R grows fast enough relative to T. Monte Carlo integration induces both additional variance and additional bias terms in the asymptotic expansion of the estimator. If R does not increase as fast as T, the leading bias term dominates the leading variance term and the asymptotic distribution might not be centered at 0. This paper suggests methods to eliminate the leading bias term from the asymptotic expansion. Furthermore, an adjustment to the asymptotic variance is proposed that takes the leading variance term into account. Monte Carlo results show that these adjustments, which are easy to compute, should be used in applications to avoid severe undercoverage caused by the simulation error. 
Keywords:  Asymptotic theory 
Date:  2012–08 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:19/12&r=ecm 
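The role of the number of draws R can be illustrated with a minimal sketch of Monte Carlo integration of a choice probability: a binary logit share with a normally distributed random coefficient is approximated by averaging logit probabilities over R draws, so simulation error shrinks as R grows. This is a toy binary-choice version with invented parameter names, not the paper's full demand system or its bias corrections.

```python
import math
import random

def simulated_share(delta, sigma, R, rng):
    """Approximate the market share of a product with mean utility `delta`
    and a N(0, sigma^2) random taste shock, against an outside option with
    utility 0, by averaging logit probabilities over R simulation draws."""
    total = 0.0
    for _ in range(R):
        u = delta + sigma * rng.gauss(0.0, 1.0)
        total += math.exp(u) / (1.0 + math.exp(u))
    return total / R

# With sigma = 0 there is nothing to integrate and the simulated share
# equals the closed-form logit probability exactly, regardless of R.
share = simulated_share(0.5, 1.0, 500, random.Random(42))
```

The simulation noise in `share` has variance of order 1/R; the paper's point is that the induced bias and variance terms must be accounted for in the estimator's asymptotic distribution when R grows slowly relative to T.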
By:  Insan Tunali (Department of Economics, Koç University); Emre Ekinci (Department of Business Administration, Universidad Carlos III de Madrid); Berk Yavuzoglu (Department of Economics, University of WisconsinMadison) 
Abstract:  We modify the Additively Nonignorable (AN) model of Hirano et al. (2001) so that it is suitable for data collection efforts that have a short panel component. Our modification yields a convenient semiparametric bias correction framework for handling the endogenous attrition and substitution behavior that can emerge when multiple visits to the same unit are planned. We apply our methodology to data from the Household Labor Force Survey (HLFS) in Turkey, which shares a key design feature (namely a rotating sample frame) with popular surveys such as the Current Population Survey and the European Union Labor Force Survey. The correction amounts to adjusting the observed joint distribution over the state space using reflation factors expressed as parametric functions of the states occupied in subsequent rounds. Unlike standard weighting schemes, our method produces a unique set of corrected joint probabilities that are consistent with the margins used for computing the published cross-section statistics. Inference about the nature of the bias is implemented via bootstrap methods. Our empirical results show that attrition/substitution in the HLFS is a statistically and substantively important concern. 
Keywords:  attrition; substitution; selectivity; short panel; rotating sample frame; labor force survey. 
Date:  2012–07 
URL:  http://d.repec.org/n?u=RePEc:koc:wpaper:1220&r=ecm 
By:  Olesen, Ole B. (Department of Business and Economics) 
Abstract:  The assumption of a homothetic production function is often maintained in production economics. In this paper we explore the possibility of maintaining homotheticity within a nonparametric DEA framework. The main contribution of this paper is to use the approach suggested by Hanoch and Rothschild (1972) to define a homothetic reference technology. We focus on the largest subset of data points that is consistent with such a homothetic production function. We use the HR approach to define a piecewise linear homothetic convex reference technology. We propose this reference technology with the purpose of adding structure to the flexible nonparametric BCC DEA estimator, and provide motivation for why such additional structure is sometimes warranted. An estimation procedure derived from the BCC model and from a maintained assumption of homotheticity is proposed. The performance of the estimator is analyzed using simulation. 
Keywords:  Data Envelopment Analysis (DEA); DEA based estimation of a homothetic production function; linear programming; convex hull estimation; isoquant estimation; polyhedral sets in intersection and sum form 
JEL:  C00 C40 C60 
Date:  2012–08–14 
URL:  http://d.repec.org/n?u=RePEc:hhs:sdueko:2012_014&r=ecm 
By:  Arie ten Cate 
Abstract:  Mirror data are observations of bilateral variables such as trade from one country to another, reported by both countries. The efficient estimation of a bilateral variable from its mirror data, for example when compiling consistent international trade statistics, requires information about the accuracy of the reporters. This paper discusses the simultaneous estimation of the accuracy of multiple reporters from all mirror data. This requires a model with an identification restriction. Two models are presented, each with the same simple kind of identifying restriction. The inadequate treatment of this restriction in the literature might be an explanation for the limited presence of integrated international statistics. 
JEL:  C82 
Date:  2012–08 
URL:  http://d.repec.org/n?u=RePEc:cpb:discus:216&r=ecm 
By:  Francisco Blasques (VU University Amsterdam); Siem Jan Koopman (VU University Amsterdam); Andre Lucas (VU University Amsterdam) 
Abstract:  We characterize the dynamic properties of Generalized Autoregressive Score (GAS) processes by identifying regions of the parameter space that imply stationarity and ergodicity. We show how these regions are affected by the choice of parameterization and scaling, which are key features of GAS models compared to other observation driven models. The Dudley entropy integral is used to ensure the nondegeneracy of such regions. Furthermore, we show how to obtain bounds for these regions in models for timevarying means, variances, or higherorder moments. 
Keywords:  Dudley integral; Durations; Higherorder models; Nonlinear dynamics; Timevarying parameters; Volatility 
JEL:  C13 C22 C58 
Date:  2012–06–22 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20120059&r=ecm 
By:  Taeyoung Doh; Michael Connolly 
Abstract:  To capture the evolving relationship between multiple economic variables, time variation in either coefficients or volatility is often incorporated into vector autoregressions (VARs). However, allowing time variation in coefficients or volatility without restrictions on their dynamic behavior can increase the number of parameters too much, making the estimation of such a model practically infeasible. For this reason, researchers typically assume that time-varying coefficients or volatility are not directly observed but follow random processes which can be characterized by a few parameters. The state space representation that links the transition of possibly unobserved state variables with observed variables is a useful tool for estimating VARs with time-varying coefficients or stochastic volatility. In this paper, we discuss how to estimate VARs with time-varying coefficients or stochastic volatility using the state space representation. We focus on Bayesian estimation methods, which have become popular in the literature. As an illustration of the estimation methodology, we estimate a time-varying parameter VAR with stochastic volatility on three U.S. macroeconomic variables: inflation, unemployment, and the long-term interest rate. Our empirical analysis suggests that the recession of 2007–2009 was driven by a particularly bad shock to the unemployment rate which increased its trend and volatility substantially. In contrast, the impacts of the recession on the trend and volatility of nominal variables such as the core PCE inflation rate and the ten-year Treasury bond yield are less noticeable. 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:fip:fedkrw:rwp1204&r=ecm 
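The state space machinery the abstract refers to can be illustrated with the simplest possible case: a univariate local-level model filtered by the Kalman recursions (predict, then gain-weighted update). This is a minimal sketch of the filter itself, not the Bayesian TVP-VAR estimation in the paper; all parameter names are illustrative.

```python
def kalman_filter(y, q, r, a0=0.0, p0=1e6):
    """Univariate local-level Kalman filter: the state follows a random walk
    with innovation variance q, and each observation equals the state plus
    noise with variance r. Returns the filtered state estimates.
    The diffuse-ish prior (large p0) lets the first observation dominate."""
    a, p = a0, p0
    filtered = []
    for obs in y:
        p = p + q                  # prediction: state variance grows by q
        k = p / (p + r)            # Kalman gain
        a = a + k * (obs - a)      # update state toward the observation
        p = (1.0 - k) * p          # updated state variance
        filtered.append(a)
    return filtered

# On a constant series the filtered state converges to that constant.
f = kalman_filter([5.0] * 50, q=0.01, r=1.0)
```

In the VAR setting of the paper the same predict/update logic runs on vector-valued states (the time-varying coefficients or log-volatilities) inside a Bayesian sampler.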
By:  Ivan Canay (Institute for Fiscal Studies and Northwestern University); Andres Santos; Azeem Shaikh (Institute for Fiscal Studies and University of Chicago) 
Abstract:  This paper examines three distinct hypothesis testing problems that arise in the context of identification of some nonparametric models with endogeneity. The first hypothesis testing problem we study concerns testing necessary conditions for identification in some nonparametric models with endogeneity involving mean independence restrictions. These conditions are typically referred to as completeness conditions. The second and third hypothesis testing problems we examine concern testing for identification directly in some nonparametric models with endogeneity involving quantile independence restrictions. For each of these hypothesis testing problems, we provide conditions under which any test will have power no greater than size against any alternative. In this sense, we conclude that no nontrivial tests for these hypothesis testing problems exist. 
Date:  2012–07 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:18/12&r=ecm 
By:  Carlos Carrion; Nebiyou Tilahun; David Levinson (Nexus (Networks, Economics, and Urban Systems) Research Group, Department of Civil Engineering, University of Minnesota) 
Abstract:  Monte Carlo experiments are used to study the unbiasedness of several common random utility models for a proposed adaptive stated preference survey. This survey is used to study the influence of knowledge of existing mode shares on travelers' mode choice. Furthermore, the survey is applied to a sample of subjects selected from the University of Minnesota. The results indicate that the presence of mode shares in the mode choice model does influence travelers' decisions. The Monte Carlo experiments, however, find the estimates to be biased. 
Keywords:  mode choice, mode shares, mixed logit, stated preference. 
JEL:  R41 C33 C35 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:nex:wpaper:mcasp&r=ecm 
By:  Sule Alan (Koc University and University of Cambridge); Kadir Atalay (University of Sydney); Thomas F. Crossley (Koc University, University of Cambridge and Institute for Fiscal Studies, London) 
Abstract:  First order conditions from the dynamic optimization problems of consumers and firms are important tools in empirical macroeconomics. When estimated on microdata, these equations are typically linearized so that standard IV or GMM methods can be employed to deal with the measurement error that is endemic to survey data. However, it has recently been argued that the approximation bias induced by linearization may be worse than the problems that linearization is intended to solve. This paper explores this issue in the context of consumption Euler equations. These equations form the basis of estimates of key macroeconomic parameters: the elasticity of intertemporal substitution (EIS) and relative prudence. We numerically solve and simulate six different life-cycle models, and then use the simulated data as the basis for a series of Monte Carlo experiments in which we consider the validity and relevance of conventional instruments, the consequences of different data sampling schemes, and the effectiveness of alternative estimation strategies. The first-order Euler equation leads to biased estimates of the EIS, but that bias is perhaps not too large when there is a sufficient time dimension to the data and sufficient variation in interest rates. A sufficient time dimension can only realistically be achieved with a synthetic cohort. Estimates are unlikely to be very precise. Bias will be worse the more impatient agents are. The second-order Euler equation suffers from a weak instrument problem and offers no advantage over the first-order approximation. 
Keywords:  Euler Equations, Measurement Error, Instrumental Variables, GMM. 
JEL:  E21 C20 
Date:  2012–08 
URL:  http://d.repec.org/n?u=RePEc:koc:wpaper:1221&r=ecm 
By:  Julieta Fuentes; Pilar Poncela; Julio Rodríguez 
Abstract:  Factor models have been applied extensively for forecasting when high dimensional datasets are available. In this case, the number of variables can be very large. For instance, usual dynamic factor models in central banks handle over 100 variables. However, there is a growing body of literature indicating that more variables do not necessarily lead to estimated factors with lower uncertainty or better forecasting results. This paper investigates the usefulness of partial least squares techniques, which take into account the variable to be forecasted when reducing the dimension of the problem from a large number of variables to a smaller number of factors. We propose different dynamic sparse partial least squares approaches as a means of improving forecast efficiency by simultaneously taking into account the forecasted variable while forming an informative subset of predictors, instead of using all the available ones to extract the factors. We use the well-known Stock and Watson database to check the forecasting performance of our approach. The proposed dynamic sparse models perform well, improving efficiency compared to factor methods widely used in macroeconomic forecasting. 
Keywords:  Factor Models, Forecasting, Large Datasets, Partial Least Squares, Sparsity, Variable Selection 
Date:  2012–08 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws122216&r=ecm 
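The contrast with principal components can be made concrete with a one-component PLS sketch: the factor weights are proportional to each predictor's covariance with the target, so the forecast target shapes the dimension reduction (principal components, by contrast, ignore the target). A minimal static illustration assuming NumPy, not the authors' dynamic sparse procedure:

```python
import numpy as np

def pls1_forecast(X, y, x_new):
    """One-component PLS forecast: build a single factor whose weights are
    proportional to cov(X, y), then regress the target on that factor."""
    x_mean = X.mean(axis=0)
    Xc = X - x_mean                   # center predictors
    yc = y - y.mean()                 # center target
    w = Xc.T @ yc                     # weights proportional to cov(X, y)
    w = w / np.linalg.norm(w)
    t = Xc @ w                        # factor scores
    beta = (t @ yc) / (t @ t)         # OLS of target on the factor
    return y.mean() + beta * ((x_new - x_mean) @ w)
```

With a single predictor the factor is just that predictor, so a target that is exactly linear in it is forecast without error; with many predictors the PLS factor tilts toward the ones informative about the target.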
By:  Jozef Baruník; Nikhil Shenai; Filip Žikeš 
Abstract:  This paper introduces the Markov-Switching Multifractal Duration (MSMD) model by adapting the MSM stochastic volatility model of Calvet and Fisher (2004) to the duration setting. Although the MSMD process is exponential β-mixing, as we show in the paper, it is capable of generating highly persistent autocorrelation. We study analytically and by simulation how this feature of durations generated by the MSMD process propagates to counts and realized volatility. We employ a quasi-maximum likelihood estimator of the MSMD parameters based on the Whittle approximation and show that it is a computationally simple and fast alternative to the maximum likelihood estimator, and works for general MSMD specifications. Finally, we compare the performance of the MSMD model with competing short- and long-memory duration models in an out-of-sample forecasting exercise based on price durations of three major foreign exchange futures contracts. The results of the comparison show that the long-memory models perform similarly and are superior to the short-memory ACD models. 
Date:  2012–08 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1208.3087&r=ecm 
By:  Matteo Luciani; Libero Monteforte 
Abstract:  In this paper we propose to exploit the heterogeneity of forecasts produced by different model specifications to measure forecast uncertainty. Our approach is simple and intuitive. It consists of selecting all the models that outperform some benchmark model and then constructing an empirical distribution of the forecasts produced by these models. We interpret this distribution as a measure of uncertainty. We perform a pseudo real-time forecasting exercise on a large database of Italian data from 1982 to 2009, showing case studies of our measure of uncertainty. 
Keywords:  Factor models, Model uncertainty, Forecast combination, Density forecast. 
JEL:  C13 C32 C33 C52 C53 
Date:  2012–05 
URL:  http://d.repec.org/n?u=RePEc:itt:wpaper:wp20125&r=ecm 
By:  Rémy Chicheportiche; Jean-Philippe Bouchaud 
Abstract:  Accurate goodness-of-fit tests for the extreme tails of empirical distributions are a very important issue, relevant in many contexts including geophysics, insurance, and finance. We have derived exact asymptotic results for a generalization of the Kolmogorov-Smirnov test, well suited to testing these extreme tails. In passing, we have rederived and made more precise the result of [P. L. Krapivsky and S. Redner, Am. J. Phys. 64(5):546, 1996] concerning the survival probability of a diffusive particle in an expanding cage. 
Date:  2012–07 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1207.7308&r=ecm 
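For reference, the classical one-sample Kolmogorov-Smirnov statistic that the paper generalizes is the largest gap between the empirical CDF and a hypothesized CDF; tail-focused variants re-weight this gap to emphasize extreme quantiles. A minimal sketch of the classical, unweighted statistic (the paper's tail-sensitive generalization is not reproduced here):

```python
def ks_statistic(sample, cdf):
    """One-sample Kolmogorov-Smirnov statistic: sup-norm distance between
    the empirical CDF of `sample` and the hypothesized CDF `cdf`.
    The empirical CDF jumps at each sorted point, so both the pre-jump
    and post-jump gaps must be checked."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, (i + 1) / n - f, f - i / n)
    return d
```

A tail-weighted variant would divide each gap by a function of f(1 - f) that blows up near 0 and 1, which is exactly where the classical statistic has little power.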
By:  Thomas Laurent; Tomasz Kozluk 
Abstract:  Uncertainty is inherent to forecasting, and assessing the uncertainty surrounding a point forecast is as important as the forecast itself. Following Cornec (2010), a method is developed to assess the uncertainty around the indicator models used at the OECD to forecast GDP growth of the six largest member countries, using quantile regressions to construct a probability distribution of future GDP, as opposed to mean point forecasts. This approach allows uncertainty to be assessed conditionally on the current state of the economy and is entirely model-based and judgement-free. The quality of the computed distributions is tested against other approaches to measuring forecast uncertainty, and a set of uncertainty indicators is constructed to help exploit the most relevant information. 
Keywords:  forecasting, uncertainty, GDP, quantile regression 
JEL:  C31 C53 
Date:  2012–07–06 
URL:  http://d.repec.org/n?u=RePEc:oec:ecoaaa:978en&r=ecm 
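The quantile-regression approach above rests on the check (pinball) loss, under which the optimal constant forecast at level tau is the tau-th quantile of the outcome, so fitting one regression per tau traces out a predictive distribution. A minimal sketch of that loss and a grid-search minimizer (illustrative only, unrelated to the OECD indicator models):

```python
def pinball_loss(tau, y, q):
    """Average check (pinball) loss of a constant quantile forecast q:
    under-prediction of y is weighted tau, over-prediction (1 - tau)."""
    return sum(max(tau * (yi - q), (tau - 1.0) * (yi - q)) for yi in y) / len(y)

def best_constant_quantile(tau, y, grid):
    """Grid-search the constant forecast minimizing pinball loss; the
    minimizer approximates the tau-th empirical quantile of y."""
    return min(grid, key=lambda q: pinball_loss(tau, y, q))
```

In a full quantile regression the constant q is replaced by a linear function of indicators and the same loss is minimized over its coefficients; computing several tau levels yields the conditional GDP distribution the abstract describes.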