
on Econometrics 
By:  Deepa Dhume Datta; Wenxin Du 
Abstract:  The Newey and West (1987) estimator has become the standard way to estimate a heteroskedasticity and autocorrelation consistent (HAC) covariance matrix, but it does not immediately apply to time series with missing observations. We demonstrate that the intuitive approach of estimating the true spectrum of the underlying process using only the observed data leads to incorrect inference. Instead, we propose two simple consistent HAC estimators for time series with missing data. First, we develop the Amplitude Modulated estimator by applying the Newey-West estimator and treating the missing observations as non-serially correlated. Second, we develop the Equal Spacing estimator by applying the Newey-West estimator to the series formed by treating the data as equally spaced. We show asymptotic consistency of both estimators for inference purposes and discuss the finite-sample variance and bias tradeoff. In Monte Carlo simulations, we demonstrate that the Equal Spacing estimator is preferred in most cases due to its lower bias, while the Amplitude Modulated estimator is preferred for small sample sizes and low autocorrelation due to its lower variance. 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:fip:fedgif:1060&r=ecm 
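The two constructions described in the abstract can be illustrated with a short numerical sketch. Everything below (the Bartlett-kernel `newey_west_var` helper, the `hac_missing` wrapper, and the rescaling of the amplitude-modulated variance by the squared observation share) is a hypothetical reading of the two approaches, not the authors' exact formulas:

```python
import numpy as np

def newey_west_var(u, L):
    """Bartlett-kernel HAC variance of the sample mean of u (demeaned internally)."""
    u = np.asarray(u, float)
    T = len(u)
    e = u - u.mean()
    s = e @ e / T
    for j in range(1, L + 1):
        w = 1 - j / (L + 1)                # Bartlett weight
        s += 2 * w * (e[:-j] @ e[j:]) / T  # weighted autocovariances
    return s / T                           # variance of the sample mean

def hac_missing(y, observed, L):
    """Two HAC variance estimates for the mean of a series with missing points.

    'Amplitude Modulated': zero out missing slots (treat them as
    non-serially-correlated zeros) and rescale by the observed share.
    'Equal Spacing': drop missing slots and treat the remaining
    observations as if they were equally spaced.
    """
    y = np.asarray(y, float)
    g = np.asarray(observed, bool)
    frac = g.mean()
    am = y * g - g * y[g].mean()        # amplitude-modulated, demeaned where observed
    v_am = newey_west_var(am, L) / frac**2
    v_es = newey_west_var(y[g], L)      # equal-spacing: ignore the gaps
    return v_am, v_es
```

For a fully observed series the two estimates coincide with the ordinary Newey-West variance of the mean; they differ only once gaps appear.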
By:  Stephen G. Donald (Department of Economics, University of Texas at Austin); Yu-Chin Hsu (Institute of Economics, Academia Sinica, Taipei, Taiwan); Robert P. Lieli (Department of Economics, Central European University, Budapest and the National Bank of Hungary) 
Abstract:  We propose inverse probability weighted estimators for the local average treatment effect (LATE) and the local average treatment effect for the treated (LATT) under instrumental variable assumptions with covariates. We show that these estimators are asymptotically normal and efficient. When the (binary) instrument satisfies one-sided noncompliance, we propose a Durbin-Wu-Hausman-type test of whether treatment assignment is unconfounded conditional on some observables. The test is based on the fact that under one-sided noncompliance LATT coincides with the average treatment effect for the treated (ATT). We conduct Monte Carlo simulations to demonstrate, among other things, that part of the theoretical efficiency gain afforded by unconfoundedness in estimating ATT survives pretesting. We illustrate the practical implementation of the test on data from training programs administered under the Job Training Partnership Act. 
Keywords:  local average treatment effect, instrumental variables, unconfoundedness, inverse probability weighted estimation, nonparametric estimation 
JEL:  C12 C13 C14 
Date:  2012–12 
URL:  http://d.repec.org/n?u=RePEc:sin:wpaper:12a017&r=ecm 
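The IPW construction the abstract refers to can be sketched for the leading case. The ratio form below is the familiar Wald-type IPW expression for the LATE with a binary instrument; the function name and the assumption that `pscore` (an estimate of P(Z=1|X)) has already been obtained are illustrative, and this is not the authors' exact estimator:

```python
import numpy as np

def ipw_late(y, d, z, pscore):
    """Inverse-probability-weighted LATE estimate.

    y: outcome, d: binary treatment received, z: binary instrument,
    pscore: estimate of P(Z=1 | X) for each unit.
    The Wald-type ratio below is the standard IPW form of the LATE;
    a sketch of the class of estimators studied, not the authors'
    exact implementation.
    """
    y, d, z, p = (np.asarray(a, float) for a in (y, d, z, pscore))
    # Reweighted instrument contrast for outcome and treatment
    num = np.mean(y * z / p - y * (1 - z) / (1 - p))
    den = np.mean(d * z / p - d * (1 - z) / (1 - p))
    return num / den
```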
By:  James B. McDonald (Department of Economics, Brigham Young University); Hieu Nguyen (Department of Economics, Brigham Young University) 
Abstract:  Data censoring causes ordinary least squares estimators of linear models to be biased and inconsistent. The Tobit estimator yields consistent estimators in the presence of data censoring if the errors are normally distributed. However, non-normality or heteroskedasticity results in the Tobit estimators being inconsistent. Various estimators have been proposed for circumventing the normality assumption. Some of these estimators include censored least absolute deviations (CLAD), symmetrically censored least squares (SCLS), and partially adaptive estimators. CLAD and SCLS will be consistent in the presence of heteroskedasticity; however, SCLS performs poorly in the presence of asymmetric errors. This paper extends the partially adaptive estimation approach to accommodate possible heteroskedasticity as well as non-normality. A simulation study is used to investigate the estimators’ relative efficiency in these settings. The partially adaptive censored regression estimators have little efficiency loss for censored normal errors, appear to outperform the Tobit and semiparametric estimators for non-normal error distributions, and are less sensitive to the presence of heteroskedasticity. An empirical example that supports these results is considered. 
Keywords:  censored regression, Tobit, partially adaptive estimators, heteroskedasticity, non-normality 
JEL:  C34 
Date:  2012–08 
URL:  http://d.repec.org/n?u=RePEc:byu:byumcl:201209&r=ecm 
By:  Jean Jacod (University Paris VI); Mark Podolskij (Heidelberg University and CREATES) 
Abstract:  In this paper we present a test for the maximal rank of the matrix-valued volatility process in the continuous Itô semimartingale framework. Our idea is based upon a random perturbation of the original high frequency observations of an Itô semimartingale, which opens the way for rank testing. We develop the complete limit theory for the test statistic and apply it to various null and alternative hypotheses. Finally, we demonstrate a homoscedasticity test for the rank process. 
Keywords:  central limit theorem, high frequency data, homoscedasticity testing, Itô semimartingales, rank estimation, stable convergence. 
JEL:  C10 C13 C14 
Date:  2012–12–14 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201257&r=ecm 
By:  Pavlyuk, Dmitry 
Abstract:  This research analyzes efficiency estimation in the presence of spatial relationships and spatial heterogeneity in the data. We present a general specification of the spatial stochastic frontier model, which includes spatial lags, spatial autoregressive disturbances and spatial autoregressive inefficiencies. Maximum likelihood estimators are derived for two special cases of the spatial stochastic frontier. The small-sample properties of these estimators, and a comparison with a standard non-spatial estimator, are examined in a set of Monte Carlo experiments. Finally, we test our estimators on a real-world data set of European airports and discover significant spatial components in the data. 
Keywords:  spatial stochastic frontier; maximum likelihood; efficiency; heterogeneity 
JEL:  C51 L93 C15 C31 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:43390&r=ecm 
By:  Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244-1020); Chihwa Kao (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244-1020); Long Liu (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244-1020) 
Abstract:  This paper studies the asymptotic properties of within-groups k-class estimators in a panel data model with weak instruments. Weak instruments are characterized by the coefficients of the instruments in the reduced form equation shrinking to zero at a rate proportional to (nT)^{-δ}, where n is the dimension of the cross-section and T is the dimension of the time series. Joint limits as (n,T) → ∞ show that this within-groups k-class estimator is consistent if 0 ≤ δ < ½ and inconsistent if ½ ≤ δ < ∞. 
Keywords:  Weak Instrument; Panel Data; Fixed Effects; Pitman Drift; Local-to-zero 
JEL:  C13 C33 
Date:  2012–09 
URL:  http://d.repec.org/n?u=RePEc:max:cprwps:143&r=ecm 
By:  Jan F. KIVIET (Division of Economics, Nanyang Technological University, Singapore 637332, Singapore); Garry D.A. PHILLIPS (Cardiff Business School, Aberconway Building, Colum Drive, CF10 3EU, Cardiff, Wales, UK) 
Abstract:  In dynamic regression models conditional maximum likelihood (least-squares) coefficient and variance estimators are biased. From expansions of the coefficient variance and its estimator we obtain an approximation to the bias in variance estimation and a bias corrected variance estimator, for both the standard and a bias corrected coefficient estimator. These enable a comparison of their mean squared errors to second order. We formally derive sufficient conditions for admissibility of these approximations. Illustrative numerical and simulation results are presented on bias reduction of coefficient and variance estimation for three relevant classes of first-order autoregressive models, supplemented by effects on mean squared errors, test size and size corrected power. These indicate that substantial biases do occur in moderately large samples, but these can be mitigated substantially and may also yield mean squared error reduction. Crude asymptotic tests are cursed by huge size distortions. However, operational bias corrections of both the estimates of coefficients and their estimated variance are shown to curb type I errors reasonably well. 
Keywords:  higherorder asymptotic expansions, bias correction, efficiency gains, lagged dependent variables, finite sample moments, size improvement 
JEL:  C13 C22 
Date:  2012–06 
URL:  http://d.repec.org/n?u=RePEc:nan:wpaper:1206&r=ecm 
By:  Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244-1020); Qu Feng (Nanyang Technological University); Chihwa Kao (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244-1020) 
Abstract:  It is well known that the standard Breusch and Pagan (1980) LM test for cross-equation correlation in a SUR model is not appropriate for testing cross-sectional dependence in panel data models when the number of cross-sectional units (n) is large and the number of time periods (T) is small. In fact, a scaled version of this LM test was proposed by Pesaran (2004) and its finite sample bias was corrected by Pesaran, Ullah and Yamagata (2008). This was done in the context of a heterogeneous panel data model. This paper derives the asymptotic bias of this scaled version of the LM test in the context of a fixed effects homogeneous panel data model. This asymptotic bias is found to be a constant related to n and T, which suggests a simple bias corrected LM test for the null hypothesis. Additionally, the paper carries out some Monte Carlo experiments to compare the finite sample properties of this proposed test with existing tests for cross-sectional dependence. 
Keywords:  LM Test; Cross-sectional Dependence; Fixed Effects; High Dimensional Inference; John Test; Panel Data 
JEL:  C13 C33 
Date:  2012–05 
URL:  http://d.repec.org/n?u=RePEc:max:cprwps:137&r=ecm 
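The scaled LM statistic discussed in the abstract has a compact sample form. The sketch below implements Pesaran's (2004) scaling of the Breusch-Pagan statistic from residuals, before any bias correction of the kind the paper derives; the function name is illustrative:

```python
import numpy as np

def scaled_lm(e):
    """Pesaran's (2004) scaled version of the Breusch-Pagan LM statistic.

    e: (T, n) array of residuals for n cross-sectional units over T periods.
    Returns the scaled LM statistic, asymptotically N(0,1) under no
    cross-sectional dependence as n, T grow (before the finite-sample
    bias correction derived in the paper).
    """
    T, n = e.shape
    r = np.corrcoef(e, rowvar=False)   # n x n pairwise correlation matrix
    iu = np.triu_indices(n, k=1)       # distinct pairs i < j
    return np.sqrt(1.0 / (n * (n - 1))) * np.sum(T * r[iu] ** 2 - 1)
```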
By:  Stefan Hoderlein (Institute for Fiscal Studies and Boston College); Robert Sherman 
Abstract:  We study identification and estimation in a binary response model with random coefficients B allowed to be correlated with regressors X. Our objective is to identify the mean of the distribution of B and estimate a trimmed mean of this distribution. Like Imbens and Newey (2009), we use instruments Z and a control vector V to make X independent of B given V. A consequent conditional median restriction identifies the mean of B given V. Averaging over V identifies the mean of B. This leads to an analogous localise-then-average approach to estimation. We estimate conditional means with localised smooth maximum score estimators and average to obtain a √n-consistent and asymptotically normal estimator of a trimmed mean of the distribution of B. The method can be adapted to models with nonrandom coefficients to produce √n-consistent and asymptotically normal estimators under the conditional median restrictions. We explore small sample performance through simulations, and present an application. 
Keywords:  Heterogeneity, Correlated Random Coefficients, Endogeneity, Binary Response Model, Instrumental Variables, Control Variables, Conditional Median Restrictions 
Date:  2012–12 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:42/12&r=ecm 
By:  Liu, Shuangzhe (University of Canberra, Canberra, Australia); Ma, Tiefeng (Statistics College, Southwestern University of Finance and Economics, Chengdu, China); Polasek, Wolfgang (Department of Economics and Finance, Institute for Advanced Studies, Vienna, Austria) 
Abstract:  Systems of panel models are popular in the applied sciences, and the question of spatial errors has created a recent demand for spatial system estimation of panel models. We therefore propose new diagnostic methods to explore whether the spatial component significantly changes the outcome of non-spatial estimates of seemingly unrelated regression (SUR) systems. We apply a local sensitivity approach to study the behavior of generalized least squares (GLS) estimators in two spatial autoregression SUR system models: a SAR model with SUR errors (SAR-SUR) and a SUR model with spatial errors (SUR-SEM). Using matrix derivative calculus we establish a sensitivity matrix for spatial panel models, and we show how a first order Taylor approximation of the GLS estimators can be used to approximate the GLS estimators in spatial SUR models. In a simulation study we demonstrate the good quality of our approximation results. 
Keywords:  Seemingly unrelated regression models, panel systems with spatial errors, SAR and SEM models, generalized least-squares estimators, Taylor approximations 
JEL:  G14 G15 C22 
Date:  2012–12 
URL:  http://d.repec.org/n?u=RePEc:ihs:ihsesp:294&r=ecm 
By:  Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244-1020); Long Liu (The University of Texas at San Antonio) 
Abstract:  This paper considers the problem of estimation and forecasting in a panel data model with random individual effects and AR(p) remainder disturbances. It utilizes a simple exact transformation for the AR(p) time series process derived by Baltagi and Li (1994) and obtains the generalized least squares estimator for this panel model as a least squares regression. This exact transformation is also used in conjunction with Goldberger’s (1962) result to derive an analytic expression for the best linear unbiased predictor. The performance of this predictor is investigated using Monte Carlo experiments and illustrated using an empirical example. 
Keywords:  Prediction; Panel Data; Random Effects; Serial Correlation; AR(p) 
JEL:  C32 
Date:  2012–07 
URL:  http://d.repec.org/n?u=RePEc:max:cprwps:138&r=ecm 
By:  Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244-1020); Long Liu (The University of Texas at San Antonio) 
Abstract:  This paper obtains the joint and conditional Lagrange Multiplier tests for a spatial lag regression model with spatial autoregressive error derived in Anselin et al. (1996) using artificial Double Length Regressions (DLR). These DLR tests and their corresponding LM tests are compared using an illustrative example and a Monte Carlo simulation. 
Keywords:  Double Length Regression; Spatial Lag Dependence; Spatial Error Dependence; Artificial Regressions 
JEL:  C12 C21 R15 
Date:  2012–10 
URL:  http://d.repec.org/n?u=RePEc:max:cprwps:147&r=ecm 
By:  Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244-1020); Ying Deng (Syracuse University) 
Abstract:  This paper derives a 3SLS estimator for a simultaneous system of spatial autoregressive equations with random effects, which can therefore handle endogeneity, spatial lag dependence, heterogeneity as well as cross-equation correlation. This is done by utilizing the Kelejian and Prucha (1998) and Lee (2003) type instruments from the cross-section spatial autoregressive literature and marrying them to the error components 3SLS estimator derived by Baltagi (1981) for a system of simultaneous panel data equations. Our Monte Carlo experiments indicate that, for the single equation spatial error components 2SLS estimators, there is a slight gain in efficiency when Lee (2003) type rather than Kelejian and Prucha (1998) instruments are used. However, there is not much difference in efficiency between these instruments for spatial error components 3SLS estimators. 
Keywords:  Panel Data; Spatial Model; Simultaneous Equations; Three Stage Least Squares 
JEL:  C13 C33 
Date:  2012–10 
URL:  http://d.repec.org/n?u=RePEc:max:cprwps:146&r=ecm 
By:  Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244-1020); Peter H. Egger (Centre for Economic Policy Research (CEPR); London, United Kingdom); Michaela Kesina (ETH Zurich) 
Abstract:  This paper considers a Hausman and Taylor (1981) panel data model that exhibits a Cliff and Ord (1973) spatial error structure. We analyze the small sample properties of a generalized moments estimation approach for that model. This spatial Hausman-Taylor estimator allows for endogeneity of the time-varying and time-invariant variables with the individual effects. For this model, the spatial fixed effects estimator is known to be consistent, but its disadvantage is that it wipes out the effects of time-invariant variables, which are important for most empirical studies. Monte Carlo results show that the spatial Hausman-Taylor estimator performs well in small samples. 
Keywords:  Hausman-Taylor estimator; Spatial random effects; Small sample properties 
JEL:  C23 C31 
Date:  2012–08 
URL:  http://d.repec.org/n?u=RePEc:max:cprwps:141&r=ecm 
By:  Harvey, A.; Luati, A. 
Abstract:  An unobserved components model in which the signal is buried in noise that is non-Gaussian may throw up observations that, when judged by the Gaussian yardstick, are outliers. We describe an observation driven model, based on a conditional Student t-distribution, that is tractable and retains some of the desirable features of the linear Gaussian model. Letting the dynamics be driven by the score of the conditional distribution leads to a specification that is not only easy to implement, but which also facilitates the development of a comprehensive and relatively straightforward theory for the asymptotic distribution of the ML estimator. The methods are illustrated with an application to rail travel in the UK. The final part of the article shows how the model may be extended to include explanatory variables. 
Keywords:  Outlier; robustness; score; seasonal; t-distribution; trend 
JEL:  C22 
Date:  2012–12–19 
URL:  http://d.repec.org/n?u=RePEc:cam:camdae:1255&r=ecm 
By:  Mengmeng Guo; Lan Zhou; Jianhua Z. Huang; Wolfgang Karl Härdle 
Abstract:  Generalized quantile regressions, including the conditional quantiles and expectiles as special cases, are useful alternatives to the conditional means for characterizing a conditional distribution, especially when the interest lies in the tails. We develop a functional data analysis approach to jointly estimate a family of generalized quantile regressions. Our approach assumes that the generalized quantile regressions share some common features that can be summarized by a small number of principal component functions. The principal component functions are modeled as splines and are estimated by minimizing a penalized asymmetric loss measure. An iterative least asymmetrically weighted squares algorithm is developed for computation. While separate estimation of individual generalized quantile regressions usually suffers from large variability due to lack of sufficient data, by borrowing strength across data sets, our joint estimation approach significantly improves the estimation efficiency, which is demonstrated in a simulation study. The proposed method is applied to data from 150 weather stations in China to obtain the generalized quantile curves of the volatility of the temperature at these stations. These curves are needed to adjust temperature risk factors so that Gaussianity is achieved. The normal distribution of temperature variations is vital for pricing weather derivatives with tools from mathematical finance. 
Keywords:  Asymmetric loss function, Common structure, Functional data analysis, Generalized quantile curve, Iteratively reweighted least squares, Penalization 
JEL:  C13 C23 C38 Q54 
Date:  2012–11 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2013001&r=ecm 
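The asymmetric squared loss underlying expectiles can be illustrated in the simplest scalar case. The sketch below is a plain iteratively reweighted least squares fixed point for a sample expectile, assuming the textbook asymmetric-weight definition rather than the paper's full spline-based joint estimator:

```python
import numpy as np

def expectile(y, tau, tol=1e-10, max_iter=100):
    """tau-expectile of a sample via iteratively reweighted least squares.

    Minimizes sum_i |tau - 1{y_i <= mu}| * (y_i - mu)^2, the asymmetric
    squared loss that defines expectiles; this is the scalar version of
    the least asymmetrically weighted squares step used for curves.
    """
    y = np.asarray(y, float)
    mu = y.mean()                       # tau = 0.5 returns the mean exactly
    for _ in range(max_iter):
        w = np.where(y > mu, tau, 1 - tau)   # asymmetric weights
        mu_new = np.sum(w * y) / np.sum(w)   # weighted-least-squares update
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu
```

Higher tau pushes the expectile into the upper tail, which is what makes the family useful for tail-focused volatility curves.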
By:  Arthur Lewbel (Boston College); Xun Lu (Hong Kong University of Science and Technology); Liangjun Su (Singapore Management University) 
Abstract:  Consider a nonseparable model Y=R(X,U) where Y and X are observed, while U is unobserved and conditionally independent of X. This paper provides the first nonparametric test of whether R takes the form of a transformation model, meaning that Y is monotonic in the sum of a function of X plus a function of U. Transformation models of this form are commonly assumed in economics, including, e.g., standard specifications of duration models and hedonic pricing models. Our test statistic is asymptotically normal under local alternatives and consistent against nonparametric alternatives. Monte Carlo experiments show that our test performs well in finite samples. We apply our results to test for specifications of generalized accelerated failure-time (GAFT) models of the duration of strikes and of marriages. 
Keywords:  additivity, control variable, endogenous variable, monotonicity, nonparametric nonseparable model, hazard model, specification test, transformation model, unobserved heterogeneity 
JEL:  C12 C14 
Date:  2012–12–24 
URL:  http://d.repec.org/n?u=RePEc:boc:bocoec:817&r=ecm 
By:  Christophe Ley; YvesCaoimhin Swan; Thomas Verdebout 
Abstract:  In this paper we tackle the ANOVA problem for directional data (with particular emphasis on geological data) by having recourse to the Le Cam methodology usually reserved for linear multivariate analysis. We construct locally and asymptotically most stringent parametric tests for ANOVA for directional data within the class of rotationally symmetric distributions. We turn these parametric tests into semiparametric ones by (i) using a studentization argument (which leads to what we call pseudo-FvML tests) and by (ii) resorting to the invariance principle (which leads to efficient rank-based tests). Within each construction the semiparametric tests inherit optimality under a given distribution (the FvML distribution in the first case, any rotationally symmetric distribution in the second) from their parametric antecedents and also improve on the latter by being valid under the whole class of rotationally symmetric distributions. Asymptotic relative efficiencies are calculated and the finite-sample behavior of the proposed tests is investigated by means of a Monte Carlo simulation. We conclude by applying our findings on a real-data example involving geological data. 
Keywords:  directional statistics; local asymptotic normality; pseudo-FvML tests; rank-based inference; ANOVA 
Date:  2012–12 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2013/134946&r=ecm 
By:  Liao, Yuan; Simoni, Anna 
Abstract:  Bayesian partially identified models have received growing attention in recent years in the econometric literature, due to their broad applications in empirical studies. The classical Bayesian approach in this literature has assumed a parametric model, by specifying an ad-hoc parametric likelihood function. However, econometric models usually only identify a set of moment inequalities, and therefore assuming a known likelihood function runs the risk of misspecification and may result in inconsistent estimation of the identified set. On the other hand, moment-condition-based likelihoods such as the limited information and exponentially tilted empirical likelihoods, though they guarantee consistency, lack a probabilistic interpretation. We propose a semiparametric Bayesian partially identified model, by placing a nonparametric prior on the unknown likelihood function. Our approach thus only requires a set of moment conditions but still possesses a pure Bayesian interpretation. We study the posterior of the support function, which is essential when the object of interest is the identified set. The support function also enables us to construct two-sided Bayesian credible sets (BCS) for the identified set. It is found that, while the BCS of the partially identified parameter is too narrow from the frequentist point of view, that of the identified set has asymptotically correct coverage probability in the frequentist sense. Moreover, we establish posterior consistency for both the structural parameter and its identified set. We also develop the posterior concentration theory for the support function, and prove a semiparametric Bernstein-von Mises theorem. Finally, the proposed method is applied to analyze a financial asset pricing problem. 
Keywords:  partial identification; posterior consistency; concentration rate; support function; two-sided Bayesian credible sets; identified set; coverage probability; moment inequality models 
JEL:  C14 C11 
Date:  2012–12 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:43262&r=ecm 
By:  Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244-1020); Long Liu (The University of Texas at San Antonio) 
Abstract:  This paper modifies the Hausman and Taylor (1981) panel data estimator to allow for serial correlation in the remainder disturbances. It demonstrates the gains in efficiency of this estimator versus the standard panel data estimators that ignore serial correlation using Monte Carlo experiments. 
Keywords:  Panel Data, Fixed Effects, Random Effects, Instrumental Variables, Serial Correlation 
JEL:  C32 
Date:  2012–03 
URL:  http://d.repec.org/n?u=RePEc:max:cprwps:136&r=ecm 
By:  Francisco Blasques (VU University Amsterdam) 
Abstract:  This paper proposes a new set of transformed polynomial functions that provide a flexible setting for nonlinear autoregressive modeling of the conditional mean while at the same time ensuring the strict stationarity, ergodicity, fading memory and existence of moments of the implied stochastic sequence. The great flexibility of the transformed polynomial functions makes them interesting for both parametric and semi-nonparametric autoregressive modeling. This flexibility is established by showing that transformed polynomial sieves are sup-norm-dense on the space of continuous functions and offer appropriate convergence speeds on Hölder function spaces. 
Keywords:  time series; nonlinear autoregressive models; semi-nonparametric models; method of sieves. 
JEL:  C01 C13 C14 C22 
Date:  2012–12–05 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20120133&r=ecm 
By:  Chihwa Kao (Center for Policy Research, Maxwell School, Syracuse University, Syracuse, NY 13244-1020); Lorenzo Trapani; Giovanni Urga 
Abstract:  We propose a test for the stability over time of the covariance matrix of multivariate time series. The analysis is extended to the eigensystem to ascertain changes due to instability in the eigenvalues and/or eigenvectors. Using a strong Invariance Principle and the Law of Large Numbers, we normalize the CUSUM-type statistics to calculate their supremum over the whole sample. The power properties of the test versus local alternatives and alternatives close to the beginning/end of the sample are investigated theoretically and via simulation. The testing procedure is validated through an application to 18 US interest rates over 1997-2011, finding instability at end-2007/beginning-2008. 
Keywords:  Covariance Matrix; Eigensystem; Change-point; Term Structure of Interest Rates; CUSUM statistic 
JEL:  C1 C22 C5 
Date:  2012–04 
URL:  http://d.repec.org/n?u=RePEc:max:cprwps:131&r=ecm 
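The CUSUM idea in the abstract can be sketched in a few lines. The statistic below compares running covariance partial sums with their full-sample counterpart; the normalization the paper obtains from the invariance principle and law of large numbers is omitted, so this is a hypothetical illustration rather than the authors' test:

```python
import numpy as np

def cusum_cov(x):
    """Max-norm CUSUM statistic for stability of the covariance matrix.

    x: (T, n) multivariate series. Compares the running (partial-sum)
    covariance with the proportional share of its full-sample value,
    scaled by sqrt(T); a large supremum over time points to a change.
    """
    T, n = x.shape
    xc = x - x.mean(axis=0)
    full = xc.T @ xc / T                   # full-sample covariance
    stat = 0.0
    for t in range(n + 1, T):
        part = xc[:t].T @ xc[:t] / T       # partial-sum covariance
        diff = part - (t / T) * full       # CUSUM deviation at time t
        stat = max(stat, np.sqrt(T) * np.abs(diff).max())
    return stat
```

A series whose covariance jumps mid-sample produces a much larger supremum than a stable one, which is the behavior the test exploits.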
By:  Jan F. KIVIET (Division of Economics, Nanyang Technological University, Singapore 637332, Singapore); Milan PLEUS (Department of Quantitative Economics, Amsterdam School of Economics, University of Amsterdam, Valckenierstraat 65, 1018 XE Amsterdam, The Netherlands) 
Abstract:  Tests for classification as endogenous or predetermined of arbitrary subsets of regressors are formulated as significance tests in auxiliary IV regressions and their relationships with various more classic test procedures are examined. Simulation experiments are designed by solving the data generating process parameters from salient econometric features, namely: degree of simultaneity and multicollinearity of regressors, and individual and joint strength of external instrumental variables. Thus, for various test implementations, a wide class of relevant cases is scanned for flaws in performance regarding type I and II errors. Substantial size distortions occur, but these can be cured remarkably well through bootstrapping, except when instruments are weak. The power of the subset tests is such that they establish an essential addition to the well-known classic full-set DWH tests in a data-based classification of individual explanatory variables. 
Keywords:  bootstrapping, classification of explanatories, DWH orthogonality tests, test implementation, test performance, simulation design 
JEL:  C01 C12 C15 C30 
Date:  2012–08 
URL:  http://d.repec.org/n?u=RePEc:nan:wpaper:1208&r=ecm 
By:  Arun Chandrasekhar; Victor Chernozhukov (Institute for Fiscal Studies and MIT); Francesca Molinari (Institute for Fiscal Studies and Cornell University); Paul Schrimpf 
Abstract:  This paper provides inference methods for best linear approximations to functions which are known to lie within a band. It extends the partial identification literature by allowing the upper and lower functions defining the band to be any functions, including ones carrying an index, which can be estimated parametrically or nonparametrically. The identification region of the parameters of the best linear approximation is characterised via its support function, and limit theory is developed for the latter. We prove that the support function approximately converges to a Gaussian process, and validity of the Bayesian bootstrap is established. The paper nests as special cases the canonical examples in the literature: mean regression with interval-valued outcome data and interval-valued regressor data. Because the bounds may carry an index, the paper covers problems beyond mean regression; the framework is extremely versatile. Applications include quantile and distribution regression with interval-valued data, sample selection problems, as well as mean, quantile and distribution treatment effects. Moreover, the framework can account for the availability of instruments. An application is carried out, studying female labor force participation along the lines of Mulligan and Rubinstein (2008). 
Keywords:  Set identified function; best linear approximation; partial identification; support function; Bayesian bootstrap; convex set 
JEL:  C13 C31 
Date:  2012–12 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:43/12&r=ecm 
By:  Prosper Dovonon; Éric Renault 
Abstract:  This paper proposes a test for common conditionally heteroskedastic (CH) features in asset returns. Following Engle and Kozicki (1993), the common CH features property is expressed in terms of testable overidentifying moment restrictions. However, as we show, these moment conditions have a degenerate Jacobian matrix at the true parameter value, and therefore the standard asymptotic results of Hansen (1982) do not apply. We show in this context that Hansen's (1982) J-test statistic is asymptotically distributed as the minimum of the limit of a certain empirical process with a markedly nonstandard distribution. If two assets are considered, this asymptotic distribution is a half-half mixture of χ²_(H-1) and χ²_H, where H is the number of moment conditions, as opposed to a χ²_(H-1). With more than two assets, this distribution lies between the χ²_(H-p) and χ²_H (p being the number of parameters). These results show that ignoring the lack of first-order identification of the moment condition model leads to oversized tests, with an over-rejection rate that possibly increases with the number of assets. A Monte Carlo study illustrates these findings. 
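A minimal Monte Carlo sketch (not the authors' code) of the two-asset case: critical values from the half-half mixture of χ²_(H-1) and χ²_H lie strictly between the two component quantiles, so using the standard χ²_(H-1) value over-rejects. All parameter values below are illustrative.

```python
import numpy as np

def mixture_quantile(H, q=0.95, n=200_000, seed=0):
    """Quantile of the half-half mixture of chi2(H-1) and chi2(H),
    simulated by drawing each component with probability 1/2."""
    rng = np.random.default_rng(seed)
    pick = rng.random(n) < 0.5
    draws = np.where(pick,
                     rng.chisquare(H - 1, n),
                     rng.chisquare(H, n))
    return float(np.quantile(draws, q))

def chi2_quantile(df, q=0.95, n=200_000, seed=1):
    """Simulated chi-squared quantile, for comparison."""
    rng = np.random.default_rng(seed)
    return float(np.quantile(rng.chisquare(df, n), q))

# The mixture puts extra mass on the larger chi2(H) component, so the
# standard chi2(H-1) critical value is too small: over-rejection.
H = 5
cv_mix, cv_std = mixture_quantile(H), chi2_quantile(H - 1)
```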
Keywords:  Common features, GARCH factors, Nonstandard asymptotics, GMM, GMM overidentification test, identification, first-order identification 
Date:  2012–12–01 
URL:  http://d.repec.org/n?u=RePEc:cir:cirwor:2012s34&r=ecm 
By:  Huber, Martin; Lechner, Michael; Steinmayr, Andreas 
Abstract:  Using a simulation design based on empirical data, a recent study by Huber, Lechner and Wunsch (2012) finds that distance-weighted radius matching with bias adjustment, as proposed in Lechner, Miquel and Wunsch (2011), is competitive among a broad range of propensity score-based estimators used to correct for mean differences due to observable covariates. In this paper, we further investigate the finite sample behaviour of radius matching with respect to various tuning parameters. The results are intended to help practitioners choose suitable values of these parameters when using this method, which has been implemented as the "radiusmatch" command in the software packages GAUSS and STATA and in the R package "radiusmatching". 
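A simplified sketch of propensity-score radius matching for the ATT, for readers unfamiliar with the estimator class. This omits the distance weighting and bias adjustment of Lechner, Miquel and Wunsch (2011), and is not the "radiusmatch" implementation; the function name and radius value are illustrative.

```python
import numpy as np

def radius_matching_att(y, d, pscore, radius=0.05):
    """ATT by propensity-score radius matching: each treated unit is
    matched to all controls whose propensity score lies within
    `radius` of its own; treated units with no match are dropped."""
    y, d, pscore = map(np.asarray, (y, d, pscore))
    effects = []
    for yi, pi in zip(y[d == 1], pscore[d == 1]):
        ctrl = y[(d == 0) & (np.abs(pscore - pi) <= radius)]
        if ctrl.size:                    # skip unmatched treated units
            effects.append(yi - ctrl.mean())
    return float(np.mean(effects))
```

The radius is exactly the kind of tuning parameter whose finite sample effect the paper studies.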
Keywords:  Propensity score matching, radius matching, selection on observables, empirical Monte Carlo study, finite sample properties 
JEL:  C21 
Date:  2012–12 
URL:  http://d.repec.org/n?u=RePEc:usg:econwp:2012:26&r=ecm 
By:  Marcel Aloy (AixMarseille University (AixMarseille School of Economics), CNRS & EHESS); Gilles Dufrénot (AixMarseille University (AixMarseille School of Economics), CNRS & EHESS, Banque de France and CEPII); Charles Lai Tong (AixMarseille University (AixMarseille School of Economics), CNRS & EHESS); Anne PéguinFeissolle (AixMarseille University (AixMarseille School of Economics), CNRS & EHESS) 
Abstract:  This paper proposes a new fractional model with a time-varying long-memory parameter. The latter evolves nonlinearly with a transition variable through a logistic function. We present an LR-based test that discriminates between the standard fractional model and our model. We further apply the nonlinear least squares method to estimate the long-memory parameter. We present an application to the unemployment rate in the United States from 1948 to 2012. 
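A minimal sketch of the logistic transition mechanism described above, with the long-memory parameter moving between two regimes as the transition variable changes. The parametrization (d_low, d_high, gamma, c) is a common smooth-transition form assumed here for illustration, not necessarily the authors' exact specification.

```python
import math

def logistic_transition(s, gamma, c):
    """Smooth transition function G(s; gamma, c) in (0, 1):
    gamma controls the speed of transition, c its location."""
    return 1.0 / (1.0 + math.exp(-gamma * (s - c)))

def time_varying_d(s, d_low, d_high, gamma, c):
    """Long-memory parameter d_t moving between d_low and d_high
    as the transition variable s changes (illustrative form)."""
    g = logistic_transition(s, gamma, c)
    return d_low + (d_high - d_low) * g
```

At s = c the parameter sits at the midpoint of the two regimes; large gamma makes the switch abrupt.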
Keywords:  Long memory, nonlinearity, time-varying parameter, logistic. 
JEL:  C22 C51 C58 
Date:  2012–12 
URL:  http://d.repec.org/n?u=RePEc:aim:wpaimx:1240&r=ecm 
By:  Chihwa Kao (Center for Policy Research, Maxwell School, Syracuse University, Syracuse, NY 13244-1020); Lorenzo Trapani (Cass Business School); Giovanni Urga (Cass Business School, City University London) 
Abstract:  We investigate the issue of testing for structural breaks in large cointegrated panels with common and idiosyncratic regressors. We prove a panel Functional Central Limit Theorem. We show that the estimated coefficients of the common regressors have a mixed normal distribution, whilst the estimated coefficients of the idiosyncratic regressors have a normal distribution. We consider strong dependence across the idiosyncratic regressors by allowing for the presence of (stationary and nonstationary) common factors. We show that tests based on transformations of Wald-type statistics have power versus alternatives of order 
Keywords:  Structural change, Panel cointegration, Common trends 
JEL:  C23 
Date:  2012–02 
URL:  http://d.repec.org/n?u=RePEc:max:cprwps:135&r=ecm 
By:  Dominique Guegan (Centre d'Economie de la Sorbonne - Paris School of Economics); Bertrand K. Hassani (Santander UK and Centre d'Economie de la Sorbonne) 
Abstract:  The Advanced Measurement Approach requires financial institutions to develop internal models to evaluate regulatory capital. Traditionally, the Loss Distribution Approach (LDA) is used, mixing frequencies and severities to build a Loss Distribution Function (LDF). This distribution represents annual losses; consequently, the 99.9th percentile of the distribution, which provides the capital charge, denotes the worst year in a thousand. The traditional approach, approved by the regulator and implemented by financial institutions, assumes the independence of the losses. This paper proposes a solution to the issues that arise when autocorrelations are detected between the losses. Our approach treats the losses as time series: the losses are aggregated periodically, several models (AR, ARFI and Gegenbauer processes) are fitted to the resulting series, and a distribution is fitted to the residuals. Finally, a Monte Carlo simulation is used to construct the LDF, and the pertaining risk measures are evaluated. To show the impact of the internal models retained by financial institutions on the capital charges, the paper draws a parallel between the static traditional approach and an appropriate dynamical modelling. If, when implementing the traditional LDA, no particular distribution proves adequate for the data (the goodness-of-fit tests rejecting them all), keeping the LDA corresponds to an arbitrary choice. This paper suggests an alternative and robust approach. For instance, for the two data sets explored in this paper, the proposed time series strategies relax the independence assumption and capture the autocorrelations embedded within the losses. 
The construction of the related LDF enables the computation of the capital charges and therefore makes it possible to comply with the regulation, taking into account both the large losses, through adequate distributions on the residuals, and the correlations between the losses, through the time series processes. 
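For context, a minimal sketch of the static LDA baseline the paper argues against: a compound Poisson frequency with lognormal severities, independence assumed, and the capital charge read off as the 99.9th percentile of simulated annual losses. The distributional choices and parameter values are illustrative, not the paper's.

```python
import numpy as np

def lda_capital_charge(lam, mu, sigma, n_years=50_000, q=0.999, seed=0):
    """Static Loss Distribution Approach under independence:
    annual loss = sum of N ~ Poisson(lam) lognormal(mu, sigma)
    severities; the capital charge is the q-quantile of the annual
    loss distribution (the 'worst year in a thousand' for q=0.999)."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(lam, n_years)
    annual = np.empty(n_years)
    for i, n in enumerate(counts):
        # a year with n losses aggregates n lognormal severities
        annual[i] = rng.lognormal(mu, sigma, n).sum()
    return float(np.quantile(annual, q))
```

The paper's dynamical alternative replaces the independent annual draws with simulated paths from a fitted time series model before taking the same quantile.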
Keywords:  Operational risk, time series, Gegenbauer processes, Monte Carlo, risk measures. 
JEL:  C18 
Date:  2012–12 
URL:  http://d.repec.org/n?u=RePEc:mse:cesdoc:12091&r=ecm 
By:  Ginters Buss 
Abstract:  The paper studies the regularised direct filter approach as a tool for high-dimensional filtering and real-time signal extraction. It is shown that the regularised filter is able to process high-dimensional data sets by controlling for effective degrees of freedom, and that it is computationally fast. The paper illustrates the features of the filter by tracking the medium-to-long-run component in GDP growth for the euro area, including replication of Eurocoin-type behavior as well as producing more timely indicators. A further robustness check is performed on a less homogeneous dataset for Latvia. The resulting real-time indicators are found to track economic activity in a timely and robust manner. The regularised direct filter approach can thus be considered a promising tool for both concurrent estimation and forecasting using high-dimensional datasets, and a decent alternative to the dynamic factor methodology. 
Keywords:  high-dimensional filtering, real-time estimation, coincident indicator, leading indicator, parameter shrinkage, business cycles, dynamic factor model 
JEL:  C13 C32 E32 E37 
Date:  2012–12–27 
URL:  http://d.repec.org/n?u=RePEc:ltv:wpaper:201206&r=ecm 
By:  Gordon J. Ross 
Abstract:  The volatility of financial instruments is rarely constant, and usually varies over time. This creates a phenomenon called volatility clustering, where large price movements on one day are followed by similarly large movements on successive days, creating temporal clusters. The GARCH model, which treats volatility as a drift process, is commonly used to capture this behavior. However, research suggests that volatility is often better described by a structural break model, where the volatility undergoes abrupt jumps in addition to drift. Most efforts to integrate these jumps into the GARCH methodology have resulted in models which are either very computationally demanding, or which make problematic assumptions about the distribution of the instruments, often assuming that they are Gaussian. We present a new approach which uses ideas from nonparametric statistics to identify structural break points without making such distributional assumptions, and then models drift separately within each identified regime. Using our method, we investigate the volatility of several major stock indexes, and find that our approach can potentially give an improved fit compared to more commonly used techniques. 
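In the spirit of the nonparametric break detection described above, here is a sketch of a rank-based (Mood-type) scan for a single variance change point; it makes no Gaussianity assumption because it uses only ranks. This is a simplified single-break illustration, not the author's actual procedure.

```python
import numpy as np

def mood_changepoint(x):
    """Scan all candidate split points and return (best_index,
    max |standardized Mood statistic|).  Mood's statistic sums squared
    centered ranks over the first segment; a large standardized value
    signals a scale (volatility) change at that point."""
    x = np.asarray(x, float)
    n = len(x)
    ranks = x.argsort().argsort() + 1           # ranks 1..n
    scores = (ranks - (n + 1) / 2.0) ** 2       # Mood scores
    stats = []
    for k in range(2, n - 1):
        m = scores[:k].sum()
        mean = k * (n * n - 1) / 12.0           # null mean of M
        var = k * (n - k) * (n + 1) * (n * n - 4) / 180.0
        stats.append(abs((m - mean) / np.sqrt(var)))
    k_best = int(np.argmax(stats)) + 2
    return k_best, float(max(stats))
```

A GARCH-type drift model would then be fitted separately within each detected regime.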
Date:  2012–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1212.6016&r=ecm 
By:  Enrique MoralBenito (Banco de España) 
Abstract:  This paper considers panel growth regressions in the presence of model uncertainty and reverse causality concerns. For this purpose, my econometric framework combines Bayesian Model Averaging with a suitable likelihood function for dynamic panel models with weakly exogenous regressors and fixed effects. An application of this econometric methodology to a panel of countries over the 1960–2000 period indicates that there is no robust determinant of economic growth and that the rate of conditional convergence is indistinguishable from zero. 
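A toy sketch of the Bayesian Model Averaging ingredient: enumerate regressor subsets, weight each OLS model by the BIC approximation to its marginal likelihood, and sum weights into posterior inclusion probabilities. This is a cross-sectional illustration with equal model priors, not the paper's dynamic-panel likelihood; all names are hypothetical.

```python
import numpy as np
from itertools import combinations

def bma_inclusion_probs(y, X, names):
    """Posterior inclusion probabilities via the BIC approximation:
    each model's weight is proportional to exp(-BIC/2); a regressor's
    inclusion probability sums the weights of models containing it."""
    n, p = X.shape
    y = np.asarray(y, float)
    bics, members = [], []
    for k in range(p + 1):
        for subset in combinations(range(p), k):
            Z = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            rss = float(((y - Z @ beta) ** 2).sum())
            bics.append(n * np.log(rss / n) + (len(subset) + 1) * np.log(n))
            members.append(set(subset))
    bics = np.array(bics)
    w = np.exp(-0.5 * (bics - bics.min()))   # stabilized exponentiation
    w /= w.sum()
    return {names[j]: float(sum(wi for wi, m in zip(w, members) if j in m))
            for j in range(p)}
```

A "robust determinant" in the BMA sense is one whose inclusion probability stays high across the model space.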
Keywords:  growth regressions, panel data, model uncertainty, Bayesian model averaging 
JEL:  O40 C23 C11 
Date:  2012–12 
URL:  http://d.repec.org/n?u=RePEc:bde:wpaper:1243&r=ecm 
By:  Politis, Dimitris 
Abstract:  The asymptotic behavior of nonparametric estimators of the probability density function of an i.i.d. sample and of the spectral density function of a stationary time series has been studied in some detail over the last 50–60 years. Nevertheless, an open problem remains to date, namely the behavior of the estimator when the target function happens to vanish at the point of interest. In the paper at hand we fill this gap, and show that asymptotic normality still holds true but with a superefficient rate of convergence. We also provide two possible applications where these new results can be found useful in practice. 
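For concreteness, a minimal Gaussian kernel density estimator, the object whose behavior at a vanishing point the paper studies. The superefficiency result itself is the paper's contribution and is not reproduced here; bandwidth and sample values below are illustrative.

```python
import numpy as np

def kde(x, data, h):
    """Gaussian kernel density estimate at point(s) x, bandwidth h:
    f_hat(x) = (1/(n*h)) * sum_i phi((x - X_i)/h)."""
    x = np.atleast_1d(np.asarray(x, float))
    u = (x[:, None] - np.asarray(data, float)[None, :]) / h
    return (np.exp(-0.5 * u * u) / np.sqrt(2 * np.pi)).mean(axis=1) / h
```

At a point where the true density vanishes (e.g. deep in the tail), almost no kernels contribute, which is the regime where the nonstandard asymptotics kick in.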
Keywords:  Econometrics and Quantitative Economics, Nonparametric Density 
Date:  2012–12–01 
URL:  http://d.repec.org/n?u=RePEc:cdl:ucsdec:qt40g0z0tz&r=ecm 
By:  Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244-1020); Georges Bresson (Université Panthéon-Assas (Paris II)) 
Abstract:  This paper suggests a robust Hausman and Taylor (1981) estimator, hereafter HT, that deals with the possible presence of outliers. This entails two modifications of the classical HT estimator. The first modification uses the Bramati and Croux (2007) robust Within MS estimator instead of the Within estimator in the first stage of the HT estimator. The second modification uses the robust Wagenvoort and Waldmann (2002) two-stage generalized MS estimator instead of the 2SLS estimator in the second step of the HT estimator. Monte Carlo simulations show that, in the presence of vertical outliers or bad leverage points, the robust HT estimator yields large gains in MSE as compared to its classical Hausman-Taylor counterpart. We illustrate this robust version of the Hausman-Taylor estimator using an empirical application. 
Keywords:  Bad leverage points, Hausman-Taylor, panel data, two-stage generalized MS estimator, vertical outliers 
JEL:  C23 C26 
Date:  2012–08 
URL:  http://d.repec.org/n?u=RePEc:max:cprwps:140&r=ecm 
By:  Raffaello Morales; T. Di Matteo; Tomaso Aste 
Abstract:  We report evidence that empirical data show time-varying multifractal properties. This is obtained by comparing empirical observations of the weighted generalised Hurst exponent (wGHE) with time series simulated via the Multifractal Random Walk (MRW) of Bacry et al. [E. Bacry, J. Delour and J. Muzy, Phys. Rev. E 64, 026103 (2001)]. While the dynamical wGHE computed on synthetic MRW series is consistent with a scenario where multifractality is constant over time, fluctuations in the dynamical wGHE observed in empirical data fail to be in agreement with an MRW with constant intermittency parameter. This is a strong argument for claiming that the observed variations of multifractality in financial time series are to be ascribed to a structural breakdown in the temporal covariance structure of stock return series. As a consequence, multifractal models with a constant intermittency parameter may not always be satisfactory in reproducing financial market behaviour. 
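A sketch of the (unweighted) generalized Hurst exponent H(q), estimated from the scaling of the q-th order structure function. The wGHE used in the paper additionally applies exponential weighting in time, which is omitted here; this is an illustrative simplification.

```python
import numpy as np

def generalized_hurst(x, q=2, taus=range(1, 20)):
    """Estimate H(q) from K_q(tau) = mean |x(t+tau) - x(t)|^q,
    which scales as tau^(q*H(q)) for a self-affine process.
    Unweighted version; the wGHE adds exponential weights in t."""
    x = np.asarray(x, float)
    taus = np.asarray(list(taus))
    kq = np.array([np.mean(np.abs(x[t:] - x[:-t]) ** q) for t in taus])
    slope, _ = np.polyfit(np.log(taus), np.log(kq), 1)
    return float(slope / q)
```

Multifractality shows up as H(q) varying with q; tracking the estimate over rolling windows gives the "dynamical" exponent compared against MRW simulations in the paper.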
Date:  2012–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1212.3195&r=ecm 
By:  Andrzej Kociecki (National Bank of Poland); Michał Rubaszek (National Bank of Poland; Warsaw School of Economics); Michele Ca' Zorzi (European Central Bank) 
Abstract:  The paper provides a novel Bayesian methodological framework to estimate structural VAR (SVAR) models with recursive identification schemes that allows for the inclusion of overidentifying restrictions. The proposed framework enables the researcher to (i) elicit the prior on the nonzero contemporaneous relations between economic variables and (ii) derive an analytical expression for the posterior distribution and marginal data density. We illustrate our methodological framework by estimating a backward-looking New Keynesian model, taking into account prior beliefs about the contemporaneous coefficients in the Phillips curve and the Taylor rule. 
Keywords:  Structural VAR, Bayesian inference, overidentifying restrictions 
JEL:  C11 C32 E47 
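As background, a frequentist sketch of the just-identified recursive scheme the paper builds on: estimate the VAR by OLS, then Cholesky-factor the residual covariance so that variables ordered first respond contemporaneously only to their own shocks. The Bayesian machinery and overidentifying restrictions, the paper's actual contribution, are not reproduced.

```python
import numpy as np

def recursive_svar(data, lags=1):
    """Estimate a VAR(lags) by OLS and identify the contemporaneous
    impact matrix via the Cholesky factor of the residual covariance
    (recursive identification).  Returns (coefficients, impact)."""
    Y = np.asarray(data, float)
    T, n = Y.shape
    # regressor matrix: constant plus lagged values
    X = np.column_stack([np.ones(T - lags)] +
                        [Y[lags - l - 1:T - l - 1] for l in range(lags)])
    Yt = Y[lags:]
    coefs, *_ = np.linalg.lstsq(X, Yt, rcond=None)
    resid = Yt - X @ coefs
    sigma = resid.T @ resid / (len(resid) - X.shape[1])
    impact = np.linalg.cholesky(sigma)   # lower-triangular impact matrix
    return coefs, impact
```

The zero pattern of the lower-triangular factor is exactly the recursive ordering restriction.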
Date:  2012–11 
URL:  http://d.repec.org/n?u=RePEc:ecb:ecbwps:20121492&r=ecm 
By:  ARI, YAKUP 
Abstract:  Nonlinearity is a general characteristic of financial series. Thus, common nonlinear models such as GARCH, EGARCH and TGARCH are used to obtain the volatility of the data. In addition, the continuous-time GARCH (COGARCH) model, which is the extension and analogue of the discrete-time GARCH process, is a new approach to volatility modelling and derivative pricing. COGARCH has a single source of variability, like GARCH, but it is built on a driving Lévy process, whose increments replace the innovations of the discrete-time model. In this study, the proper model for the volatility of the USD/TRY foreign exchange rate is identified for different periods between January 2009 and December 2011. 
Keywords:  Volatility, Lévy process, GARCH, EGARCH, TGARCH, COGARCH 
JEL:  C01 
Date:  2012–05 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:43330&r=ecm 
By:  Ilaria Lucrezia Amerise; Agostino Tarsitano (Dipartimento di Economia e Statistica, Università della Calabria) 
Abstract:  In a number of applications of multivariate analysis, the data matrix is not fully observed; instead, a set of distance matrices on the same entities is available. A reasonable strategy for constructing a global distance matrix is to compute a weighted average of the partial distance matrices, provided that an appropriate system of weights can be defined. The Distatis method developed by Abdi et al. (2005) is a three-step procedure for computing the global distance matrix. An important aspect of that procedure is the computation of the vector correlation coefficient (RV) to measure the similarity between partial distance matrices. The RV coefficient is based on the Pearson product-moment correlation coefficient, which is highly prone to the effects of outliers. We are convinced that, in many measurable phenomena, the relationships between distances are far more likely to be ordinal than interval in nature, and it is therefore preferable to adopt an approach appropriate to ordinal data. The goal of our paper is to revise the system of weights of the Distatis procedure by substituting rank correlations, which are less affected by errors of measurement, perturbation or the presence of outliers in the data, for the conventional Pearson coefficient. In the light of our findings on real and simulated data sets, we recommend the use of a specific coefficient of rank correlation to replace, where necessary, the conventional vector correlation. 
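A sketch of the Pearson-type RV coefficient between two distance matrices, plus a hypothetical rank-based variant in the spirit of the paper's proposal (the specific rank coefficient the authors recommend is not reproduced here; `rank_rv_coefficient` is an illustrative stand-in).

```python
import numpy as np

def _double_center(D):
    """Gower-center a distance matrix: S = -0.5 * J * D^2 * J."""
    D = np.asarray(D, float) ** 2
    J = np.eye(len(D)) - np.ones_like(D) / len(D)
    return -0.5 * J @ D @ J

def rv_coefficient(D1, D2):
    """RV coefficient between two distance matrices (Pearson-type
    congruence of the double-centered cross-product matrices)."""
    S1, S2 = _double_center(D1), _double_center(D2)
    return float((S1 * S2).sum() /
                 np.sqrt((S1 * S1).sum() * (S2 * S2).sum()))

def rank_rv_coefficient(D1, D2):
    """Hypothetical ordinal variant: rank-transform the off-diagonal
    distances first, damping the influence of outlying distances."""
    def ranked(D):
        R = np.zeros_like(np.asarray(D, float))
        iu = np.triu_indices_from(R, 1)
        vals = np.asarray(D, float)[iu]
        R[iu] = vals.argsort().argsort() + 1.0
        return R + R.T
    return rv_coefficient(ranked(D1), ranked(D2))
```

Because ranks are invariant to monotone rescaling, the rank variant judges D and 3·D as perfectly similar, which is the ordinal behaviour the paper argues for.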
Keywords:  Distatis, Ordinal data, Vector rank correlation 
Date:  2012–12 
URL:  http://d.repec.org/n?u=RePEc:clb:wpaper:201209&r=ecm 
By:  Tucker McElroy; Michael W. McCracken 
Abstract:  This paper develops the theory of multistep-ahead forecasting for vector time series that exhibit temporal nonstationarity and cointegration. We treat the case of a semi-infinite past, developing the forecast filters and the forecast error filters explicitly, and also provide formulas for forecasting from a finite sample of data. This latter application can be accomplished by the use of large matrices, which remains practicable when the total sample size is moderate. Expressions for the Mean Square Error of forecasts are also derived and can be implemented readily. Three diverse data applications illustrate the flexibility and generality of these formulas: forecasting Euro Area macroeconomic aggregates; backcasting fertility rates by racial category; and forecasting regional housing starts using a seasonally cointegrated model. 
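A minimal illustration of the "large matrix" finite-sample idea in the stationary univariate case: the h-step best linear predictor from n observed values solves the normal equations Γ a = γ, where Γ is the autocovariance (Toeplitz) matrix. This is a textbook special case assumed for illustration, not the paper's nonstationary vector formulas.

```python
import numpy as np

def blp_forecast(x, acov, h):
    """h-step best linear predictor of a zero-mean stationary series
    from its last len(x) values, given autocovariances
    acov[k] = Cov(x_t, x_{t+k}).  Solves Gamma a = gamma where
    gamma[i] = Cov(x_i, x_{n-1+h})."""
    n = len(x)
    Gamma = np.array([[acov[abs(i - j)] for j in range(n)]
                      for i in range(n)])
    gamma = np.array([acov[n - 1 - i + h] for i in range(n)])
    a = np.linalg.solve(Gamma, gamma)
    return float(a @ np.asarray(x, float))
```

For an AR(1) with coefficient phi this reproduces the familiar phi^h times the last observation, a useful sanity check on the matrix setup.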
Keywords:  Econometric models ; Economic forecasting 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:fip:fedlwp:2012060&r=ecm 
By:  Grech Dariusz; Mazur Zygmunt 
Abstract:  We extend our previous study of scaling range properties, done for detrended fluctuation analysis (DFA) in our former paper, to other techniques of fluctuation analysis (FA). A new technique called Modified Detrended Moving Average Analysis (MDMA) is introduced, and its scaling range properties are examined and compared with those of detrended moving average analysis (DMA) and DFA. It is shown that, contrary to DFA, the DMA and MDMA techniques exhibit power law dependence of the scaling range with respect to the length of the analysed signal and with respect to the accuracy R² of the fit to the scaling law imposed by the DMA or MDMA schemes. This power law dependence is satisfied for both uncorrelated and autocorrelated data. We also find a simple generalization of this power law relation for series with different levels of autocorrelation, measured in terms of the Hurst exponent. Basic relations between scaling ranges for different techniques are also discussed. Our findings should be particularly useful for local FA in, e.g., econophysics, finance or physiology, where a huge number of short time series has to be examined at once and where a preliminary check of the scaling range regime for each of the series separately is neither effective nor possible. 
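For readers unfamiliar with the baseline technique, a minimal DFA-1 sketch: integrate the series, detrend linearly within windows of size s, and read the Hurst exponent off the log-log slope of the fluctuation function F(s). The scaling-range selection studied in the paper is not reproduced; the scale grid is illustrative.

```python
import numpy as np

def dfa_fluctuation(x, s):
    """DFA-1 fluctuation F(s): RMS of linearly detrended residuals of
    the integrated series over non-overlapping windows of length s."""
    y = np.cumsum(np.asarray(x, float) - np.mean(x))
    n_win = len(y) // s
    t = np.arange(s)
    res = []
    for w in range(n_win):
        seg = y[w * s:(w + 1) * s]
        coef = np.polyfit(t, seg, 1)          # linear trend per window
        res.append(np.mean((seg - np.polyval(coef, t)) ** 2))
    return float(np.sqrt(np.mean(res)))

def dfa_hurst(x, scales=(16, 32, 64, 128)):
    """Hurst exponent as the slope of log F(s) versus log s."""
    logF = [np.log(dfa_fluctuation(x, s)) for s in scales]
    return float(np.polyfit(np.log(list(scales)), logF, 1)[0])
```

The paper's question is precisely which range of s this log-log fit may legitimately use for a given series length and fit accuracy.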
Date:  2012–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1212.5070&r=ecm 
By:  Makram ElShagi; Gregor von Schweinitz 
Abstract:  Due to the recent financial crisis, interest in econometric models that incorporate binary variables (such as the occurrence of a crisis) has surged. This paper evaluates the performance of the Qual VAR, i.e. a VAR model including a latent variable that governs the behavior of an observable binary variable. While we find that the Qual VAR performs reasonably well in forecasting (outperforming a probit benchmark), there are substantial identification problems. Therefore, when the economic interpretation of the dynamic behavior of the latent variable and the chain of causality matter, the Qual VAR is inadvisable. 
Keywords:  binary choice model, Gibbs sampling, latent variable, MCMC, method evaluation 
JEL:  C15 C35 E37 
Date:  2012–12 
URL:  http://d.repec.org/n?u=RePEc:iwh:dispap:1212&r=ecm 
By:  Sean C. Kerman (Department of Economics, Brigham Young University); James B. McDonald (Department of Economics, Brigham Young University) 
Abstract:  Bounds for the skewness-kurtosis space corresponding to the skewed generalized T, skewed generalized error, skewed T, and some other distributions are presented and contrasted with the bounds reported by Klaassen et al. (2000) for unimodal probability density functions. The skewed generalized T and skewed generalized error distributions have the greatest flexibility of the distributions considered. 
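A small sketch of the kind of constraint such bounds impose, using the classical universal (Pearson) bound kurtosis ≥ skewness² + 1, which holds for any distribution; the sharper distribution-specific and unimodal bounds discussed in the paper are not reproduced here.

```python
import numpy as np

def sample_skew_kurt(x):
    """Sample skewness and (non-excess) kurtosis from standardized data."""
    x = np.asarray(x, float)
    z = (x - x.mean()) / x.std()
    return float((z ** 3).mean()), float((z ** 4).mean())

def satisfies_pearson_bound(skew, kurt):
    """Universal bound: kurtosis >= skewness^2 + 1 for any distribution;
    admissible (skewness, kurtosis) pairs lie on or above this parabola."""
    return kurt >= skew ** 2 + 1
```

Flexible families such as the skewed generalized T are attractive precisely because their attainable region covers a large part of the admissible space above this parabola.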
Keywords:  Skewed generalized T; Kurtosis; Skewness 
JEL:  C50 
Date:  2012–11 
URL:  http://d.repec.org/n?u=RePEc:byu:byumcl:201210&r=ecm 
By:  Andrea Stella; James H. Stock 
Abstract:  We develop a parsimonious bivariate model of inflation and unemployment that allows for persistent variation in trend inflation and the NAIRU. The model, which consists of five unobserved components (including the trends) with stochastic volatility, implies a time-varying VAR for changes in the rates of inflation and unemployment. The implied backwards-looking Phillips curve has a time-varying slope that is steeper in the 1970s than in the 1990s. Pseudo out-of-sample forecasting experiments indicate improvements upon univariate benchmarks. Since 2008, the implied Phillips curve has become steeper and the NAIRU has increased. 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:fip:fedgif:1062&r=ecm 
By:  Christopher Heiberger (University of Augsburg, Department of Economics); Torben Klarl (University of Augsburg, Department of Economics); Alfred Maussner (University of Augsburg, Department of Economics) 
Abstract:  Many algorithms that provide approximate solutions for dynamic stochastic general equilibrium (DSGE) models employ the generalized Schur factorization, since it allows for a flexible formulation of the model and exempts the researcher from identifying the equations that give rise to infinite eigenvalues. We show, by means of an example, that the policy functions obtained by this approach may differ from those obtained from the solution of a properly reduced system. As a consequence, simulation results may depend on the numeric values of parameters that are theoretically irrelevant. The source of this inaccuracy is ill-conditioned matrices, as they emerge, e.g., in models with strong habits. Therefore, researchers should always cross-check their results and test the accuracy of the solution. 
Keywords:  DSGE Models, Schur Factorization, System Reduction, Accuracy of Solutions 
JEL:  C32 C63 E37 
Date:  2012–12 
URL:  http://d.repec.org/n?u=RePEc:aug:augsbe:0320&r=ecm 