
NEP: New Economics Papers on Econometrics 
By:  Christopher F Baum (Boston College); Mark E. Schaffer (Heriot-Watt University); Steven Stillman (Motu Economic and Public Policy Research) 
Abstract:  We extend our 2003 paper on instrumental variables (IV) and GMM estimation and testing, and describe enhanced routines that address HAC standard errors, weak instruments, LIML and k-class estimation, tests for endogeneity, and RESET and autocorrelation tests for IV estimates. 
Keywords:  instrumental variables, weak instruments, generalized method of moments, endogeneity, heteroskedasticity, serial correlation, HAC standard errors, LIML, CUE, overidentifying restrictions, Frisch-Waugh-Lovell theorem, RESET, Cumby-Huizinga test 
JEL:  C20 C22 C23 C12 C13 C87 
Date:  2007 
URL:  http://d.repec.org/n?u=RePEc:hwe:certdp:0706&r=ecm 
By:  Péguin-Feissolle, Anne (GREQAM); Strikholm, Birgit (Dept. of Economic Statistics, Stockholm School of Economics); Teräsvirta, Timo (CREATES, School of Economics and Management) 
Abstract:  In this paper we propose a general method for testing the Granger noncausality hypothesis in stationary nonlinear models of unknown functional form. The tests are based on a Taylor expansion of the nonlinear model around a given point in the sample space. We study the performance of our tests in a Monte Carlo experiment and compare them with the most widely used linear test. Our tests appear to be well-sized and to have reasonably good power properties. 
Keywords:  Hypothesis testing; causality 
JEL:  C22 C51 
Date:  2007–08–27 
URL:  http://d.repec.org/n?u=RePEc:hhs:hastef:0672&r=ecm 
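The idea behind such Taylor-expansion-based noncausality tests can be sketched as follows (an illustration of the general approach, not the authors' exact procedure; the simulated process, the second-order polynomial, and all variable names are ours): approximate the unknown conditional mean by a low-order polynomial in the lags, then F-test the terms that involve the candidate causal variable.

```python
import random

def ols_rss(X, y):
    """Residual sum of squares from OLS, solving the normal equations by
    Gaussian elimination with partial pivoting."""
    k = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    b = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * c for a, c in zip(A[r], A[i])]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return sum((yi - sum(bj * xj for bj, xj in zip(beta, row))) ** 2
               for row, yi in zip(X, y))

random.seed(1)
n = 400
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.0]
for t in range(1, n):
    # x Granger-causes y, but only through the nonlinear term x_{t-1}^2,
    # so a purely linear test would have little power here
    y.append(0.5 * y[-1] + 0.4 * x[t - 1] ** 2 + random.gauss(0, 1))

# restricted model: y_t on (1, y_{t-1}); unrestricted adds second-order
# Taylor terms involving x_{t-1}
Xr = [[1.0, y[t - 1]] for t in range(1, n)]
Xu = [[1.0, y[t - 1], x[t - 1], x[t - 1] ** 2, y[t - 1] * x[t - 1]]
      for t in range(1, n)]
yy = y[1:]
rss_r, rss_u = ols_rss(Xr, yy), ols_rss(Xu, yy)
q, dof = 3, (n - 1) - 5
F = ((rss_r - rss_u) / q) / (rss_u / dof)  # large F => reject noncausality
```

With a strong quadratic causal channel the F statistic is far above conventional critical values, even though the linear cross-covariance between y_t and x_{t-1} is zero by construction.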
By:  Donald W.K. Andrews (Cowles Foundation, Yale University); Patrik Guggenberger (Department of Economics, UCLA) 
Abstract:  This paper analyzes the properties of subsampling, hybrid subsampling, and size-correction methods in two non-regular models. The latter two procedures are introduced in Andrews and Guggenberger (2005b). The models are non-regular in the sense that the test statistics of interest exhibit a discontinuity in their limit distribution as a function of a parameter in the model. The first model is a linear instrumental variables (IV) model with possibly weak IVs estimated using two-stage least squares (2SLS). In this case, the discontinuity occurs when the concentration parameter is zero. The second model is a linear regression model in which the parameter of interest may be near a boundary. In this case, the discontinuity occurs when the parameter is on the boundary. The paper shows that in the IV model one-sided and equal-tailed two-sided subsampling tests and confidence intervals (CIs) based on the 2SLS t statistic do not have correct asymptotic size. This holds for both fully- and partially-studentized t statistics. But subsampling procedures based on the partially-studentized t statistic can be size-corrected. On the other hand, symmetric two-sided subsampling tests and CIs are shown to have (essentially) correct asymptotic size when based on a partially-studentized t statistic. Furthermore, all types of hybrid subsampling tests and CIs are shown to have correct asymptotic size in this model. The above results are consistent with the "impossibility" results of Dufour (1997) because subsampling and hybrid subsampling CIs are shown to have infinite length with positive probability. Subsampling CIs for a parameter that may be near a lower boundary are shown to have incorrect asymptotic size for upper one-sided, equal-tailed, and symmetric two-sided CIs. Again, size-correction is possible. In this model as well, all types of hybrid subsampling CIs are found to have correct asymptotic size. 
Keywords:  Asymptotic size, Finite-sample size, Hybrid test, Instrumental variable, Over-rejection, Parameter near boundary, Size correction, Subsampling confidence interval, Subsampling test, Weak instrument 
JEL:  C12 C15 
Date:  2007–05 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1608&r=ecm 
By:  Perez, Marcos; Ahn, Seung Chan 
Abstract:  We propose a generalized method of moments (GMM) estimator of the number of latent factors in linear factor models. The method is appropriate for panels with a large (small) number of cross-section observations and a small (large) number of time-series observations. It is robust to heteroskedasticity and time-series autocorrelation of the idiosyncratic components. All necessary procedures are similar to three-stage least squares, so they are computationally easy to use. In addition, the method can be used to determine which observable variables are correlated with the latent factors without estimating them. Our Monte Carlo experiments show that the proposed estimator has good finite-sample properties. As an application of the method, we estimate the number of factors in the US stock market. Our results indicate that US stock returns are explained by three factors. One of the three latent factors is not captured by the factors proposed by Chen, Roll and Ross (1986) and Fama and French (1996). 
Keywords:  Factor models; GMM; number of factors; asset pricing 
JEL:  C10 G12 C13 C33 
Date:  2007–09–09 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:4862&r=ecm 
By:  Kyungchul Song (Department of Economics, University of Pennsylvania) 
Abstract:  This paper investigates the problem of testing conditional independence of Y and Z given λθ(X) for some unknown θ ∈ Θ ⊂ Rd and a parametric function λθ(·). Such a problem is relevant, for instance, in the recent literature on heterogeneous treatment effects and contract theory. First, this paper finds that, using Rosenblatt transforms in a certain way, we can construct a class of tests that are asymptotically pivotal and asymptotically unbiased against √n-converging Pitman local alternatives. The asymptotic pivotalness is convenient especially because the asymptotic critical values remain invariant across different estimators of the unknown parameter θ. Even when tests are asymptotically pivotal, however, simulation methods to obtain asymptotic critical values are often unavailable or complicated, and hence this paper suggests a simple wild bootstrap procedure. A special case of the proposed testing framework is a test for the presence of quantile treatment effects in a program evaluation data set. Using the JTPA training data set, we investigate the validity of nonexperimental procedures for inference about quantile treatment effects of the job training program. 
Keywords:  Conditional independence, asymptotic pivotal tests, Rosenblatt transforms, wild bootstrap 
JEL:  C12 C14 C52 
Date:  2007–09–05 
URL:  http://d.repec.org/n?u=RePEc:pen:papers:07026&r=ecm 
By:  Xavier Gabaix; Rustam Ibragimov 
Abstract:  Despite the availability of more sophisticated methods, a popular way to estimate a Pareto exponent is still to run an OLS regression: log(Rank) = a − b log(Size), and take b as an estimate of the Pareto exponent. The reason for this popularity is arguably the simplicity and robustness of this method. Unfortunately, this procedure is strongly biased in small samples. We provide a simple practical remedy for this bias, and propose that, if one wants to use an OLS regression, one should use Rank − 1/2, and run log(Rank − 1/2) = a − b log(Size). The shift of 1/2 is optimal, and reduces the bias to leading order. The standard error on the Pareto exponent ζ is not the OLS standard error, but is asymptotically (2/n)^(1/2) ζ. Numerical results demonstrate the advantage of the proposed approach over the standard OLS estimation procedures and indicate that it performs well under dependent heavy-tailed processes exhibiting deviations from power laws. The estimation procedures considered are illustrated using an empirical application to Zipf's law for the U.S. city size distribution. 
JEL:  C13 
Date:  2007–09 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberte:0342&r=ecm 
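The proposed Rank − 1/2 regression is simple enough to sketch directly (an illustration, not the authors' code; the simulated Pareto sample and variable names are ours):

```python
import math
import random

random.seed(0)
zeta_true = 1.0  # Zipf's law corresponds to a Pareto exponent of 1
n = 1000

# simulate Pareto(zeta) sizes by inverse-CDF sampling, sorted largest first
sizes = sorted(((1.0 - random.random()) ** (-1.0 / zeta_true) for _ in range(n)),
               reverse=True)

# regress log(Rank - 1/2) on log(Size); the slope estimates -zeta
xs = [math.log(s) for s in sizes]
ys = [math.log(rank - 0.5) for rank in range(1, n + 1)]
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys))
         / sum((xi - mx) ** 2 for xi in xs))
zeta_hat = -slope
se = math.sqrt(2.0 / n) * zeta_hat  # asymptotic SE, not the OLS standard error
```

Note that the reported standard error follows the paper's asymptotic formula (2/n)^(1/2) ζ rather than the (misleading) OLS standard error on the slope.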
By:  Lind, Jo Thori; Mehlum, Halvor 
Abstract:  Nonlinear relationships are common in economic theory, and such relationships are also frequently tested empirically. We argue that the usual test of nonlinear relationships is flawed, and derive the appropriate test for a U-shaped relationship. Our test gives the exact necessary and sufficient conditions for a U shape in both finite samples and for a large class of models. 
Keywords:  U shape; hypothesis test; Kuznets curve; Fieller interval 
JEL:  C12 C20 
Date:  2007–09–10 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:4823&r=ecm 
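The population condition underlying such a test can be sketched for the quadratic case (an illustration of the sign condition only, not the authors' Fieller-interval inference; the simulated data and variable names are ours): a U shape on [x_L, x_H] requires the fitted slope to be negative at x_L and positive at x_H, not merely a positive squared term.

```python
import random

random.seed(0)
# simulate a U-shaped relationship: y = (x - 0.5)^2 + noise, x in [0, 1]
xs = [i / 199 for i in range(200)]
ys = [(x - 0.5) ** 2 + random.gauss(0, 0.02) for x in xs]

# least-squares quadratic fit y = b0 + b1*x + b2*x^2 via the 3x3 normal equations
n = len(xs)
sx = lambda p: sum(x ** p for x in xs)
sxy = lambda p: sum((x ** p) * y for x, y in zip(xs, ys))
A = [[n, sx(1), sx(2)], [sx(1), sx(2), sx(3)], [sx(2), sx(3), sx(4)]]
b = [sxy(0), sxy(1), sxy(2)]
for i in range(3):                      # Gaussian elimination (A is SPD)
    for r in range(i + 1, 3):
        f = A[r][i] / A[i][i]
        A[r] = [a - f * c for a, c in zip(A[r], A[i])]
        b[r] -= f * b[i]
beta = [0.0, 0.0, 0.0]
for i in (2, 1, 0):
    beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, 3))) / A[i][i]

b1, b2 = beta[1], beta[2]
slope_lo = b1 + 2 * b2 * min(xs)  # must be negative at the left endpoint
slope_hi = b1 + 2 * b2 * max(xs)  # must be positive at the right endpoint
is_u_shape = slope_lo < 0 < slope_hi
```

The paper's contribution is to turn this joint sign condition into a proper hypothesis test with correct size, which the naive t-test on b2 alone does not deliver.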
By:  Markus Frölich 
Abstract:  In this paper, the regression discontinuity design (RDD) is generalized to account for differences in observed covariates X in a fully nonparametric way. It is shown that the treatment effect can be estimated at the rate for one-dimensional nonparametric regression irrespective of the dimension of X. This extends the analysis of Hahn, Todd and van der Klaauw (2001) and Porter (2003), who examined identification and estimation without covariates, requiring assumptions that may often be too strong in applications. In many applications, individuals to the left and right of the threshold differ in observed characteristics: houses may be constructed in different ways across school attendance district boundaries; firms may differ around a threshold that implies certain legal changes; etc. Accounting for these differences in covariates is important to reduce bias. In addition, accounting for covariates may also reduce variance. Finally, estimation of quantile treatment effects (QTE) is also considered. 
Keywords:  Treatment effect, causal effect, complier, LATE, nonparametric regression, endogeneity 
JEL:  C13 C14 C21 
Date:  2007–08 
URL:  http://d.repec.org/n?u=RePEc:usg:dp2007:200732&r=ecm 
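The covariate-free baseline that the paper extends — local linear regression on each side of the threshold, difference of fitted values at the cutoff — can be sketched as follows (an illustration of the standard sharp RDD estimator, not Frölich's covariate-adjusted procedure; the data-generating process, bandwidth, and names are ours):

```python
import random

random.seed(2)
cutoff, h, tau = 0.0, 0.5, 2.0  # threshold, bandwidth, true treatment effect
x = [random.uniform(-1, 1) for _ in range(2000)]
y = [1.0 + 0.8 * xi + (tau if xi >= cutoff else 0.0) + random.gauss(0, 0.5)
     for xi in x]

def boundary_fit(side):
    """Local linear fit of y on (x - cutoff) with a triangular kernel,
    returning the fitted value at the threshold from the given side."""
    pts = [(xi - cutoff, yi) for xi, yi in zip(x, y)
           if abs(xi - cutoff) < h and ((xi >= cutoff) == (side == "right"))]
    w = [1 - abs(d) / h for d, _ in pts]
    sw = sum(w)
    mx = sum(wi * d for wi, (d, _) in zip(w, pts)) / sw
    my = sum(wi * yi for wi, (_, yi) in zip(w, pts)) / sw
    sxx = sum(wi * (d - mx) ** 2 for wi, (d, _) in zip(w, pts))
    sxy = sum(wi * (d - mx) * (yi - my) for wi, (d, yi) in zip(w, pts))
    slope = sxy / sxx
    return my - slope * mx  # weighted intercept = fitted value at d = 0

effect = boundary_fit("right") - boundary_fit("left")
```

The paper's point is that when covariates differ across the threshold, this unadjusted jump is biased; its nonparametric covariate adjustment removes that bias without slowing the convergence rate.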
By:  Andreas Beyer; Roger E. A. Farmer; Jérôme Henry; Massimiliano Marcellino 
Abstract:  DSGE models are characterized by the presence of expectations as explanatory variables. To use these models for policy evaluation, the econometrician must estimate the parameters of expectation terms. Standard estimation methods have several drawbacks, including possible lack or weakness of identification of the parameters, misspecification of the model due to omitted variables or parameter instability, and the common use of inefficient estimation methods. Several authors have raised concerns over the implications of using inappropriate instruments to achieve identification. In this paper we analyze the practical relevance of these problems and we propose to combine factor analysis for information extraction from large data sets and GMM to estimate the parameters of systems of forward-looking equations. Using these techniques, we evaluate the robustness of recent findings on the importance of forward-looking components in the equations of a standard New Keynesian model. 
JEL:  E5 E52 E58 
Date:  2007–09 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:13404&r=ecm 
By:  Stéphane Loisel (SAF  EA2429  Laboratoire de Science Actuarielle et Financière  [Université Claude Bernard  Lyon I]); Christian Mazza (Département de Mathématiques  [Université de Fribourg]); Didier Rullière (SAF  EA2429  Laboratoire de Science Actuarielle et Financière  [Université Claude Bernard  Lyon I]) 
Abstract:  The classical risk model is considered and a sensitivity analysis of finite-time ruin probabilities is carried out. We prove the weak convergence of a sequence of empirical finite-time ruin probabilities. So-called partly shifted risk processes are introduced and used to derive an explicit expression for the asymptotic variance of the considered estimator. This provides a clear representation of the influence function associated with finite-time ruin probabilities, giving a useful tool to quantify estimation risk under new regulations. 
Keywords:  Finite-time ruin probability; robustness; Solvency II; reliable ruin probability; asymptotic normality; influence function; partly shifted risk process; Estimation Risk Solvency Margin (ERSM) 
Date:  2007–08–29 
URL:  http://d.repec.org/n?u=RePEc:hal:papers:hal00168716_v1&r=ecm 
By:  Xiaohong Chen (Cowles Foundation, Yale University); Markus Reiss (University of Heidelberg) 
Abstract:  In this paper, we clarify the relations between the existing sets of regularity conditions for convergence rates of nonparametric indirect regression (NPIR) and nonparametric instrumental variables (NPIV) regression models. We establish minimax risk lower bounds in mean integrated squared error loss for the NPIR and the NPIV models under two basic regularity conditions that allow for both mildly ill-posed and severely ill-posed cases. We show that both a simple projection estimator for the NPIR model, and a sieve minimum distance estimator for the NPIV model, can achieve the minimax risk lower bounds, and are rate-optimal uniformly over a large class of structure functions, allowing for mildly ill-posed and severely ill-posed cases. 
Keywords:  Nonparametric instrumental regression, Nonparametric indirect regression, Statistical ill-posed inverse problems, Minimax risk lower bound, Optimal rate 
JEL:  C14 C30 
Date:  2007–09 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1626&r=ecm 
By:  Erik Hjalmarsson; Pär Österholm 
Abstract:  We investigate the properties of Johansen's (1988, 1991) maximum eigenvalue and trace tests for cointegration under the empirically relevant situation of near-integrated variables. Using Monte Carlo techniques, we show that in a system with near-integrated variables, the probability of reaching an erroneous conclusion regarding the cointegrating rank of the system is generally substantially higher than the nominal size. The risk of concluding that completely unrelated series are cointegrated is therefore non-negligible. The spurious rejection rate can be reduced by performing additional tests of restrictions on the cointegrating vector(s), although it is still substantially larger than the nominal size. 
Date:  2007–06–22 
URL:  http://d.repec.org/n?u=RePEc:imf:imfwpa:07/141&r=ecm 
By:  Johansson, Fredrik (Department of Economics) 
Abstract:  When a survey response mechanism depends on the variable of interest measured within the same survey and observed for only part of the sample, the situation is one of nonignorable nonresponse. Ignoring the nonresponse is likely to generate significant bias in the estimates. To solve this, one option is to model jointly the response mechanism and the variable of interest. Another option is to calibrate each observation with weights constructed from auxiliary data. In an application where earnings equations are estimated, these approaches are compared with reference estimates based on a large Swedish register-based data set without nonresponse. 
Keywords:  Earnings equations; Nonignorable response mechanism; Calibration; Selection; Full-information maximum likelihood 
JEL:  C15 C24 C34 C42 J31 
Date:  2007–08–22 
URL:  http://d.repec.org/n?u=RePEc:hhs:uunewp:2007_022&r=ecm 
By:  Agustín Maravall (Banco de España); Ana del Río (Banco de España) 
Abstract:  Maravall and del Río (2001) analyzed the time aggregation properties of the Hodrick-Prescott (HP) filter, which decomposes a time series into trend and cycle, for the case of annual, quarterly, and monthly data, and showed that the aggregate of the disaggregate components cannot, in general, be obtained exactly by direct application of an HP filter to the aggregate series. The present paper shows how, using several criteria, one can find HP decompositions for different levels of aggregation that provide similar results. The main criterion used is preservation of the period associated with the frequency at which the filter gain is ½; this criterion is intuitive and easy to apply. It is shown that the Ravn and Uhlig (2002) empirical rule turns out to be a first-order approximation to our criterion, and that alternative (more complex) criteria yield similar results. Moreover, the values of the HP filter parameter λ that provide approximately consistent results under aggregation are considerably robust with respect to the ARIMA model of the series. Aggregation is seen to work better in the case of temporal aggregation than in that of systematic sampling. Still, a word of caution is in order concerning the desirability of exact aggregation consistency. The paper concludes with a clarification concerning the questionable spuriousness of the cycles obtained with the HP filter. 
Keywords:  Time series, Filtering and Smoothing, Time aggregation, Trend estimation, Business cycles, ARIMA models 
JEL:  C22 C43 C82 E32 E66 
Date:  2007–09 
URL:  http://d.repec.org/n?u=RePEc:bde:wpaper:0728&r=ecm 
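The gain-½ criterion is easy to check numerically (an illustration of the idea, not the paper's code; the function name is ours). The HP cycle gain at frequency w is 4λ(1 − cos w)² / (1 + 4λ(1 − cos w)²), which equals ½ exactly when 4λ(1 − cos w)² = 1, so the associated period can be solved in closed form and compared across frequencies under the Ravn-Uhlig λ values:

```python
import math

def half_gain_period_years(lmbda, obs_per_year):
    """Period (in years) of the frequency at which the HP cycle gain is 1/2.
    The cycle gain 4L(1-cos w)^2 / (1 + 4L(1-cos w)^2) equals 1/2 when
    4L(1-cos w)^2 = 1, i.e. cos w = 1 - 1/(2*sqrt(L))."""
    w = math.acos(1.0 - 1.0 / (2.0 * math.sqrt(lmbda)))
    return (2.0 * math.pi / w) / obs_per_year

# Ravn-Uhlig rule: lambda scales with the 4th power of the observation frequency
periods = {
    "annual": half_gain_period_years(1600 / 4 ** 4, 1),    # lambda = 6.25
    "quarterly": half_gain_period_years(1600, 4),
    "monthly": half_gain_period_years(1600 * 3 ** 4, 12),  # lambda = 129600
}
```

All three λ values put the half-gain period near ten years, which is the sense in which the Ravn-Uhlig rule approximates the paper's aggregation criterion.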
By:  Siem, A.Y.D.; Hertog, D. den (Tilburg University, Center for Economic Research) 
Abstract:  In the field of the Design and Analysis of Computer Experiments (DACE), metamodels are used to approximate time-consuming simulations. These simulations often contain simulation-model errors in the output variables. In the construction of metamodels, these errors are often ignored, yet they may be magnified by the metamodel. Therefore, in this paper, we study the construction of Kriging models that are robust with respect to simulation-model errors. We introduce a robustness criterion to quantify the robustness of a Kriging model. Based on this criterion, two new methods to find robust Kriging models are introduced. We illustrate these methods with the approximation of the six-hump camelback function and a real-life example. Furthermore, we validate the two methods by simulating artificial perturbations. Finally, we consider the influence of the Design of Computer Experiments (DoCE) on the robustness of Kriging models. 
Keywords:  Kriging; robustness; simulation-model error 
JEL:  C60 
Date:  2007 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:200768&r=ecm 
By:  Ivan Jeliazkov (Department of Economics, University of CaliforniaIrvine); Dale J. Poirier (Department of Economics, University of CaliforniaIrvine) 
Abstract:  This paper analyzes the daily incidence of violence during the Second Intifada. We compare several alternative statistical models with different dynamic and structural stability characteristics, keeping modelling complexity to a minimum by maintaining only the assumption that the process under consideration is at most a second-order discrete Markov process. For the pooled data, the best model is one with asymmetric dynamics, where one Israeli and two Palestinian lags determine the conditional probability of violence. However, when we allow for structural change, the evidence strongly favors the hypothesis of structural instability across political regime subperiods, within which dynamics are generally weak. 
Keywords:  Bayesian; Conjugate prior; IsraeliPalestinian conflict; Marginal likelihood 
JEL:  C1 C2 
Date:  2007–09 
URL:  http://d.repec.org/n?u=RePEc:irv:wpaper:070801&r=ecm 
By:  Das, J.W.M.; Toepoel, V.; Soest, A.H.O. van (Tilburg University, Center for Economic Research) 
Abstract:  Over the past decades there has been an increasing use of panel surveys at the household or individual level, instead of independent cross-sections. Panel data have important advantages, but there are also two potential drawbacks: attrition bias and panel conditioning effects. Attrition bias can arise if respondents drop out of the panel nonrandomly, i.e., when attrition is correlated with a variable of interest. Panel conditioning arises if responses in one wave are influenced by participation in the previous wave(s). The experience of the previous interview(s) may affect the answers of respondents in a later interview on the same topic, such that their answers differ systematically from those of individuals interviewed for the first time. The literature has mainly focused on estimating attrition bias; less is known about panel conditioning effects. In this study we discuss how to disentangle the total bias in panel surveys into an attrition effect and a panel conditioning effect, and we develop a test for panel conditioning allowing for nonrandom attrition. First, we consider a fully nonparametric approach without any assumptions other than those on the sample design, leading to interval identification of the measures for the attrition and panel conditioning effects. Second, we analyze the proposed measures under additional assumptions concerning the attrition process, making it possible to obtain point estimates and standard errors for both the attrition bias and the panel conditioning effect. We illustrate our method on a variety of questions from two-wave surveys conducted in a Dutch household panel. We find a significant bias due to panel conditioning in knowledge questions, but not in other types of questions. The examples show that the bounds can be informative if the attrition rate is not too high. Point estimates of the panel conditioning effect do not vary much with the different assumptions on the attrition process. 
Keywords:  panel conditioning; attrition bias; measurement error; panel surveys 
JEL:  C42 C81 C93 
Date:  2007 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:200756&r=ecm 
By:  Michael Greenacre 
Abstract:  Power transformations of positive data tables, prior to applying the correspondence analysis algorithm, are shown to open up a family of methods with direct connections to the analysis of log-ratios. Two variations of this idea are illustrated. The first approach is simply to power the original data and perform a correspondence analysis – this method is shown to converge to unweighted log-ratio analysis as the power parameter tends to zero. The second approach is to apply the power transformation to the contingency ratios, that is, the values in the table relative to expected values based on the marginals – this method converges to weighted log-ratio analysis, or the spectral map. Two applications are described: first, a matrix of population genetic data which is inherently two-dimensional, and second, a larger cross-tabulation with higher dimensionality, from a linguistic analysis of several books. 
Keywords:  Box-Cox transformation, chi-square distance, contingency ratio, correspondence analysis, log-ratio analysis, power transformation, ratio data, singular value decomposition, spectral map 
JEL:  C19 C88 
Date:  2007–08 
URL:  http://d.repec.org/n?u=RePEc:upf:upfgen:1044&r=ecm 
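The limit driving this family of methods is the Box-Cox identity: (x^α − 1)/α → log x as α → 0. Since correspondence analysis double-centers its input, the constant and the 1/α rescaling wash out, which is why powering the data connects CA to log-ratio analysis. A minimal numerical check of the limit (our example value, not from the paper):

```python
import math

def boxcox(x, alpha):
    """Box-Cox power transform; tends to log(x) as alpha tends to 0."""
    return (x ** alpha - 1.0) / alpha

x = 3.7
# the gap to log(x) shrinks steadily as the power parameter alpha -> 0
gaps = [abs(boxcox(x, a) - math.log(x)) for a in (0.5, 0.1, 0.01, 0.001)]
```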
By:  Nikolaus Hautsch (Humboldt University Berlin and CFS) 
Abstract:  We introduce a multivariate multiplicative error model which is driven by component-specific observation-driven dynamics as well as a common latent autoregressive factor. The model is designed to explicitly account for (information-driven) common factor dynamics as well as idiosyncratic effects in the processes of high-frequency return volatilities, trade sizes, and trading intensities. The model is estimated by simulated maximum likelihood using efficient importance sampling. Analyzing five-minute data for four liquid stocks traded at the New York Stock Exchange, we find that volatilities, volumes, and intensities are driven by idiosyncratic dynamics as well as by a highly persistent common factor capturing most causal relations and cross-dependencies between the individual variables. This confirms economic theory and suggests more parsimonious specifications of high-dimensional trading processes. It turns out that common shocks affect return volatility and trading volume rather than trading intensity. 
JEL:  C15 C32 C52 
Date:  2007–09–04 
URL:  http://d.repec.org/n?u=RePEc:cfs:cfswop:wp200725&r=ecm 
By:  Stéphane Loisel (SAF  EA2429  Laboratoire de Science Actuarielle et Financière  [Université Claude Bernard  Lyon I]); Christian Mazza (Département de Mathématiques  [Université de Fribourg]); Didier Rullière (SAF  EA2429  Laboratoire de Science Actuarielle et Financière  [Université Claude Bernard  Lyon I]) 
Abstract:  We consider the classical risk model and carry out a sensitivity and robustness analysis of finite-time ruin probabilities. We provide algorithms to compute the related influence functions. We also prove the weak convergence of a sequence of empirical finite-time ruin probabilities, starting from zero initial reserve, toward a Gaussian random variable. We define the concept of a reliable finite-time ruin probability as a Value-at-Risk of the estimator of the finite-time ruin probability. To control this robust risk measure, an additional initial reserve is needed, called the Estimation Risk Solvency Margin (ERSM). We apply our results to show how portfolio experience could be rewarded by cut-offs in solvency capital requirements. An application to catastrophe contamination and numerical examples are also developed. 
Keywords:  Finite-time ruin probability; robustness; Solvency II; reliable ruin probability; asymptotic normality; influence function; Estimation Risk Solvency Margin (ERSM) 
Date:  2007–08–29 
URL:  http://d.repec.org/n?u=RePEc:hal:papers:hal00168714_v1&r=ecm 
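The object under study — the finite-time ruin probability in the classical compound Poisson model — can be estimated by crude Monte Carlo (a sketch of the quantity itself, not the authors' algorithms or their influence-function analysis; the parameter values and names are ours). Reserves evolve as u + ct minus accumulated claims, so ruin can only occur at claim instants:

```python
import random

random.seed(3)

def finite_time_ruin_prob(u, c=1.2, lam=1.0, mu=1.0, T=10.0, n_sim=20000):
    """Crude Monte Carlo estimate of the finite-time ruin probability in the
    classical compound Poisson model: initial reserve u, premium rate c,
    Poisson(lam) claim arrivals, Exp(mean mu) claim sizes, horizon T."""
    ruined = 0
    for _ in range(n_sim):
        t, claims = 0.0, 0.0
        while True:
            t += random.expovariate(lam)   # next claim arrival
            if t > T:
                break                      # survived the horizon
            claims += random.expovariate(1.0 / mu)
            if u + c * t - claims < 0:     # reserve negative at a claim instant
                ruined += 1
                break
    return ruined / n_sim

p0 = finite_time_ruin_prob(0.0)
p5 = finite_time_ruin_prob(5.0)  # a larger initial reserve lowers ruin risk
```

The paper's Value-at-Risk of this estimator (the "reliable" ruin probability) then quantifies how far such a Monte Carlo or empirical estimate may sit from the true probability, motivating the additional ERSM reserve.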
By:  Claude Lefèvre (Département de Mathématique  [Université Libre de Bruxelles]); Stéphane Loisel (SAF  EA2429  Laboratoire de Science Actuarielle et Financière  [Université Claude Bernard  Lyon I]) 
Abstract:  This paper is concerned with the problem of ruin in the classical compound binomial and compound Poisson risk models. Our primary purpose is to extend to those models an exact formula derived by Picard and Lefèvre (1997) for the probability of (non-)ruin within finite time. First, a standard method based on the ballot theorem and an argument of Seal type provides an initial (known) formula for that probability. Then, a concept of pseudo-distributions for the cumulated claim amounts, combined with some simple implications of the ballot theorem, leads to the desired formula. Two expressions for the (non-)ruin probability over an infinite horizon are also deduced as corollaries. Finally, an illustration within the framework of Solvency II is briefly presented. 
Keywords:  ruin probability; finite and infinite horizon; compound binomial model; compound Poisson model; ballot theorem; pseudo-distributions; Solvency II; Value-at-Risk 
Date:  2007–08–31 
URL:  http://d.repec.org/n?u=RePEc:hal:papers:hal00168958_v1&r=ecm 
By:  Einmahl, J.H.J.; Khmaladze, E.V. (Tilburg University, Center for Economic Research) 
Abstract:  AMS 2000 subject classifications: 60F05, 60F17, 60G55, 62G30. 
Date:  2007 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:200766&r=ecm 