
on Econometrics 
By:  Kirill Evdokimov; Yuichi Kitamura; Taisuke Otsu 
Abstract:  This paper considers robust estimation of moment condition models with time series data. Researchers frequently use moment condition models in dynamic econometric analysis. These models are particularly useful when one wishes to avoid fully parameterizing the dynamics in the data. It is nevertheless desirable to use an estimation method that is robust against deviations from the model assumptions. For example, measurement errors can contaminate observations and thereby lead to such deviations. This is an important issue for time series data: in addition to conventional sources of mismeasurement, it is known that an inappropriate treatment of seasonality can cause serially correlated measurement errors. Efficiency is also a critical issue since time series sample sizes are often limited. This paper addresses these problems. Our estimator has three features: (i) it achieves an asymptotically optimal robustness property, (ii) it treats time series dependence nonparametrically by a data blocking technique, and (iii) it is asymptotically as efficient as the optimally weighted GMM if indeed the model assumptions hold. A small-scale simulation experiment suggests that our estimator performs favorably compared to other estimators, including GMM, thereby supporting our theoretical findings. (An illustrative sketch of the blocking idea follows this entry.)
Keywords:  Blocking, Generalized Empirical Likelihood, Hellinger Distance, Robustness, Efficient Estimation, Mixing 
JEL:  C14 
Date:  2014–12 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:/2014/579&r=ecm 
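The data-blocking device in feature (ii) can be illustrated with a minimal sketch: moment functions evaluated at consecutive observations are averaged within non-overlapping blocks, so that the blocked moments are approximately independent and can then be passed to a GEL or minimum-Hellinger-distance criterion. The AR(1) data, the moment function and the block length below are illustrative assumptions, not the authors' specification.

```python
import numpy as np

def block_moments(g, block_len):
    """Average rows of the moment-function matrix g (T x k) within
    non-overlapping blocks of length block_len."""
    T = (g.shape[0] // block_len) * block_len
    return g[:T].reshape(-1, block_len, g.shape[1]).mean(axis=1)

# Illustrative data: AR(1) series and the moment condition E[x_{t-1}(x_t - theta*x_{t-1})] = 0
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()

theta = 0.5
g = (x[:-1] * (x[1:] - theta * x[:-1]))[:, None]   # (T-1) x 1 matrix of moment evaluations
gb = block_moments(g, block_len=10)                # blocked moments, roughly independent across blocks
print(gb.shape, gb.mean())
```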
By:  Jia Chen; Degui Li; Hua Liang; Suojin Wang 
Abstract:  In this article, we study a partially linear single-index model for longitudinal data under a general framework which includes both the sparse and dense longitudinal data cases. A semiparametric estimation method based on the combination of local linear smoothing and generalized estimating equations (GEE) is introduced to estimate the two parameter vectors as well as the unknown link function. Under some mild conditions, we derive the asymptotic properties of the proposed parametric and nonparametric estimators in different scenarios, from which we find that the convergence rates and asymptotic variances of the proposed estimators for sparse longitudinal data can be substantially different from those for dense longitudinal data. We also discuss the estimation of the covariance (or weight) matrices involved in the semiparametric GEE method. Furthermore, we provide some numerical studies to illustrate our methodology and theory.
Keywords:  GEE, local linear smoothing, longitudinal data, semiparametric estimation, single-index models.
JEL:  C14 C13 C33 
Date:  2014–04 
URL:  http://d.repec.org/n?u=RePEc:yor:yorken:14/26&r=ecm 
By:  Einmahl, J.H.J. (Tilburg University, Center For Economic Research); de Haan, L.F.M. (Tilburg University, Center For Economic Research); Zhou, C. 
Abstract:  We extend classical extreme value theory to non-identically distributed observations. When the distribution tails are proportional, much of extreme value statistics remains valid. The proportionality function for the tails can be estimated nonparametrically along with the (common) extreme value index. Joint asymptotic normality of both estimators is shown; they are asymptotically independent. We develop tests for the proportionality function and for the validity of the model. We show through simulations the good performance of the tests for tail homoscedasticity. The results are applied to stock market returns. A main tool is the weak convergence of a weighted sequential tail empirical process.
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:tiu:tiucen:19952ae425ff4e1b8627d21d7a62375b&r=ecm 
By:  Tobias Fissler (University of Bern); Mark Podolskij (Aarhus University and CREATES) 
Abstract:  In this paper, we present a test for the maximal rank of the volatility process in continuous diffusion models observed with noise. Such models are typically applied in mathematical finance, where latent price processes are corrupted by microstructure noise at ultra-high frequencies. Using high-frequency observations, we construct a test statistic for the maximal rank of the time-varying stochastic volatility process. Our methodology is based upon a combination of a matrix perturbation approach and pre-averaging. We show the asymptotic mixed normality of the test statistic and obtain a consistent testing procedure. (An illustrative sketch of the pre-averaging step follows this entry.)
Keywords:  continuous Itô semimartingales, high frequency data, microstructure noise, rank testing, stable convergence 
JEL:  C10 C13 C14 
Date:  2014–12–10 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201452&r=ecm 
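The pre-averaging step mentioned in the abstract can be sketched on its own (the matrix-perturbation rank test itself is not reproduced here): noisy high-frequency returns are locally averaged with a triangular weight so that the i.i.d. microstructure noise is damped. The weight function, the window length of order sqrt(n), and the simulated price process are common textbook choices assumed for illustration.

```python
import numpy as np

def preaverage(returns, kn):
    """Pre-averaged returns: weighted sums of kn - 1 consecutive noisy returns,
    with the triangular weight g(x) = min(x, 1 - x)."""
    g = np.minimum(np.arange(1, kn) / kn, 1 - np.arange(1, kn) / kn)
    n_out = len(returns) - kn + 1
    return np.array([g @ returns[i:i + kn - 1] for i in range(n_out)])

# Illustrative noisy high-frequency prices: efficient log-price plus i.i.d. microstructure noise
rng = np.random.default_rng(1)
n = 23400
eff = np.cumsum(0.01 * rng.standard_normal(n) / np.sqrt(n))   # efficient log-price
obs = eff + 5e-4 * rng.standard_normal(n)                     # observed (noisy) log-price
r = np.diff(obs)
kn = int(np.sqrt(n))                                          # window length of order sqrt(n)
rbar = preaverage(r, kn)
print(rbar.shape)
```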
By:  Timothy B. Armstrong (Cowles Foundation, Yale University); Michal Kolesar (Princeton University) 
Abstract:  Kernel-based estimators are often evaluated at multiple bandwidths as a form of sensitivity analysis. However, if in the reported results a researcher selects the bandwidth based on this analysis, the associated confidence intervals may not have correct coverage, even if the estimator is unbiased. This paper proposes a simple adjustment that gives correct coverage in such situations: replace the Normal quantile with a critical value that depends only on the kernel and on the ratio of the maximum to the minimum bandwidth the researcher has entertained. We tabulate these critical values and quantify the loss in coverage for conventional confidence intervals. For a range of relevant cases, a conventional 95% confidence interval has coverage between 70% and 90%, and our adjustment amounts to replacing the conventional critical value 1.96 with a number between 2.2 and 2.8. A Monte Carlo study confirms that our approach gives accurate coverage in finite samples. We illustrate our approach with two empirical applications. (An illustrative sketch of the adjustment follows this entry.)
Keywords:  Nonparametric estimation, Multiple testing, Regression discontinuity 
JEL:  C01 C14 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1961&r=ecm 
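A minimal sketch of how the proposed adjustment would be used in practice. The point estimate, standard error, bandwidth range and adjusted critical value c_adj below are placeholder numbers; in an application the critical value would be read from the paper's tables for the kernel and the ratio of the largest to the smallest bandwidth actually examined.

```python
# Report the estimate at the chosen bandwidth, but replace the usual 1.96 with a
# snooping-adjusted critical value for the kernel and the ratio h_max / h_min examined.
# All numbers below (estimate, se, c_adj) are purely illustrative placeholders.

estimate, se = 0.42, 0.15      # kernel estimate and standard error at the reported bandwidth
h_min, h_max = 2.0, 8.0        # range of bandwidths examined during the sensitivity analysis
c_adj = 2.5                    # hypothetical critical value for this kernel and h_max / h_min = 4

conventional = (estimate - 1.96 * se, estimate + 1.96 * se)
adjusted = (estimate - c_adj * se, estimate + c_adj * se)
print("conventional 95% CI:  ", conventional)
print("snooping-adjusted CI: ", adjusted)
```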
By:  Peter C.B. Phillips (Cowles Foundation, Yale University); Chirok Han (Korea University) 
Abstract:  This note derives the correct limit distributions of the Anderson and Hsiao (1981) levels and differences instrumental variable estimators, provides comparisons showing that the levels IV estimator has uniformly smaller variance asymptotically as the cross-section (n) and time-series (T) sample sizes tend to infinity, and compares these results with those of the first-difference least squares (FDLS) estimator. (An illustrative sketch of the two estimators follows this entry.)
Keywords:  Dynamic panel, IV estimation, Levels and difference instruments 
JEL:  C23 C36 
Date:  2014–12 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1963&r=ecm 
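A minimal simulation sketch of the two Anderson-Hsiao-type estimators for a panel AR(1) with fixed effects: the first-differenced equation is estimated by IV using either the lagged level y_{i,t-2} (the "levels" instrument) or the lagged difference Δy_{i,t-2} (the "differences" instrument). The data-generating process and sample sizes are illustrative assumptions; the sketch does not reproduce the note's asymptotic analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
n, T, rho = 200, 8, 0.5

# Panel AR(1) with individual fixed effects: y_it = rho * y_{i,t-1} + a_i + e_it
a = rng.standard_normal(n)
y = np.zeros((n, T + 1))
for t in range(1, T + 1):
    y[:, t] = rho * y[:, t - 1] + a + rng.standard_normal(n)

dy = np.diff(y, axis=1)                       # first differences, column j = dy_{j+1}

def iv(z, x, d):
    """Simple IV estimate of rho in d = rho * x + error using instrument z."""
    return np.sum(z * d) / np.sum(z * x)

# Differenced equation for t = 3..T: dy_t = rho * dy_{t-1} + de_t
d_t    = dy[:, 2:]                            # dy_{i,t}
d_lag  = dy[:, 1:-1]                          # dy_{i,t-1}
z_lev  = y[:, 1:-2]                           # levels instrument   y_{i,t-2}
z_diff = dy[:, :-2]                           # difference instrument dy_{i,t-2}

print("levels IV:     ", iv(z_lev, d_lag, d_t))
print("differences IV:", iv(z_diff, d_lag, d_t))
```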
By:  He, Y. (Tilburg University, Center For Economic Research); Einmahl, J.H.J. (Tilburg University, Center For Economic Research) 
Abstract:  Consider the extreme quantile region, induced by the halfspace depth function HD, of the form Q = {x ∈ R^d : HD(x, P) ≤ β}, such that P(Q) = p for a given, very small p > 0. This region can hardly be estimated through a fully nonparametric procedure, since the sample halfspace depth is 0 outside the convex hull of the data. Using Extreme Value Theory, we construct a natural, semiparametric estimator of this quantile region and prove a refined consistency result. A simulation study clearly demonstrates the good performance of our estimator. We use the procedure for risk management by applying it to stock market returns. (An illustrative sketch of the sample halfspace depth follows this entry.)
Keywords:  Extreme value statistics; halfspace depth; multivariate quantile; outlier detection; rare event; tail dependence 
JEL:  C13 C14 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:tiu:tiucen:d6529c8a88654c03a064a63fd5097ffd&r=ecm 
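A minimal sketch of the sample halfspace depth (approximated over a finite grid of directions), illustrating the point made in the abstract that the sample depth is zero outside the convex hull of the data, which is why a fully nonparametric estimate of an extreme quantile region fails. The extreme-value extrapolation developed in the paper is not reproduced here, and the bivariate normal data are an illustrative assumption.

```python
import numpy as np

def sample_halfspace_depth(x, data, n_dir=720):
    """Approximate halfspace depth of a 2-d point x w.r.t. data (n x 2):
    the minimum, over a grid of directions u, of the fraction of
    observations lying in the closed halfspace {y : u'(y - x) >= 0}."""
    angles = np.linspace(0, 2 * np.pi, n_dir, endpoint=False)
    dirs = np.column_stack([np.cos(angles), np.sin(angles)])   # n_dir x 2
    proj = (data - x) @ dirs.T                                  # n x n_dir
    return (proj >= 0).mean(axis=0).min()

rng = np.random.default_rng(3)
data = rng.standard_normal((500, 2))
print(sample_halfspace_depth(np.array([0.0, 0.0]), data))    # deep central point: depth near 0.5
print(sample_halfspace_depth(np.array([10.0, 0.0]), data))   # outside the convex hull: depth 0
```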
By:  Stefan Bruder 
Abstract:  Path forecasts, defined as sequences of individual forecasts generated by vector autoregressions, are widely used in applied work. It has been recognized that a thorough econometric analysis requires, besides the path forecast, a joint prediction region that contains the whole future path with a pre-specified coverage probability. The forecasting literature offers several different methods for computing joint prediction regions, which are either bootstrap-based or rely on asymptotic results. The aim of this paper is to investigate the finite-sample performance of three methods for constructing joint prediction regions in various scenarios via Monte Carlo simulations. (An illustrative sketch of one simple joint prediction region follows this entry.)
Keywords:  Path forecast, joint prediction region, Monte Carlo simulation 
JEL:  C15 C32 C53 
Date:  2014–11 
URL:  http://d.repec.org/n?u=RePEc:zur:econwp:181&r=ecm 
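As one concrete example of a joint prediction region, the sketch below builds a Bonferroni-type region for an AR(1) path forecast: each of the H per-horizon intervals is widened to level 1 - alpha/H, so the whole path is covered with probability at least 1 - alpha. This is only a simple benchmark of the asymptotic type discussed in the abstract; the AR(1) model and Gaussian forecast errors are illustrative assumptions, and the specific methods compared in the paper are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

# Illustrative AR(1) data and OLS fit (a scalar stand-in for a VAR)
phi_true, n, H, alpha = 0.7, 300, 8, 0.10
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.standard_normal()
phi = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
sigma2 = np.mean((y[1:] - phi * y[:-1]) ** 2)

# Path forecast and per-horizon forecast-error standard deviations
path = np.array([phi ** h * y[-1] for h in range(1, H + 1)])
sd = np.sqrt(sigma2 * np.array([np.sum(phi ** (2 * np.arange(h))) for h in range(1, H + 1)]))

# Bonferroni joint prediction region: each horizon gets level 1 - alpha/H
z = norm.ppf(1 - alpha / (2 * H))
lower, upper = path - z * sd, path + z * sd
for h in range(H):
    print(f"h={h + 1}: [{lower[h]:.2f}, {upper[h]:.2f}]")
```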
By:  Joyce P. Jacobsen (Department of Economics, Wesleyan University); Laurence M. Levin (VISA); Zachary Tausanovitch (Network for Teaching Entrepreneurship) 
Abstract:  Economists’ wariness of data mining may be misplaced, even in cases where economic theory provides a well-specified model for estimation. We discuss how new data mining/ensemble modeling software, for example the program TreeNet, can be used to create predictive models. We then show how, for a standard labor economics problem, the estimation of wage equations, TreeNet outperforms standard OLS regression in terms of lower prediction error. Ensemble modeling also resists the tendency to overfit data. We conclude by considering additional types of economic problems that are well-suited to the use of data mining techniques. (An illustrative sketch with an open-source boosting stand-in follows this entry.)
Keywords:  data mining, ensemble modeling 
JEL:  C14 C51 J31 
Date:  2014–12 
URL:  http://d.repec.org/n?u=RePEc:wes:weswpa:2014003&r=ecm 
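TreeNet is a commercial implementation of stochastic gradient boosting, so the sketch below uses scikit-learn's GradientBoostingRegressor as an open-source stand-in (an assumption, not the authors' software) and compares its out-of-sample prediction error with OLS on a synthetic wage equation with a nonlinear experience profile and an interaction term.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic "wage equation" data with a quadratic experience profile and an interaction
rng = np.random.default_rng(5)
n = 5000
educ = rng.integers(8, 21, n)
exper = rng.uniform(0, 40, n)
female = rng.integers(0, 2, n)
log_wage = (0.9 + 0.08 * educ + 0.04 * exper - 0.0007 * exper ** 2
            - 0.15 * female + 0.01 * female * educ + 0.3 * rng.standard_normal(n))
X = np.column_stack([educ, exper, female])

X_tr, X_te, y_tr, y_te = train_test_split(X, log_wage, test_size=0.3, random_state=0)

ols = LinearRegression().fit(X_tr, y_tr)                         # linear-in-levels benchmark
gbm = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)  # boosting stand-in for TreeNet

print("OLS test MSE:     ", mean_squared_error(y_te, ols.predict(X_te)))
print("Boosting test MSE:", mean_squared_error(y_te, gbm.predict(X_te)))
```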
By:  Vladimir Spokoiny; Mayya Zhilova 
Abstract:  A multiplier bootstrap procedure for the construction of likelihood-based confidence sets is considered for finite samples and possible model misspecification. Theoretical results justify the bootstrap consistency for a small or moderate sample size and allow one to control the impact of the parameter dimension p: the bootstrap approximation works if p^3/n is small. The main result about bootstrap consistency continues to apply even if the underlying parametric model is misspecified, under the so-called Small Modeling Bias condition. In the case when the true model deviates significantly from the considered parametric family, the bootstrap procedure is still applicable, but it becomes a bit conservative: the size of the constructed confidence sets is increased by the modeling bias. We illustrate the results with numerical examples for misspecified constant and logistic regressions. (An illustrative sketch of the multiplier bootstrap follows this entry.)
Keywords:  likelihood-based bootstrap confidence set, misspecified model, finite sample size, multiplier bootstrap, weighted bootstrap, Gaussian approximation, Pinsker's inequality 
JEL:  C13 C15 
Date:  2014–11 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2014067&r=ecm 
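A minimal sketch of a multiplier (weighted) bootstrap for a likelihood-based confidence set in the simplest possible case: a scalar mean estimated with a Gaussian quasi-likelihood on possibly non-Gaussian data. The individual log-likelihood terms are reweighted with i.i.d. multiplier weights of mean one and variance one, and the quantile of the bootstrapped likelihood-ratio statistics serves as the critical value. The weight distribution, nominal level and data are illustrative choices and do not reproduce the paper's general framework.

```python
import numpy as np

rng = np.random.default_rng(6)
y = rng.exponential(1.0, size=80) - 1.0          # skewed data for a (misspecified) Gaussian quasi-likelihood
n, alpha, B = len(y), 0.05, 2000

def loglik(theta, y, w=None):
    """Gaussian quasi-log-likelihood (sigma = 1), optionally with multiplier weights."""
    w = np.ones_like(y) if w is None else w
    return -0.5 * np.sum(w * (y - theta) ** 2)

theta_hat = y.mean()

# Multiplier bootstrap: reweight the individual log-likelihood terms with i.i.d.
# weights of mean 1 and variance 1, and record the weighted likelihood-ratio statistic.
stats = np.empty(B)
for b in range(B):
    w = rng.normal(1.0, 1.0, size=n)
    theta_b = np.sum(w * y) / np.sum(w)          # maximizer of the weighted quasi-likelihood
    stats[b] = loglik(theta_b, y, w) - loglik(theta_hat, y, w)
crit = np.quantile(stats, 1 - alpha)

# Likelihood-based confidence set {theta : L(theta_hat) - L(theta) <= crit};
# for this quadratic quasi-likelihood it is an interval around theta_hat.
half_width = np.sqrt(2 * crit / n)
print("confidence set:", (theta_hat - half_width, theta_hat + half_width))
```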
By:  Bachoc, Francois; Leeb, Hannes; Pötscher, Benedikt M. 
Abstract:  We consider inference post-model-selection in linear regression. In this setting, Berk et al. (2013) recently introduced a class of confidence sets, the so-called PoSI intervals, that cover a certain non-standard quantity of interest with a user-specified minimal coverage probability, irrespective of the model selection procedure that is being used. In this paper, we generalize the PoSI intervals to post-model-selection predictors. 
Keywords:  Inference post-model-selection, confidence intervals, optimal post-model-selection predictors, non-standard targets, linear regression 
JEL:  C1 C2 C52 
Date:  2014–12 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:60643&r=ecm 
By:  Kulaksizoglu, Tamer 
Abstract:  This paper replicates Cheung and Lai (1995), who use response surface analysis to obtain approximate finite-sample critical values, adjusted for lag order and sample size, for the augmented Dickey-Fuller test. We obtain results that are quite close to theirs. We provide the Ox source code. We also provide a Windows application with a graphical user interface, which makes obtaining custom critical values quite simple. (An illustrative sketch of the response-surface approach follows this entry.)
Keywords:  Finite-sample critical value; Monte Carlo; Response surface 
JEL:  C12 C15 
Date:  2014–08–31 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:60456&r=ecm 
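A minimal sketch of the response-surface idea for the Dickey-Fuller test without augmentation lags: simulate the test statistic under the unit-root null for several sample sizes, record the finite-sample 5% quantile, and regress that quantile on 1/T and 1/T^2 so that critical values can be interpolated for any T. The functional form, number of replications and sample sizes are illustrative choices; the paper (following Cheung and Lai) additionally adjusts for the lag order of the augmented test.

```python
import numpy as np

rng = np.random.default_rng(7)

def df_stat(y):
    """t-statistic on rho in the Dickey-Fuller regression dy_t = rho * y_{t-1} + e_t (no constant)."""
    dy, ylag = np.diff(y), y[:-1]
    rho = np.sum(ylag * dy) / np.sum(ylag ** 2)
    resid = dy - rho * ylag
    s2 = np.sum(resid ** 2) / (len(dy) - 1)
    return rho / np.sqrt(s2 / np.sum(ylag ** 2))

# Step 1: finite-sample 5% critical values of the DF statistic for several sample sizes T
sizes, reps = [25, 50, 100, 200, 400], 5000
cv = []
for T in sizes:
    stats = [df_stat(np.cumsum(rng.standard_normal(T + 1))) for _ in range(reps)]
    cv.append(np.quantile(stats, 0.05))

# Step 2: response surface cv(T) ~ b0 + b1/T + b2/T^2 fitted by OLS
Tarr = np.array(sizes, dtype=float)
X = np.column_stack([np.ones_like(Tarr), 1 / Tarr, 1 / Tarr ** 2])
beta = np.linalg.lstsq(X, np.array(cv), rcond=None)[0]
print("response-surface coefficients:", beta)
print("interpolated 5% critical value for T = 75:",
      beta @ np.array([1.0, 1 / 75, 1 / 75 ** 2]))
```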
By:  Giacomini, Raffaella; Kitagawa, Toru 
Abstract:  We propose a method for conducting inference on impulse responses in structural vector autoregressions (SVARs) when the impulse response is not point identified because the number of equality restrictions one can credibly impose is not sufficient for point identification and/or one imposes sign restrictions. We proceed in three steps. We first define the object of interest as the identified set for a given impulse response at a given horizon and discuss how inference is simple when the identified set is convex, as one can limit attention to the set's upper and lower bounds. We then provide easily verifiable conditions on the type of equality and sign restrictions that guarantee convexity. These cover most cases of practical interest, with exceptions including sign restrictions on multiple shocks and equality restrictions that make the impulse response locally, but not globally, identified. Second, we show how to conduct inference on the identified set. We adopt a robust Bayes approach that considers the class of all possible priors for the non-identified aspects of the model and delivers a class of associated posteriors. We summarize the posterior class by reporting the "posterior mean bounds", which can be interpreted as an estimator of the identified set. We also consider a "robustified credible region", which is a measure of the posterior uncertainty about the identified set. The two intervals can be obtained using a computationally convenient numerical procedure. Third, we show that the posterior bounds converge asymptotically to the identified set if the set is convex. If the identified set is not convex, our posterior bounds can be interpreted as an estimator of the convex hull of the identified set. Finally, a useful diagnostic tool delivered by our procedure is the posterior belief about the plausibility of the imposed identifying restrictions. (An illustrative sketch of the rotation-sampling step follows this entry.)
Keywords:  ambiguous beliefs; credible region; partial causal ordering; posterior bounds 
JEL:  C11 C54 
Date:  2014–12 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:10287&r=ecm 
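A minimal sketch of the numerical step behind the posterior mean bounds in a bivariate static setting: for each posterior draw of the reduced-form covariance matrix, many orthogonal rotations of its Cholesky factor are sampled, those satisfying a sign restriction are kept, and the smallest and largest admissible values of the target impulse response are recorded; averaging these bounds over the posterior draws approximates the posterior mean bounds. The inverse-Wishart posterior, the single sign restriction and all parameter values are illustrative assumptions, not the authors' empirical setup.

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(8)

# Posterior for the reduced-form covariance matrix of a bivariate system,
# summarized here by an inverse-Wishart distribution (an illustrative assumption).
S, nu = np.array([[1.0, 0.4], [0.4, 0.8]]) * 200, 200

def random_orthogonal(rng):
    """Haar-distributed 2x2 orthogonal matrix via the QR decomposition."""
    q, r = np.linalg.qr(rng.standard_normal((2, 2)))
    return q * np.sign(np.diag(r))

lo, hi = [], []
for _ in range(200):                                 # posterior draws of Sigma
    Sigma = invwishart.rvs(df=nu, scale=S, random_state=rng)
    P = np.linalg.cholesky(Sigma)
    accepted = []
    for _ in range(500):                             # candidate impact matrices P @ Q
        irf0 = P @ random_orthogonal(rng)[:, 0]      # impact responses to shock 1
        if np.all(irf0 >= 0):                        # sign restriction: both variables rise on impact
            accepted.append(irf0[1])                 # target: impact response of variable 2
    if accepted:
        lo.append(min(accepted))
        hi.append(max(accepted))

print("posterior mean bounds for the target response:", np.mean(lo), np.mean(hi))
```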
By:  Loretta Mastroeni; Giuseppe D'Acquisto; Maurizio Naldi 
Abstract:  Credit risk, associated with borrowers defaulting on their debts, is an ever-growing source of concern for lenders. The presence of correlation among defaults may be described by the t-copula model. However, the typically large number of variables involved calls for a simulation approach. A simulation method based on the Cross-Entropy (CE) technique is proposed here as an alternative to the non-adaptive Importance Sampling (IS) techniques presented in the literature so far, the main advantage of CE being that it allows one to deal easily with a wider range of probability models than ad hoc IS. The method is validated through a comparison of its results with the crude Monte Carlo and Exponential Twist approaches. The proposed Cross-Entropy technique is shown to provide accurate results even when the sample size is several orders of magnitude smaller than the inverse of the probability to be estimated. (An illustrative sketch of the CE idea follows this entry.)
Keywords:  Credit risk, Cross-Entropy, Copula models 
JEL:  C15 G32 
Date:  2014–09 
URL:  http://d.repec.org/n?u=RePEc:rtr:wpaper:0193&r=ecm 
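A minimal sketch of the Cross-Entropy idea for a rare default-loss probability in a one-factor t-copula model: the common factor is drawn from a mean-shifted normal proposal, the shift is updated in CE iterations from the largest-loss ("elite") samples, and the final importance-sampling estimate reweights by the likelihood ratio. The portfolio size, copula parameters, loss level and the choice to tilt only the common factor are illustrative assumptions, not the paper's exact model.

```python
import numpy as np
from scipy.stats import t as t_dist, chi2

rng = np.random.default_rng(9)
m, rho, nu, p_def = 100, 0.5, 6, 0.01     # obligors, factor loading, t-copula dof, marginal default prob.
loss_level = 20                           # rare event: more than 20 defaults out of 100
thr = t_dist.ppf(p_def, nu)               # default threshold for the latent t variable

def n_defaults(z):
    """Portfolio default counts for a vector of common normal factors z (one-factor t-copula)."""
    n = len(z)
    eps = rng.standard_normal((n, m))
    w = np.sqrt(chi2.rvs(nu, size=(n, 1), random_state=rng) / nu)
    latent = (rho * z[:, None] + np.sqrt(1 - rho ** 2) * eps) / w
    return (latent < thr).sum(axis=1)

# Cross-Entropy iterations: shift the mean of the common factor towards the rare event
mu, n_pilot, elite_frac = 0.0, 5000, 0.05
for _ in range(5):
    z = rng.normal(mu, 1.0, n_pilot)
    L = n_defaults(z)
    cutoff = min(np.quantile(L, 1 - elite_frac), loss_level)
    elite = L >= cutoff                                    # elite (largest-loss) samples
    lr = np.exp(-mu * z + 0.5 * mu ** 2)                   # likelihood ratio N(0,1) / N(mu,1)
    mu = np.sum(lr[elite] * z[elite]) / np.sum(lr[elite])  # CE update of the tilting parameter

# Final importance-sampling estimate of P(defaults > loss_level)
z = rng.normal(mu, 1.0, 20000)
L = n_defaults(z)
lr = np.exp(-mu * z + 0.5 * mu ** 2)
print("tilt mu:", round(mu, 2), " IS estimate:", np.mean(lr * (L > loss_level)))
```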
By:  HAFNER, Christian M. (Université catholique de Louvain, CORE and ISBA, Belgium); PREMINGER, Arie (Ben Gurion University) 
Abstract:  The Tobit model (censored regression model) is an important basic model appearing in many applications in economics. In this paper we consider a duration Tobit model in which a duration variable, which counts the number of times the data have been censored, is included as a covariate. We show that in this case the dependent variable eventually becomes degenerate, which makes the asymptotic Fisher information matrix singular, rendering the standard methods of asymptotic inference inapplicable. We provide a simulation study and an empirical application to support our results. 
Keywords:  limited dependence, censoring, duration, labor supply 
JEL:  C24 J64 
Date:  2014–06–11 
URL:  http://d.repec.org/n?u=RePEc:cor:louvco:2014013&r=ecm 
By:  Gaurab Aryal 
Abstract:  In this paper I study nonparametric identification of a screening model when consumers have multivariate private information about their preferences. In particular, I consider the multiproduct nonlinear pricing model developed by Rochet and Choné (1998) and determine conditions under which the cost function and the joint density of preferences are identified. When the utility function is nonlinear, the model cannot be identified with data from only one market. If, however, there is an exogenous binary cost shifter, then we can identify the utility function. Moreover, if we have some consumer characteristics that affect the utility function, then the model is over-identified, which leads to a specification test that can be used to check the validity of the model. I also characterize all testable restrictions of the model on the data (demand and prices). 
Date:  2014–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1411.6250&r=ecm 
By:  Maren Duvendack; Richard W. Palmer-Jones; W. Robert Reed (University of Canterbury) 
Abstract:  This study reports on various aspects of replication research in economics. It includes (i) a brief history of data sharing and replication; (ii) the results of the authors’ survey administered to the editors of all 333 “Economics” journals listed in the Web of Science in December 2013; (iii) an analysis of 155 replication studies published in peer-reviewed economics journals from 1977 to 2014; (iv) a discussion of the future of replication research in economics; and (v) observations on how replications can be better integrated into research efforts to address problems associated with publication bias and other Type I error phenomena. 
Keywords:  Replication, data sharing, publication bias 
JEL:  A1 B4 
Date:  2014–12–03 
URL:  http://d.repec.org/n?u=RePEc:cbt:econwp:14/26&r=ecm 
By:  Florian Huber; Jesus Crespo-Cuaresma; Martin Feldkircher 
Abstract:  This paper puts forward a Bayesian version of the global vector autoregressive model (BGVAR) that accommodates international linkages across countries in a system of vector autoregressions. We compare the predictive performance of BGVAR models at the one- and four-quarter-ahead forecast horizons for standard macroeconomic variables (real GDP, inflation, the real exchange rate and interest rates). Our results show that taking international linkages into account improves forecasts of inflation, real GDP and the real exchange rate, while for interest rates forecasts from univariate benchmark models remain difficult to beat. Our Bayesian version of the GVAR model outperforms forecasts of the standard cointegrated VAR for practically all variables and at both forecast horizons. The comparison of prior elicitation strategies indicates that the use of the stochastic search variable selection (SSVS) prior tends to improve out-of-sample predictions systematically. This finding is confirmed by density forecast measures, for which the predictive ability of the SSVS prior is the best among all priors entertained for all variables at all forecasting horizons. (An illustrative sketch of the SSVS prior follows this entry.)
Keywords:  Global vector autoregressions; forecasting; prior sensitivity analysis; 
JEL:  C32 F44 E32 O54 
Date:  2014–11 
URL:  http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa14p25&r=ecm 
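A minimal sketch of the stochastic search variable selection (SSVS) prior in the simplest setting, a single linear regression rather than a full BGVAR: each coefficient has a two-component normal ("spike"/"slab") prior governed by a Bernoulli inclusion indicator, and a Gibbs sampler cycles through the coefficients, the indicators and the error variance. The prior scales and the toy data are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(10)

# Toy data: 10 regressors, only the first 3 matter
n, k = 200, 10
X = rng.standard_normal((n, k))
beta_true = np.r_[1.0, -0.8, 0.5, np.zeros(k - 3)]
y = X @ beta_true + rng.standard_normal(n)

# SSVS prior: beta_j ~ N(0, tau0^2) if gamma_j = 0 ("spike"), N(0, tau1^2) if gamma_j = 1 ("slab")
tau0, tau1, q = 0.01, 2.0, 0.5           # spike scale, slab scale, prior inclusion probability
a0, b0 = 2.0, 2.0                        # inverse-gamma prior on the error variance

gamma = np.ones(k, dtype=int)
sigma2 = 1.0
draws = []
for it in range(3000):
    # 1. beta | gamma, sigma2, y  ~  Normal
    Dinv = np.diag(1.0 / np.where(gamma == 1, tau1 ** 2, tau0 ** 2))
    V = np.linalg.inv(X.T @ X / sigma2 + Dinv)
    mean = V @ X.T @ y / sigma2
    beta = rng.multivariate_normal(mean, V)
    # 2. gamma_j | beta_j  ~  Bernoulli
    p1 = q * np.exp(-0.5 * beta ** 2 / tau1 ** 2) / tau1
    p0 = (1 - q) * np.exp(-0.5 * beta ** 2 / tau0 ** 2) / tau0
    gamma = (rng.uniform(size=k) < p1 / (p1 + p0)).astype(int)
    # 3. sigma2 | beta, y  ~  Inverse-Gamma
    resid = y - X @ beta
    sigma2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + 0.5 * resid @ resid))
    if it >= 1000:
        draws.append(gamma)

print("posterior inclusion probabilities:", np.mean(draws, axis=0).round(2))
```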