nep-ecm New Economics Papers
on Econometrics
Issue of 2014‒12‒29
eighteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Robust estimation of moment condition models with weakly dependent data By Kirill Evdokimov; Yuichi Kitamura; Taisuke Otsu
  2. Semiparametric GEE Analysis in Partially Linear Single-Index Models for Longitudinal Data By Jia Chen; Degui Li; Hua Liang; Suojin Wang
  3. Statistics of Heteroscedastic Extremes By Einmahl, J.H.J.; de Haan, L.F.M.; Zhou, C.
  4. Testing the maximal rank of the volatility process for continuous diffusions observed with noise By Tobias Fissler; Mark Podolskij
  5. A Simple Adjustment for Bandwidth Snooping By Timothy B. Armstrong; Michal Kolesar
  6. True Limit Distributions of the Anderson-Hsiao IV Estimators in Panel Autoregression By Peter C.B. Phillips; Chirok Han
  7. Estimation of Extreme Depth-Based Quantile Regions By He, Y.; Einmahl, J.H.J.
  8. Comparing several methods to compute joint prediction regions for path forecasts generated by vector autoregressions By Stefan Bruder
  9. Comparing Standard Regression Modeling to Ensemble Modeling: How Data Mining Software Can Improve Economists' Predictions By Joyce P. Jacobsen; Laurence M. Levin; Zachary Tausanovitch
  10. Bootstrap confidence sets under model misspecification By Vladimir Spokoiny; Mayya Zhilova; ;
  11. Valid confidence intervals for post-model-selection predictors By Bachoc, Francois; Leeb, Hannes; Pötscher, Benedikt M.
  12. Lag Order and Critical Values of the Augmented Dickey-Fuller Test: A Replication By Kulaksizoglu, Tamer
  13. Inference about Non-Identified SVARs By Giacomini, Raffaella; Kitagawa, Toru
  14. Evaluation of Credit Risk Under Correlated Defaults: The Cross-Entropy Simulation Approach By Loretta Mastroeni; Giuseppe D'Acquisto; Maurizio Naldi
  15. A note on the Tobit model in the presence of a duration variable By HAFNER, Christian M.; PREMINGER, Arie
  16. Identifying A Screening Model with Multidimensional Private Information By Gaurab Aryal
  17. Replications in Economics: A Progress Report By Maren Duvendack; Richard W. Palmer-Jones; W. Robert Reed
  18. Forecasting with Bayesian Global Vector Autoregressions By Florian Huber; Jesus Crespo-Cuaresma; Martin Feldkircher

  1. By: Kirill Evdokimov; Yuichi Kitamura; Taisuke Otsu
    Abstract: This paper considers robust estimation of moment condition models with time series data. Researchers frequently use moment condition models in dynamic econometric analysis. These models are particularly useful when one wishes to avoid fully parameterizing the dynamics in the data. It is nevertheless desirable to use an estimation method that is robust against deviations from the model assumptions. For example, measurement errors can contaminate observations and thereby lead to such deviations. This is an important issue for time series data: in addition to conventional sources of mismeasurement, it is known that an inappropriate treatment of seasonality can cause serially correlated measurement errors. Efficiency is also a critical issue since time series sample sizes are often limited. This paper addresses these problems. Our estimator has three features: (i) it achieves an asymptotically optimal robustness property, (ii) it treats time series dependence nonparametrically by a data blocking technique, and (iii) it is asymptotically as efficient as the optimally weighted GMM if indeed the model assumptions hold. A small-scale simulation experiment suggests that our estimator performs favorably compared to other estimators including GMM, thereby supporting our theoretical findings.
    Keywords: Blocking, Generalized Empirical Likelihood, Hellinger Distance, Robustness, Efficient Estimation, Mixing
    JEL: C14
    Date: 2014–12
  2. By: Jia Chen; Degui Li; Hua Liang; Suojin Wang
    Abstract: In this article, we study a partially linear single-index model for longitudinal data under a general framework which includes both the sparse and dense longitudinal data cases. A semiparametric estimation method based on the combination of local linear smoothing and generalized estimating equations (GEE) is introduced to estimate the two parameter vectors as well as the unknown link function. Under some mild conditions, we derive the asymptotic properties of the proposed parametric and nonparametric estimators in different scenarios, from which we find that the convergence rates and asymptotic variances of the proposed estimators for sparse longitudinal data would be substantially different from those for dense longitudinal data. We also discuss the estimation of the covariance (or weight) matrices involved in the semiparametric GEE method. Furthermore, we provide some numerical studies to illustrate our methodology and theory.
    Keywords: GEE, local linear smoothing, longitudinal data, semiparametric estimation, single-index models.
    JEL: C14 C13 C33
    Date: 2014–04
  3. By: Einmahl, J.H.J. (Tilburg University, Center For Economic Research); de Haan, L.F.M. (Tilburg University, Center For Economic Research); Zhou, C.
    Abstract: We extend classical extreme value theory to non-identically distributed observations. When the distribution tails are proportional, much of extreme value statistics remains valid. The proportionality function for the tails can be estimated nonparametrically along with the (common) extreme value index. Joint asymptotic normality of both estimators is shown; they are asymptotically independent. We develop tests for the proportionality function and for the validity of the model. We show through simulations the good performance of tests for tail homoscedasticity. The results are applied to stock market returns. A main tool is the weak convergence of a weighted sequential tail empirical process.
    Date: 2014
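The (common) extreme value index in the abstract above is classically estimated from the largest order statistics, for instance with the Hill estimator. A minimal numpy sketch for illustration only (not the paper's estimator; the function name and data-generating process are ours):

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the extreme value index gamma from the k largest observations."""
    xs = np.sort(x)[::-1]          # descending order statistics
    logs = np.log(xs[:k + 1])      # k+1 largest values
    return np.mean(logs[:k] - logs[k])

rng = np.random.default_rng(0)
# Classical Pareto with tail index alpha = 2, so the true index is gamma = 1/alpha = 0.5
sample = rng.pareto(2.0, size=20000) + 1.0
gamma_hat = hill_estimator(sample, k=500)
```

With exact Pareto tails the estimate sits close to 0.5; the heteroscedastic setting of the paper additionally estimates a proportionality function for the tails, which this sketch does not attempt.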
  4. By: Tobias Fissler (University of Bern); Mark Podolskij (Aarhus University and CREATES)
    Abstract: In this paper, we present a test for the maximal rank of the volatility process in continuous diffusion models observed with noise. Such models are typically applied in mathematical finance, where latent price processes are corrupted by microstructure noise at ultra high frequencies. Using high frequency observations we construct a test statistic for the maximal rank of the time varying stochastic volatility process. Our methodology is based upon a combination of a matrix perturbation approach and pre-averaging. We will show the asymptotic mixed normality of the test statistic and obtain a consistent testing procedure.
    Keywords: continuous Itô semimartingales, high frequency data, microstructure noise, rank testing, stable convergence
    JEL: C10 C13 C14
    Date: 2014–12–10
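Pre-averaging, one ingredient of the methodology above, smooths noisy high-frequency returns over local windows so microstructure noise is dampened before squaring. A self-contained sketch under simplifying assumptions (Brownian efficient price, i.i.d. Gaussian noise, weight g(s) = min(s, 1-s)); the bias-corrected estimator follows the standard pre-averaging recipe in the literature and is not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10000
sigma, omega = 0.2, 0.01            # diffusion volatility, noise std
dt = 1.0 / n

# Efficient price X (Brownian motion) observed with i.i.d. microstructure noise
x = np.cumsum(sigma * np.sqrt(dt) * rng.standard_normal(n + 1))
y = x + omega * rng.standard_normal(n + 1)
dy = np.diff(y)

# Naive realized variance is swamped by the noise term of order 2*n*omega^2
naive_rv = np.sum(dy ** 2)

# Pre-averaging with weight g(s) = min(s, 1-s) over windows of length kn
kn = int(np.sqrt(n))                # theta = 1
j = np.arange(1, kn)
g = np.minimum(j / kn, 1 - j / kn)
psi1 = kn * np.sum(np.diff(np.concatenate(([0.0], g, [0.0]))) ** 2)
psi2 = np.sum(g ** 2) / kn
ybar = np.array([np.sum(g * dy[i:i + kn - 1]) for i in range(n - kn + 2)])

# Bias-corrected pre-averaged estimator of integrated variance (true value 0.04)
preav_rv = (np.sum(ybar ** 2) / (psi2 * kn)
            - psi1 / (2 * psi2 * kn ** 2) * np.sum(dy ** 2))
```

The naive sum of squared returns lands near 2 here, while the pre-averaged, bias-corrected statistic stays near the true integrated variance of 0.04; the paper builds its rank test from a matrix version of such pre-averaged statistics.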
  5. By: Timothy B. Armstrong (Cowles Foundation, Yale University); Michal Kolesar (Princeton University)
    Abstract: Kernel-based estimators are often evaluated at multiple bandwidths as a form of sensitivity analysis. However, if in the reported results, a researcher selects the bandwidth based on this analysis, the associated confidence intervals may not have correct coverage, even if the estimator is unbiased. This paper proposes a simple adjustment that gives correct coverage in such situations: replace the Normal quantile with a critical value that depends only on the kernel and ratio of the maximum and minimum bandwidths the researcher has entertained. We tabulate these critical values and quantify the loss in coverage for conventional confidence intervals. For a range of relevant cases, a conventional 95% confidence interval has coverage between 70% and 90%, and our adjustment amounts to replacing the conventional critical value 1.96 with a number between 2.2 and 2.8. A Monte Carlo study confirms that our approach gives accurate coverage in finite samples. We illustrate our approach with two empirical applications.
    Keywords: Nonparametric estimation, Multiple testing, Regression discontinuity
    JEL: C01 C14
    Date: 2014
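The flavor of the adjustment can be sketched as follows: report intervals at every bandwidth in the entertained grid, but replace 1.96 with a larger critical value depending on the kernel and on the ratio h_max/h_min. The value 2.6 below is a placeholder within the 2.2-2.8 range quoted in the abstract, not a number looked up in the paper's tables, and the estimator and data-generating process are ours:

```python
import numpy as np

def nw_interval(x, y, x0, h, crit):
    """Local-average estimate at x0 with a uniform kernel and bandwidth h,
    with a confidence interval using critical value `crit`."""
    mask = np.abs(x - x0) <= h
    m = y[mask].mean()
    se = y[mask].std(ddof=1) / np.sqrt(mask.sum())
    return m - crit * se, m + crit * se

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 2000)
y = np.sin(2 * x) + 0.3 * rng.standard_normal(2000)

# Grid with h_max / h_min = 8; c_adj stands in for the tabulated critical value
bandwidths = [0.05, 0.1, 0.2, 0.4]
c_adj = 2.6
cis = {h: nw_interval(x, y, 0.0, h, c_adj) for h in bandwidths}
```

Because every bandwidth in the grid uses the same enlarged critical value, the researcher may pick any of the four reported intervals after seeing the results without invalidating coverage, which is exactly the snooping problem the paper addresses.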
  6. By: Peter C.B. Phillips (Cowles Foundation, Yale University); Chirok Han (Korea University)
    Abstract: This note derives the correct limit distributions of the Anderson and Hsiao (1981) levels and differences instrumental variable estimators, provides comparisons showing that the levels IV estimator has uniformly smaller variance asymptotically as the cross section (n) and time series (T) sample sizes tend to infinity, and compares these results with those of the first difference least squares (FDLS) estimator.
    Keywords: Dynamic panel, IV estimation, Levels and difference instruments
    JEL: C23 C36
    Date: 2014–12
  7. By: He, Y. (Tilburg University, Center For Economic Research); Einmahl, J.H.J. (Tilburg University, Center For Economic Research)
    Abstract: Consider the extreme quantile region, induced by the halfspace depth function HD, of the form Q = {x ∈ R^d : HD(x; P) ≤ β}, such that P(Q) = p for a given, very small p > 0. This region can hardly be estimated through a fully nonparametric procedure since the sample halfspace depth is 0 outside the convex hull of the data. Using Extreme Value Theory, we construct a natural, semiparametric estimator of this quantile region and prove a refined consistency result. A simulation study clearly demonstrates the good performance of our estimator. We use the procedure for risk management by applying it to stock market returns.
    Keywords: Extreme value statistics; halfspace depth; multivariate quantile; outlier detection; rare event; tail dependence
    JEL: C13 C14
    Date: 2014
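The obstacle that motivates the semiparametric approach is easy to see: the sample halfspace (Tukey) depth is exactly 0 outside the convex hull of the data, so empirical depth alone cannot reach regions with probability p below 1/n. A small numpy illustration (a directional-scan approximation in 2D; not the paper's estimator):

```python
import numpy as np

def halfspace_depth(point, data, n_dirs=720):
    """Approximate sample halfspace (Tukey) depth in 2D: the minimum fraction
    of data points in a closed halfplane containing `point`, scanned over
    a grid of directions."""
    angles = np.linspace(0, 2 * np.pi, n_dirs, endpoint=False)
    dirs = np.column_stack([np.cos(angles), np.sin(angles)])
    proj = (data - point) @ dirs.T            # projections onto each direction
    frac = np.mean(proj >= 0, axis=0)         # mass of each closed halfplane
    return frac.min()

rng = np.random.default_rng(3)
data = rng.standard_normal((2000, 2))
depth_center = halfspace_depth(np.zeros(2), data)         # near 1/2 at the center
depth_far = halfspace_depth(np.array([10.0, 0.0]), data)  # 0 outside the hull
```

The depth at the center is close to its maximal value of 1/2, while any point outside the convex hull gets depth exactly 0, which is where the extreme value extrapolation of the paper takes over.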
  8. By: Stefan Bruder
    Abstract: Path forecasts, defined as sequences of individual forecasts, generated by vector autoregressions are widely used in applied work. It has been recognized that a thorough econometric analysis requires, besides the path forecast, a joint prediction region that contains the whole future path with a prespecified coverage probability. The forecasting literature offers several different methods of computing joint prediction regions, where the existing methods are either bootstrap based or rely on asymptotic results. The aim of this paper is to investigate the finite-sample performance of three methods for constructing joint prediction regions in various scenarios via Monte Carlo simulations.
    Keywords: Path forecast, joint prediction region, Monte Carlo simulation
    JEL: C15 C32 C53
    Date: 2014–11
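A common simulation-based construction, in the spirit of the methods compared in the paper (the exact procedures differ), calibrates a single critical value so that a band around the central path contains the entire simulated future path with the desired probability; pointwise 1.96 bands undercover the joint path. A sketch with simulated AR(1) paths standing in for bootstrap draws:

```python
import numpy as np

rng = np.random.default_rng(4)
phi, sigma, horizon, n_paths = 0.8, 1.0, 8, 5000

# Simulate future paths of an AR(1) from y_T = 0 (stand-in for bootstrap paths)
shocks = sigma * rng.standard_normal((n_paths, horizon))
paths = np.zeros((n_paths, horizon))
prev = np.zeros(n_paths)
for h in range(horizon):
    prev = phi * prev + shocks[:, h]
    paths[:, h] = prev

center = paths.mean(axis=0)
scale = paths.std(axis=0)

# Calibrate one critical value c so that center +/- c*scale contains the
# entire path for 95% of the simulated paths (a max-t style calibration)
max_t = np.abs((paths - center) / scale).max(axis=1)
c_joint = np.quantile(max_t, 0.95)

# Pointwise 95% bands use 1.96 and undercover the joint path
coverage_joint = np.mean((np.abs(paths - center) <= c_joint * scale).all(axis=1))
coverage_pointwise = np.mean((np.abs(paths - center) <= 1.96 * scale).all(axis=1))
```

The calibrated critical value exceeds 1.96, and the joint coverage of the naive pointwise band falls visibly short of 95%, which is the gap the competing joint-region methods aim to close.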
  9. By: Joyce P. Jacobsen (Department of Economics, Wesleyan University); Laurence M. Levin (VISA); Zachary Tausanovitch (Network for Teaching Entrepreneurship)
    Abstract: Economists’ wariness of data mining may be misplaced, even in cases where economic theory provides a well-specified model for estimation. We discuss how new data mining/ensemble modeling software, for example the program TreeNet, can be used to create predictive models. We then show how for a standard labor economics problem, the estimation of wage equations, TreeNet outperforms standard OLS regression in terms of lower prediction error. Ensemble modeling also resists the tendency to overfit data. We conclude by considering additional types of economic problems that are well-suited to use of data mining techniques.
    Keywords: data mining, ensemble modeling
    JEL: C14 C51 J31
    Date: 2014–12
  10. By: Vladimir Spokoiny; Mayya Zhilova; ;
    Abstract: A multiplier bootstrap procedure for the construction of likelihood-based confidence sets is considered for finite samples and possible model misspecification. Theoretical results justify the bootstrap consistency for a small or moderate sample size and allow one to control the impact of the parameter dimension p: the bootstrap approximation works if p³/n is small. The main result about bootstrap consistency continues to apply even if the underlying parametric model is misspecified, under the so-called Small Modeling Bias condition. In the case when the true model deviates significantly from the considered parametric family, the bootstrap procedure is still applicable, but it becomes somewhat conservative: the size of the constructed confidence sets is increased by the modeling bias. We illustrate the results with numerical examples for misspecified constant and logistic regressions.
    Keywords: likelihood-based bootstrap confidence set, misspecified model, finite sample size, multiplier bootstrap, weighted bootstrap, Gaussian approximation, Pinsker's inequality
    JEL: C13 C15
    Date: 2014–11
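The multiplier-bootstrap idea can be sketched in its simplest case, a normal working model for the mean fitted to possibly non-normal data: the log-likelihood contributions are reweighted with i.i.d. multipliers with mean and variance one, and the spread of the reweighted estimators calibrates the confidence set. Everything below is an illustration of the generic device, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.exponential(scale=2.0, size=300)   # the true model need not be normal

# MLE of the mean under the (misspecified) normal working model
theta_hat = x.mean()

# Multiplier bootstrap: reweight log-likelihood contributions with i.i.d.
# weights w_i satisfying E[w_i] = Var[w_i] = 1 (exponential multipliers here)
B = 2000
w = rng.exponential(scale=1.0, size=(B, 300))
theta_boot = (w * x).sum(axis=1) / w.sum(axis=1)  # reweighted-likelihood maximizers

# Bootstrap confidence set for the mean
lo, hi = np.quantile(theta_boot - theta_hat, [0.025, 0.975])
ci = (theta_hat - hi, theta_hat - lo)
```

For the misspecified normal working model the target (the mean) is still well defined, so the interval remains sensible; the paper's contribution is to quantify exactly when and how well such approximations work in finite samples and under modeling bias.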
  11. By: Bachoc, Francois; Leeb, Hannes; Pötscher, Benedikt M.
    Abstract: We consider inference post-model-selection in linear regression. In this setting, Berk et al.(2013) recently introduced a class of confidence sets, the so-called PoSI intervals, that cover a certain non-standard quantity of interest with a user-specified minimal coverage probability, irrespective of the model selection procedure that is being used. In this paper, we generalize the PoSI intervals to post-model-selection predictors.
    Keywords: Inference post-model-selection, confidence intervals, optimal post-model-selection predictors, non-standard targets, linear regression
    JEL: C1 C2 C52
    Date: 2014–12
  12. By: Kulaksizoglu, Tamer
    Abstract: This paper replicates Cheung and Lai (1995), who use response surface analysis to obtain approximate finite-sample critical values adjusted for lag order and sample size for the augmented Dickey-Fuller test. We obtain results that are quite close to their results. We provide the Ox source code. We also provide a Windows application with a graphical user interface, which makes obtaining custom critical values quite simple.
    Keywords: Finite-sample critical value; Monte Carlo; Response surface
    JEL: C12 C15
    Date: 2014–08–31
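The response-surface idea is easy to reproduce in miniature: simulate the Dickey-Fuller t-statistic for several sample sizes, take the finite-sample 5% quantiles, and regress them on powers of 1/T; the intercept then approximates the asymptotic critical value (about -2.86 for the specification with a constant). The numpy sketch below uses no lag augmentation (k = 0), unlike the lag-order-adjusted surfaces of Cheung and Lai (1995), and all names are ours:

```python
import numpy as np

def df_tstats(T, reps, rng):
    """t-statistics of the Dickey-Fuller regression (constant, no lags)
    for driftless random-walk data of length T."""
    e = rng.standard_normal((reps, T))
    y = np.cumsum(e, axis=1)
    ylag, dy = y[:, :-1], np.diff(y, axis=1)
    ylag_c = ylag - ylag.mean(axis=1, keepdims=True)
    dy_c = dy - dy.mean(axis=1, keepdims=True)
    sxx = (ylag_c ** 2).sum(axis=1)
    b = (ylag_c * dy_c).sum(axis=1) / sxx
    resid = dy_c - b[:, None] * ylag_c
    s2 = (resid ** 2).sum(axis=1) / (T - 3)
    return b / np.sqrt(s2 / sxx)

rng = np.random.default_rng(6)
sizes = np.array([25, 50, 100, 200, 400])
cv5 = np.array([np.quantile(df_tstats(T, 4000, rng), 0.05) for T in sizes])

# Response surface: CV(T) ~ b0 + b1/T + b2/T^2, fitted by OLS
X = np.column_stack([np.ones_like(sizes, dtype=float), 1 / sizes, 1 / sizes ** 2])
coef, *_ = np.linalg.lstsq(X, cv5, rcond=None)
cv_asymptotic = coef[0]
```

Plugging any T into the fitted surface yields a custom finite-sample critical value, which is the convenience the replicated paper, and the authors' Windows application, provide at scale.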
  13. By: Giacomini, Raffaella; Kitagawa, Toru
    Abstract: We propose a method for conducting inference on impulse responses in structural vector autoregressions (SVARs) when the impulse response is not point identified because the number of equality restrictions one can credibly impose is not sufficient for point identification and/or one imposes sign restrictions. We proceed in three steps. We first define the object of interest as the identified set for a given impulse response at a given horizon and discuss how inference is simple when the identified set is convex, as one can limit attention to the set's upper and lower bounds. We then provide easily verifiable conditions on the type of equality and sign restrictions that guarantee convexity. These cover most cases of practical interest, with exceptions including sign restrictions on multiple shocks and equality restrictions that make the impulse response locally, but not globally, identified. Second, we show how to conduct inference on the identified set. We adopt a robust Bayes approach that considers the class of all possible priors for the non-identified aspects of the model and delivers a class of associated posteriors. We summarize the posterior class by reporting the "posterior mean bounds", which can be interpreted as an estimator of the identified set. We also consider a "robustified credible region" which is a measure of the posterior uncertainty about the identified set. The two intervals can be obtained using a computationally convenient numerical procedure. Third, we show that the posterior bounds converge asymptotically to the identified set if the set is convex. If the identified set is not convex, our posterior bounds can be interpreted as an estimator of the convex hull of the identified set. Finally, a useful diagnostic tool delivered by our procedure is the posterior belief about the plausibility of the imposed identifying restrictions.
    Keywords: ambiguous beliefs; credible region; partial causal ordering; posterior bounds
    JEL: C11 C54
    Date: 2014–12
  14. By: Loretta Mastroeni; Giuseppe D'Acquisto; Maurizio Naldi
    Abstract: Credit risk, associated with borrowers defaulting on their debts, is an ever-growing source of concern for lenders. The presence of correlation among defaults may be described by the t-copula model. However, the typically large number of variables involved calls for a simulation approach. A simulation method based on the Cross-Entropy (CE) technique is proposed here as an alternative to the non-adaptive Importance Sampling (IS) techniques presented so far in the literature, the main advantage of CE being that it easily handles a wider range of probability models than ad hoc IS. The method is validated through a comparison of its results with the crude Monte Carlo and Exponential Twist approaches. The proposed Cross-Entropy technique is shown to provide accurate results even when the sample size is several orders of magnitude smaller than the inverse of the probability to be estimated.
    Keywords: Credit risk, Cross-Entropy, Copula models
    JEL: C15 G32
    Date: 2014–09
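The cross-entropy idea can be illustrated on a toy rare-event problem: estimate P(X > 4) for standard normal X by adaptively shifting the sampling mean toward the rare region and then importance sampling under the final tilted density. The correlated-defaults application in the paper is far richer; everything below, including the function name and tuning constants, is our own sketch:

```python
import numpy as np

def cross_entropy_rare_prob(gamma, n=10000, rho=0.1, seed=7):
    """Estimate p = P(X > gamma) for X ~ N(0,1) with the cross-entropy method:
    adaptively shift the sampling mean toward the rare region, then run
    importance sampling under the final tilted density N(mu, 1)."""
    rng = np.random.default_rng(seed)
    mu = 0.0
    for _ in range(20):                        # CE updating iterations
        x = mu + rng.standard_normal(n)
        level = min(gamma, np.quantile(x, 1 - rho))   # elite threshold
        w = np.exp(-mu * x + 0.5 * mu ** 2)    # likelihood ratio N(0,1)/N(mu,1)
        elite = x >= level
        mu = np.sum(w[elite] * x[elite]) / np.sum(w[elite])  # CE mean update
        if level >= gamma:
            break
    x = mu + rng.standard_normal(n)
    w = np.exp(-mu * x + 0.5 * mu ** 2)
    return np.mean(w * (x >= gamma))           # importance-sampling estimate

p_hat = cross_entropy_rare_prob(4.0)           # exact value is about 3.2e-5
```

With only 10,000 draws per stage the estimate is accurate to a few percent for a probability of order 3e-5, while crude Monte Carlo of the same size would usually see no exceedances at all; this is the sample-size advantage the paper documents for default probabilities.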
  15. By: HAFNER, Christian M. (Université catholique de Louvain, CORE and ISBA, Belgium); PREMINGER, Arie (Ben Gurion University)
    Abstract: The Tobit model (censored regression model) is an important basic model appearing in many applications in economics. In this paper we consider a duration Tobit model in which a duration variable that counts the number of times the data has been censored is included as a covariate. We show that in this case the dependent variable eventually becomes degenerate, which makes the asymptotic Fisher information matrix singular, rendering the standard methods of asymptotic inference inapplicable. We provide a simulation study and an empirical application to support our results.
    Keywords: limited dependence, censoring, duration, labor supply
    JEL: C24 J64
    Date: 2014–06–11
  16. By: Gaurab Aryal
    Abstract: In this paper I study nonparametric identification of a screening model when consumers have multivariate private information about their preferences. In particular, I consider the multiproduct nonlinear pricing model developed by Rochet and Choné (1998), and determine conditions under which the cost function and the joint density of preferences are identified. When the utility function is nonlinear the model cannot be identified with data from only one market. If there is, however, an exogenous binary cost shifter, then we can identify the utility function. Moreover, if we have some consumer characteristics that affect the utility function, then the model is overidentified, which leads to a specification test that can be used to check the validity of the model. I also characterize all testable restrictions of the model on the data (demand and prices).
    Date: 2014–11
  17. By: Maren Duvendack; Richard W. Palmer-Jones; W. Robert Reed (University of Canterbury)
    Abstract: This study reports on various aspects of replication research in economics. It includes (i) a brief history of data sharing and replication; (ii) the results of the authors’ survey administered to the editors of all 333 “Economics” journals listed in Web of Science in December 2013; (iii) an analysis of 155 replication studies that have been published in peer-reviewed economics journals from 1977 to 2014; (iv) a discussion of the future of replication research in economics; and (v) observations on how replications can be better integrated into research efforts to address problems associated with publication bias and other Type I error phenomena.
    Keywords: Replication, data sharing, publication bias
    JEL: A1 B4
    Date: 2014–12–03
  18. By: Florian Huber; Jesus Crespo-Cuaresma; Martin Feldkircher
    Abstract: This paper puts forward a Bayesian version of the global vector autoregressive model (B-GVAR) that accommodates international linkages across countries in a system of vector autoregressions. We compare the predictive performance of B-GVAR models for the one- and four-quarter-ahead forecast horizons for standard macroeconomic variables (real GDP, inflation, the real exchange rate and interest rates). Our results show that taking international linkages into account improves forecasts of inflation, real GDP and the real exchange rate, while for interest rates, forecasts of univariate benchmark models remain difficult to beat. Our Bayesian version of the GVAR model outperforms forecasts of the standard cointegrated VAR for practically all variables and at both forecast horizons. The comparison of prior elicitation strategies indicates that the use of the stochastic search variable selection (SSVS) prior tends to improve out-of-sample predictions systematically. This finding is confirmed by density forecast measures, for which the predictive ability of the SSVS prior is the best among all priors entertained for all variables at all forecasting horizons.
    Keywords: Global vector autoregressions; forecasting; prior sensitivity analysis
    JEL: C32 F44 E32 O54
    Date: 2014–11

This nep-ecm issue is ©2014 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.