nep-ecm New Economics Papers
on Econometrics
Issue of 2013–09–28
nine papers chosen by
Sune Karlsson
Orebro University

  1. Sparse Linear Models and Two-Stage Estimation in High-Dimensional Settings with Possibly Many Endogenous Regressors By Zhu, Ying
  2. Which Parametric Model for Conditional Skewness? By Bruno Feunou; Mohammad R. Jahan-Parvar; Roméo Tedongap
  3. Testing the Martingale Hypothesis By Peter C.B. Phillips; Sainan Jin
  4. Default Probability Estimation via Pair Copula Constructions By Luciana Dalla Valle; Maria Elena De Giuli; Claudio Manelli; Claudia Tarantola
  5. Credible Granger-Causality Inference with Modest Sample Lengths: A Cross-Sample Validation Approach By Richard A. Ashley; Kwok Ping Tsang
  6. A Comparison of the Finite Sample Properties of Selection Rules of Factor Numbers in Large Datasets By GUO-FITOUSSI, Liang
  7. Bandwidth choice for average derivative estimation By HÄRDLE, Wolfgang; HART, Jeffrey; MARRON, Steve; TSYBAKOV, Alexander
  8. Consistently estimating link speed using sparse GPS data with measured errors By Fadaei Oshyani , Masoud; Sundberg , Marcus; Karlström , Anders
  9. Inference of Bidders’ Risk Attitudes in Ascending Auctions with Endogenous Entry By Hanming Fang; Xun Tang

  1. By: Zhu, Ying
    Abstract: This paper explores the validity of the two-stage estimation procedure for sparse linear models in high-dimensional settings with possibly many endogenous regressors. In particular, the number of endogenous regressors in the main equation and the number of instruments in the first-stage equations can grow with and exceed the sample size n. The analysis concerns the exact sparsity case, i.e., the maximum number of non-zero components in the vectors of parameters in the first-stage equations, k1, and the number of non-zero components in the vector of parameters in the second-stage equation, k2, are allowed to grow with n but slowly relative to n. I consider a high-dimensional version of the two-stage least squares estimator: one obtains the fitted regressors from the first-stage regression by a least squares estimator with l_1-regularization (the Lasso or Dantzig selector) when the first-stage regression involves a large number of instruments relative to n, and then constructs a similar estimator using these fitted regressors in the second-stage regression. The main theoretical results of this paper are non-asymptotic bounds from which I establish sufficient scaling conditions on the sample size for estimation consistency in l_2-norm and variable-selection consistency (i.e., the two-stage high-dimensional estimators correctly select the non-zero coefficients in the main equation with high probability). A technical issue regarding the so-called "restricted eigenvalue (RE) condition" for estimation consistency and the "mutual incoherence (MI) condition" for selection consistency arises in the two-stage estimation from allowing the number of regressors in the main equation to exceed n, and this paper provides analysis to verify these RE and MI conditions. Depending on the underlying assumptions imposed, the upper bounds on the l_2-error and the sample size required for these consistency results differ by factors involving k1 and/or k2. Simulations are conducted to gain insight into the finite-sample performance of the high-dimensional two-stage estimator.
    Keywords: High-dimensional statistics; Lasso; sparse linear models; endogeneity; two-stage estimation
    JEL: C1 C13 C31 C36
    Date: 2013–09–17
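    A minimal sketch of the two-stage idea described above, using scikit-learn's Lasso for both stages. The data-generating design (one instrument per endogenous regressor, endogeneity through a shared error, the penalty levels) is entirely illustrative and not taken from the paper:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p_iv, p_x = 200, 300, 50              # instruments (p_iv) exceed the sample size n

# Simulated data: each endogenous regressor X_j loads on one instrument Z_j,
# and a shared error makes the regressors endogenous in the outcome equation.
Z = rng.standard_normal((n, p_iv))
Pi = np.zeros((p_iv, p_x))
Pi[np.arange(p_x), np.arange(p_x)] = 2.0      # sparse first-stage coefficients (k1 = 1)
U = rng.standard_normal((n, p_x))
X = Z @ Pi + U
beta = np.zeros(p_x)
beta[:3] = 1.0                                # sparse second-stage coefficients (k2 = 3)
y = X @ beta + U[:, 0] + rng.standard_normal(n)   # endogeneity through U[:, 0]

# Stage 1: Lasso of each endogenous regressor on all instruments
X_hat = np.column_stack(
    [Lasso(alpha=0.1).fit(Z, X[:, j]).predict(Z) for j in range(p_x)]
)

# Stage 2: Lasso of y on the fitted regressors
stage2 = Lasso(alpha=0.1).fit(X_hat, y)
selected = np.flatnonzero(stage2.coef_)
print("selected coefficients:", selected)
```

    With this strong, sparse design the second stage recovers the true support {0, 1, 2}; the paper's contribution is precisely the scaling conditions under which such recovery is guaranteed.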
  2. By: Bruno Feunou; Mohammad R. Jahan-Parvar; Roméo Tedongap
    Abstract: This paper addresses an existing gap in the developing literature on conditional skewness. We develop a simple procedure to evaluate parametric conditional skewness models. This procedure is based on regressing the realized skewness measures on model-implied conditional skewness values. We find that an asymmetric GARCH-type specification on shape parameters with a skewed generalized error distribution provides the best in-sample fit for the data, as well as reasonable predictions of the realized skewness measure. Our empirical findings imply significant asymmetry with respect to positive and negative news in both conditional asymmetry and kurtosis processes.
    Keywords: Econometric and statistical methods
    JEL: C22 C51 G12 G15
    Date: 2013
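    The evaluation procedure — regressing realized skewness on model-implied conditional skewness — can be sketched as a Mincer–Zarnowitz-style predictive regression. The two series below are simulated placeholders, not the authors' data:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
# Placeholder series: model-implied conditional skewness and a noisy realized measure
implied = rng.standard_normal(T)
realized = 0.2 + 0.9 * implied + 0.5 * rng.standard_normal(T)

# Evaluation regression: a well-specified model should give an intercept near 0,
# a slope near 1, and a reasonable R-squared.
X = np.column_stack([np.ones(T), implied])
coef, *_ = np.linalg.lstsq(X, realized, rcond=None)
a, b = coef
r2 = 1 - ((realized - X @ coef) ** 2).sum() / ((realized - realized.mean()) ** 2).sum()
print(f"intercept={a:.2f}, slope={b:.2f}, R^2={r2:.2f}")
```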
  3. By: Peter C.B. Phillips (Cowles Foundation, Yale University); Sainan Jin (Singapore Management University)
    Abstract: We propose new tests of the martingale hypothesis based on generalized versions of the Kolmogorov-Smirnov and Cramer-von Mises tests. The tests are distribution-free and allow for a weak drift in the null model. The methods require neither smoothing parameters nor bootstrap resampling for their implementation and so are well suited to practical work. The paper develops limit theory for the tests under the null and shows that the tests are consistent against a wide class of nonlinear, non-martingale processes. Simulations show that the tests have good finite sample properties in comparison with other tests, particularly under conditional heteroskedasticity and mildly explosive alternatives. An empirical application to major exchange rate data finds strong evidence in favor of the martingale hypothesis, confirming much earlier research.
    Keywords: Brownian functional, Martingale hypothesis, Kolmogorov-Smirnov test, Cramer-von Mises test, Explosive process, Exchange rates
    JEL: C12
    Date: 2013–09
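    A stylized illustration of the Kolmogorov-Smirnov flavor of such a test — not the authors' generalized statistic — on a martingale (random walk) versus a mildly explosive AR(1), one of the alternatives the abstract mentions:

```python
import numpy as np

def ks_martingale_stat(x):
    """Stylized KS-type statistic: after removing a weak drift, the partial sums
    of the increments of a martingale should show no systematic excursions."""
    d = np.diff(x)
    d = d - d.mean()                     # remove the (weak) drift
    s = np.cumsum(d)
    return np.max(np.abs(s)) / (d.std() * np.sqrt(len(d)))

rng = np.random.default_rng(2)
eps = rng.standard_normal(1000)
rw = np.cumsum(eps)                      # random walk: a martingale

x = np.zeros(1000)                       # mildly explosive AR(1): not a martingale
for t in range(1, 1000):
    x[t] = 1.02 * x[t - 1] + eps[t]

print(f"random walk: {ks_martingale_stat(rw):.2f}, "
      f"explosive: {ks_martingale_stat(x):.2f}")
```

    The statistic stays moderate for the random walk but blows up for the explosive process, which is the qualitative behavior the tests exploit.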
  4. By: Luciana Dalla Valle (School of Computing and Mathematics, Plymouth University); Maria Elena De Giuli (Department of Economics and Management, University of Pavia); Claudio Manelli (List S.p.A); Claudia Tarantola (Department of Economics and Management, University of Pavia)
    Abstract: In this paper we present a novel Bayesian approach for default probability estimation. The methodology is based on multivariate contingent claim analysis and pair copula theory. Balance sheet data are used to assess the firm value and to compute its default probability. The firm pricing function is obtained via a pair copula approach, and Monte Carlo simulations are used to calculate the default probability distribution. The methodology is illustrated through an application to data on defaulted firms.
    Keywords: Bayesian analysis, Pair Copula, Default Risk, Multivariate Contingent Claim, Markov Chain Monte Carlo, Vines.
    JEL: E02 H63
    Date: 2013–07
  5. By: Richard A. Ashley; Kwok Ping Tsang
    Abstract: Credible Granger-causality analysis appears to require post-sample inference, as it is well known that in-sample fit can be a poor guide to actual forecasting effectiveness. But post-sample model testing requires an often-consequential a priori partitioning of the data into an 'in-sample' period - purportedly utilized only for model specification/estimation - and a 'post-sample' period, purportedly utilized (only at the end of the analysis) for model validation/testing purposes. This partitioning is usually infeasible, however, with samples of modest length - e.g., T less than 100 - as is common with quarterly data sets, and with monthly data sets where institutional arrangements vary over time, simply because in such cases there is insufficient data available to credibly accomplish both purposes separately. A cross-sample validation (CSV) testing procedure is proposed below which substantially ameliorates this predicament - preserving most of the power of in-sample testing (by utilizing all of the sample data in the test), while also retaining most of the credibility of post-sample testing (by always basing model forecasts on data not utilized in estimating that particular model's coefficients). Simulations show that the price paid, in terms of power relative to the in-sample Granger-causality F test, is manageable. An illustrative application is given: a re-analysis of the Engel and West (2005) study of the causal relationship between macroeconomic fundamentals and the exchange rate.
    Keywords: Time Series, Granger-causality, causality, post-sample testing, exchange rates.
    Date: 2013
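    One plausible reading of the cross-sample validation idea, sketched on simulated data: estimate the restricted and unrestricted models on each half of the sample, forecast the other half, and pool the squared forecast errors. The lag structure, coefficients, and two-way split are illustrative simplifications, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 200
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t-1] + 0.6 * x[t-1] + rng.standard_normal()  # x Granger-causes y

def design(y, x, with_x):
    """One-lag regression design, with or without the lagged candidate cause x."""
    cols = [np.ones(len(y) - 1), y[:-1]] + ([x[:-1]] if with_x else [])
    return np.column_stack(cols), y[1:]

def cross_fit_sse(y, x, with_x):
    """Fit on one half of the sample, forecast the other half, pool squared errors."""
    h = len(y) // 2
    halves = [(slice(0, h), slice(h, None)), (slice(h, None), slice(0, h))]
    sse = 0.0
    for fit, test in halves:
        Xf, yf = design(y[fit], x[fit], with_x)
        Xt, yt = design(y[test], x[test], with_x)
        b = np.linalg.lstsq(Xf, yf, rcond=None)[0]
        sse += np.sum((yt - Xt @ b) ** 2)
    return sse

sse_r = cross_fit_sse(y, x, with_x=False)   # restricted: no lagged x
sse_u = cross_fit_sse(y, x, with_x=True)    # unrestricted: with lagged x
print(f"restricted SSE={sse_r:.1f}, unrestricted SSE={sse_u:.1f}")
```

    Because every forecast uses coefficients estimated on the other half, the SSE comparison retains the credibility of post-sample testing while using all T observations; turning the comparison into a formal test still requires a null distribution, which the paper addresses.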
  6. By: GUO-FITOUSSI, Liang
    Abstract: In this paper, we compare the small-sample properties of the main criteria proposed for selecting the number of factors in dynamic factor models. Selection rules for both the number of static factors and the number of dynamic factors are studied. Simulations show that the GR ratio proposed by Ahn and Horenstein (2013) and the criterion proposed by Onatski (2010) outperform the others. Furthermore, these two criteria can accurately select the number of static factors in a dynamic factor design. The criteria proposed by Hallin and Liska (2007) and Breitung and Pigorsch (2009) also correctly select the number of dynamic factors in most cases. However, an empirical application shows that most criteria select only one factor in the presence of one strong factor.
    Keywords: dynamic factor model, factor numbers, small sample
    JEL: C13 C52
    Date: 2013–09
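    The Ahn–Horenstein eigenvalue-ratio (ER) and growth-ratio (GR) criteria mentioned above can be sketched directly on simulated data; the factor strength here is inflated for a clean illustration, and the simulation design is not from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
N, T, r = 50, 100, 3                     # N series, T periods, r true factors
F = rng.standard_normal((T, r))
L = rng.standard_normal((N, r))
X = 2 * F @ L.T + rng.standard_normal((T, N))   # factors plus idiosyncratic noise

mu = np.linalg.eigvalsh(X.T @ X / (N * T))[::-1]   # eigenvalues, descending
kmax = 10

# ER(k) = mu_k / mu_{k+1};  GR(k) = ln(V_{k-1}/V_k) / ln(V_k/V_{k+1}),
# where V_k is the sum of the eigenvalues beyond the k-th.
er = mu[:kmax] / mu[1:kmax + 1]
V = np.concatenate(([mu.sum()], mu.sum() - np.cumsum(mu)))
gr = np.log(V[:kmax] / V[1:kmax + 1]) / np.log(V[1:kmax + 1] / V[2:kmax + 2])

k_er = int(np.argmax(er)) + 1            # estimated number of factors
k_gr = int(np.argmax(gr)) + 1
print("ER picks", k_er, "factors; GR picks", k_gr)
```

    Both ratios spike at the gap between the r large "signal" eigenvalues and the small noise eigenvalues; the paper's point is how reliably this works when the sample is small and the factor structure is dynamic.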
  7. By: HÄRDLE, Wolfgang; HART, Jeffrey; MARRON, Steve; TSYBAKOV, Alexander
  8. By: Fadaei Oshyani , Masoud (KTH); Sundberg , Marcus (KTH); Karlström , Anders (KTH)
    Abstract: Data sources based on new technology such as the Global Positioning System (GPS) are increasingly available. In many applications it is important to predict the average speed on all the links in a network. The purpose of this study is to estimate link speeds in a network using a sparse GPS data set. Average speed is consistently estimated using an indirect inference approach. Finally, Monte Carlo evidence is provided to show that the parameter estimates are consistent.
    Keywords: Travel time; Sparse GPS data; Indirect inference; Map matching; Route choice.
    JEL: R40
    Date: 2013–09–16
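    Indirect inference in miniature: choose the structural parameter so that an auxiliary statistic computed on simulated data matches the same statistic computed on observed data. The measurement model, outlier mechanism, and auxiliary statistic below are toy placeholders, not the authors' specification:

```python
import numpy as np

rng = np.random.default_rng(5)
theta_true = 12.0                        # "true" link speed, illustrative units

def simulate(theta, n, rng):
    """Toy measurement model: GPS speed readings = true speed + noise + occasional
    large errors (e.g., from map matching)."""
    base = theta + rng.standard_normal(n)
    outliers = rng.random(n) < 0.1
    return np.where(outliers, base + 10 * rng.standard_normal(n), base)

observed = simulate(theta_true, 500, rng)
aux_obs = np.median(observed)            # auxiliary statistic on observed data

# Indirect inference by grid search, holding the simulation draws fixed across
# the grid (common random numbers) so the objective is smooth in theta.
sim_rng = np.random.default_rng(6)
eps = sim_rng.standard_normal(2000)
shock = np.where(sim_rng.random(2000) < 0.1, 10 * sim_rng.standard_normal(2000), 0.0)

def aux_sim(theta):
    return np.median(theta + eps + shock)

grid = np.linspace(5, 20, 301)
theta_hat = grid[np.argmin([(aux_sim(t) - aux_obs) ** 2 for t in grid])]
print(f"indirect-inference estimate: {theta_hat:.2f}")
```

    The estimator never needs the likelihood of the messy measurement process, only the ability to simulate from it, which is what makes the approach attractive for sparse, error-prone GPS data.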
  9. By: Hanming Fang; Xun Tang
    Abstract: Bidders' risk attitudes have key implications for choices of revenue-maximizing auction formats. In ascending auctions, bid distributions do not provide information about risk preference. We infer risk attitudes using distributions of transaction prices and participation decisions in ascending auctions with entry costs. Nonparametric tests are proposed for two distinct scenarios: first, the expected entry cost can be consistently estimated from data; second, the data does not report entry costs but contains exogenous variations of potential competition and auction characteristics. In the first scenario, we exploit the fact that the risk premium required for entry – the difference between ex ante expected profits from entry and the certainty equivalent – is strictly positive if and only if bidders are risk averse. Our test is based on identification of bidders' ex ante profits. In the second scenario, our test builds on the fact that risk attitudes affect how equilibrium entry probabilities vary with observed auction characteristics and potential competition. We also show identification of risk attitudes in a more general model of ascending auctions with selective entry, where bidders receive entry-stage signals that are correlated with private values.
    JEL: C12 C14 D44
    Date: 2013–09

This nep-ecm issue is ©2013 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at the NEP homepage. For comments, please write to the director of NEP, Marco Novarese. Put “NEP” in the subject line, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.