nep-ecm New Economics Papers
on Econometrics
Issue of 2016‒12‒04
thirteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Preliminary Test Estimation for Multi-Sample Principal Components By Davy Paindaveine; Rondrotiana J Rasoafaraniaina; Thomas Verdebout
  2. Foreign Exchange Intervention Revisited: A New Way of Estimating Censored Models By Daniel Ordoñez-Callamand; Mauricio Villamizar-Villegas; Luis F. Melo-Velandia
  3. Discretizing Unobserved Heterogeneity By Thibaut Lamadon; Elena Manresa; Stephane Bonhomme
  4. The Factor-Lasso and K-Step Bootstrap Approach for Inference in High-Dimensional Economic Applications By Christian Hansen; Yuan Liao
  5. A note on the impact of multiple input aggregators in technical efficiency estimation By Aldanondo, Ana M.; Casasnovas, Valero L.
  6. A coupled component GARCH model for intraday and overnight volatility By Linton, O.; Wu, J.
  7. Network Quantile Autoregression By Xuening Zhu; Wolfgang K. Härdle; Weining Wang; Hangsheng Wang
  8. Bounds on Treatment Effects in Regression Discontinuity Designs under Manipulation of the Running Variable, with an Application to Unemployment Insurance in Brazil By Gerard, Francois; Rokkanen, Miikka; Rothe, Christoph
  9. On Wigner-Ville Spectra and the Unicity of Time-Varying Quantile-Based Spectral Densities By Stefan Birr; Holger Dette; Marc Hallin; Tobias Kley; Stanislav Volgushev
  10. The dynamic factor network model with an application to global credit risk By Brauning, Falk; Koopman, Siem Jan
  11. Dynamic Topic Modelling for Cryptocurrency Community Forums By Marco Linton; Wolfgang K. Härdle; Ernie Gin Swee Teo; Elisabeth Bommes; Cathy Yi-Hsuan Chen
  12. CLT for Lipschitz-Killing curvatures of excursion sets of Gaussian random fields By Marie Kratz; Sreekar Vadlamani
  13. Beta-boosted ensemble for big credit scoring data By Maciej Zieba; Wolfgang K. Härdle

  1. By: Davy Paindaveine; Rondrotiana J Rasoafaraniaina; Thomas Verdebout
    Abstract: In this paper, we consider point estimation in a multi-sample principal components setup, in a situation where it is suspected that the hypothesis of common principal components (CPC) holds. We propose preliminary test estimators of the various principal eigenvectors. We derive their asymptotic distributions (i) under the CPC hypothesis, (ii) under sequences of hypotheses that are contiguous to the CPC hypothesis, and (iii) away from the CPC hypothesis. We conduct a Monte-Carlo study that shows that the proposed estimators perform well, particularly so in the Gaussian case.
    Keywords: preliminary test estimation; common principal components
    Date: 2016–11
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/240546&r=ecm
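    A minimal numerical sketch, in Python, of the preliminary-test idea for the leading eigenvector: use the pooled estimate when the samples look compatible with common principal components, and sample-specific estimates otherwise. The acceptance criterion below is a crude stand-in for the paper's formal CPC test; all names and thresholds are illustrative.

      # Toy preliminary-test estimator for the leading eigenvector
      # (illustration only; the paper derives a proper CPC test and its asymptotics).
      import numpy as np

      def leading_eigvec(S):
          vals, vecs = np.linalg.eigh(S)
          return vecs[:, -1]  # eigenvector of the largest eigenvalue

      def pretest_eigvecs(samples, crit=0.95):
          """samples: list of (n_k x p) arrays. Returns one eigenvector per
          sample: the pooled one if every sample is 'close' to it (a stand-in
          for a formal CPC test), else each sample's own."""
          covs = [np.cov(X, rowvar=False) for X in samples]
          n = np.array([len(X) for X in samples])
          pooled = sum(w * S for w, S in zip(n / n.sum(), covs))
          v_pool = leading_eigvec(pooled)
          v_own = [leading_eigvec(S) for S in covs]
          # stand-in criterion: cosine similarity with the pooled eigenvector
          accept_cpc = all(abs(v @ v_pool) > crit for v in v_own)
          return [v_pool] * len(samples) if accept_cpc else v_own

      rng = np.random.default_rng(0)
      samples = [rng.standard_normal((200, 4)) @ np.diag([3, 1, 1, 1]) for _ in range(3)]
      print(pretest_eigvecs(samples))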
  2. By: Daniel Ordoñez-Callamand; Mauricio Villamizar-Villegas (Banco de la República de Colombia); Luis F. Melo-Velandia (Banco de la República de Colombia)
    Abstract: Most of the literature on the effectiveness of foreign exchange intervention has yet to reach a general consensus. In part, this is due to the different estimation methods used to identify exogenous variation. In this sense, heavy reliance on parametric models can condition the validity of results. In this paper we allow for a more flexible estimation of policy functions by using a censored least absolute deviation model, applied to a time-series framework. We first corroborate the properties of the estimators through simulation exercises for cases in which: (i) the degree of censoring varies, (ii) errors are subject to conditional heteroskedasticity, (iii) the distribution of the errors varies, and (iv) there are multiple censoring thresholds. The simulation exercises suggest that the estimator used is robust to both conditional heteroskedasticity and heavy-tailed distributions in the error term. However, we show that misspecification, when considering a single-valued instead of a multiple-valued threshold, leads to an estimation bias. Finally, we carry out empirical estimations for the cases of Turkey and Colombia and compare our findings with the related literature.
    Keywords: CLAD; Censored models; Foreign exchange intervention; Central bank's policy function
    JEL: C14 C22 C24 E58 F31
    Date: 2016–11
    URL: http://d.repec.org/n?u=RePEc:bdr:borrec:972&r=ecm
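    A toy illustration of the censored least absolute deviations (CLAD) objective the paper builds on, fitted on simulated left-censored data with heavy-tailed errors via a derivative-free optimizer; a sketch, not the authors' time-series implementation.

      # Minimal CLAD fit: minimize sum |y_i - max(x_i'b, 0)| (Powell-type objective).
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(1)
      n = 500
      x = rng.standard_normal((n, 2))
      beta_true = np.array([1.0, -0.5])
      y_star = x @ beta_true + rng.standard_t(df=3, size=n)   # heavy-tailed errors
      y = np.maximum(y_star, 0.0)                              # censoring at zero

      def clad_loss(b):
          return np.abs(y - np.maximum(x @ b, 0.0)).sum()

      res = minimize(clad_loss, x0=np.zeros(2), method="Nelder-Mead")
      print(res.x)  # near beta_true despite censoring and heavy tails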
  3. By: Thibaut Lamadon (University of Chicago); Elena Manresa (Massachusetts Institute of Technology); Stephane Bonhomme (University of Chicago)
    Abstract: We develop two-step and iterative panel data estimators based on a discretization of unobserved heterogeneity. We view discrete estimators as approximations, and study their properties in environments where population heterogeneity is individual-specific and unrestricted, letting the number of types grow with the sample size. Bias reduction methods can improve the performance of discrete estimators. We also show that discrete estimation may strictly dominate fixed-effects approaches when unobservables are high-dimensional, provided their underlying dimension is low. We study two applications: a structural dynamic discrete choice model of migration, and a model of wage determination with worker and firm heterogeneity. These applications to settings with continuous heterogeneity suggest computational and statistical advantages of the discrete methods that we advocate.
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:red:sed016:1536&r=ecm
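    A compact sketch of the two-step discrete approach on simulated panel data: k-means individuals on their outcome means, then pooled OLS with group dummies in place of individual fixed effects. Group counts and tuning are illustrative.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(2)
      N, T, G = 300, 10, 5
      alpha = rng.normal(size=N)                     # continuous individual effect
      x = rng.standard_normal((N, T))
      y = 1.5 * x + alpha[:, None] + 0.5 * rng.standard_normal((N, T))

      groups = KMeans(n_clusters=G, n_init=10, random_state=0).fit_predict(
          y.mean(axis=1, keepdims=True))             # step 1: discretize on moments

      # step 2: pooled OLS of y on x and group dummies
      D = np.eye(G)[np.repeat(groups, T)]            # group dummy matrix (N*T x G)
      Z = np.column_stack([x.ravel(), D])
      coef = np.linalg.lstsq(Z, y.ravel(), rcond=None)[0]
      print(coef[0])  # slope estimate, close to 1.5 when G grows with N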
  4. By: Christian Hansen (Booth School of Business, University of Chicago); Yuan Liao (Department of Economics, Rutgers University)
    Abstract: We consider inference about coefficients on a small number of variables of interest in a linear panel data model with additive unobserved individual and time specific effects and a large number of additional time-varying confounding variables. We allow the number of these additional confounding variables to be larger than the sample size, and suppose that, in addition to unrestricted time and individual specific effects, these confounding variables are generated by a small number of common factors and high-dimensional weakly-dependent disturbances. We allow that both the factors and the disturbances are related to the outcome variable and other variables of interest. To make informative inference feasible, we impose that the contribution of the part of the confounding variables not captured by time specific effects, individual specific effects, or the common factors can be captured by a relatively small number of terms whose identities are unknown. Within this framework, we provide a convenient computational algorithm based on factor extraction followed by lasso regression for inference about parameters of interest and show that the resulting procedure has good asymptotic properties. We also provide a simple k-step bootstrap procedure that may be used to construct inferential statements about parameters of interest, and prove its asymptotic validity. The proposed bootstrap may be of independent interest outside the present context, as it may readily be adapted to other settings involving inference after lasso variable selection, and the proof of its validity requires some new technical arguments. We also provide simulation evidence on the performance of our procedure and illustrate its use in two empirical applications.
    Keywords: treatment effects, panel data
    JEL: C33
    Date: 2016–11–29
    URL: http://d.repec.org/n?u=RePEc:rut:rutres:201610&r=ecm
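    A rough sketch of the factor-lasso sequence on simulated data: PCA to extract common factors from the controls, lasso to select the few remaining relevant terms, then post-selection OLS for the coefficient of interest. The tuning and the single post-lasso step are simplifications of the paper's procedure, and all names are illustrative.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LassoCV

      rng = np.random.default_rng(3)
      n, p, k = 200, 400, 3
      F = rng.standard_normal((n, k))                    # latent factors
      W = F @ rng.standard_normal((k, p)) + rng.standard_normal((n, p))  # controls
      d = F @ np.ones(k) + W[:, 0] + rng.standard_normal(n)   # treatment
      y = 2.0 * d + F @ np.ones(k) + W[:, 0] + rng.standard_normal(n)

      F_hat = PCA(n_components=k).fit_transform(W)       # step 1: factor extraction
      def resid(a):                                      # partial out F_hat by OLS
          return a - F_hat @ np.linalg.lstsq(F_hat, a, rcond=None)[0]
      U = resid(W)
      # step 2: lasso selection of the few relevant residual controls
      sel = np.flatnonzero(LassoCV(cv=5).fit(U, resid(y)).coef_)
      # step 3: OLS of y on d plus selected controls, after the factors (post-lasso)
      Z = np.column_stack([resid(d), U[:, sel]]) if sel.size else resid(d)[:, None]
      print(np.linalg.lstsq(Z, resid(y), rcond=None)[0][0])  # estimate near 2.0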
  5. By: Aldanondo, Ana M.; Casasnovas, Valero L.
    Abstract: The results of an experiment with simulated data show that using multiple positive linear aggregators of the same inputs instead of the original variables increases the accuracy of the Data Envelopment Analysis (DEA) technical efficiency estimator in data sets beset by dimensionality problems. Aggregation of the inputs achieves more than the mere reduction of the number of variables, since replacement of the original inputs with an equal number of aggregates improves DEA performance in a wide range of cases.
    Keywords: Technical efficiency, Aggregation bias, Monte Carlo, DEA Estimator accuracy
    JEL: C14 C61 D20
    Date: 2016–01–25
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:75290&r=ecm
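    For reference, a toy input-oriented, constant-returns DEA score computed as a linear program; this is the estimator whose accuracy the experiment studies, with data and dimensions invented here.

      import numpy as np
      from scipy.optimize import linprog

      def dea_score(X, Y, j0):
          """X: inputs (m x n), Y: outputs (s x n), j0: index of evaluated unit.
          Returns theta in (0, 1], the feasible proportional input contraction."""
          m, n = X.shape
          s = Y.shape[0]
          # decision vector z = (theta, lambda_1, ..., lambda_n)
          c = np.r_[1.0, np.zeros(n)]                      # minimize theta
          A_out = np.hstack([np.zeros((s, 1)), -Y])        # Y @ lam >= y0
          A_in = np.hstack([-X[:, [j0]], X])               # X @ lam <= theta * x0
          A_ub = np.vstack([A_out, A_in])
          b_ub = np.r_[-Y[:, j0], np.zeros(m)]
          bounds = [(None, None)] + [(0, None)] * n
          return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).x[0]

      rng = np.random.default_rng(4)
      X = rng.uniform(1, 2, size=(3, 50))   # 3 inputs, 50 units
      Y = rng.uniform(1, 2, size=(1, 50))   # 1 output
      print([round(dea_score(X, Y, j), 3) for j in range(5)])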
  6. By: Linton, O.; Wu, J.
    Abstract: We propose a semi-parametric coupled component GARCH model for intraday and overnight volatility that allows the two periods to have different properties. To capture the very heavy tails of overnight returns, we adopt a dynamic conditional score model with t innovations. We propose a multi-step estimation procedure that captures the nonparametric slowly moving components by kernel estimation and the dynamic parameters by t maximum likelihood. We establish the consistency and asymptotic normality of our estimation procedures. We extend the modelling to the multivariate case. We apply our model to the study of the component stocks of the Dow Jones Industrial Average over the period 1991-2016. We show that overnight volatility has in fact increased in importance during this period. In addition, our model provides better intraday volatility forecasts, since it takes account of the full dynamic consequences of the overnight shock and of previous ones.
    Date: 2016–12–01
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:1671&r=ecm
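    A toy simulation of the coupling mechanism: the overnight variance equation loads on the previous day's intraday shock, and the intraday equation loads on the same morning's overnight shock. Parameter values are invented, and the paper's nonparametric slow components and score dynamics are omitted.

      import numpy as np

      rng = np.random.default_rng(5)
      T = 1000
      omega_d, a_d, b_d, c_d = 0.05, 0.08, 0.85, 0.05   # intraday equation
      omega_n, a_n, b_n, c_n = 0.02, 0.05, 0.80, 0.10   # overnight equation
      h_d, h_n = np.ones(T), np.ones(T)
      r_d, r_n = np.zeros(T), np.zeros(T)
      for t in range(1, T):
          # overnight variance reacts to yesterday's intraday shock (coupling)
          h_n[t] = omega_n + a_n * r_n[t-1]**2 + b_n * h_n[t-1] + c_n * r_d[t-1]**2
          r_n[t] = np.sqrt(h_n[t]) * rng.standard_t(df=5)   # heavy-tailed overnight
          # intraday variance reacts to the same morning's overnight shock
          h_d[t] = omega_d + a_d * r_d[t-1]**2 + b_d * h_d[t-1] + c_d * r_n[t]**2
          r_d[t] = np.sqrt(h_d[t]) * rng.standard_normal()
      print(h_d[-5:], h_n[-5:])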
  7. By: Xuening Zhu; Wolfgang K. Härdle; Weining Wang; Hangsheng Wang
    Abstract: It is a challenging task to understand the complex dependency structures in an ultra-high dimensional network, especially when one concentrates on the tail dependency. To tackle this problem, we consider a network quantile autoregression model (NQAR) to characterize the dynamic quantile behavior in a complex system. In particular, we relate a node's response to those of its connected nodes and to node-specific characteristics in a quantile autoregression process. A minimum contrast estimation approach for the NQAR model is introduced, and the asymptotic properties are studied. Finally, we demonstrate the usage of our model by investigating the financial contagions in the Chinese stock market, accounting for shared ownership of companies.
    JEL: C12
    Date: 2016–11
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2016-050&r=ecm
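    A bare-bones sketch of a single NQAR-style fit: node responses are quantile-regressed on their own lag and on the network average of connected nodes' lags. The adjacency matrix, coefficients, and tau = 0.9 are toy choices, not the paper's estimator or theory.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(6)
      N, T = 50, 200
      A = (rng.uniform(size=(N, N)) < 0.1).astype(float)
      np.fill_diagonal(A, 0)
      W = A / np.maximum(A.sum(axis=1, keepdims=True), 1)   # row-normalized network
      Y = np.zeros((T, N))
      for t in range(1, T):
          Y[t] = 0.2 * (W @ Y[t-1]) + 0.3 * Y[t-1] + rng.standard_normal(N)

      y = Y[1:].ravel()                                     # stack all (t, i) pairs
      X = sm.add_constant(np.column_stack([(Y[:-1] @ W.T).ravel(), Y[:-1].ravel()]))
      fit = sm.QuantReg(y, X).fit(q=0.9)                    # tail behavior, tau = 0.9
      print(fit.params)   # network and momentum effects at the 90% quantile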
  8. By: Gerard, Francois; Rokkanen, Miikka; Rothe, Christoph
    Abstract: A key assumption in regression discontinuity analysis is that units cannot affect the value of their running variable through strategic behavior, or manipulation, in a way that leads to sorting on unobservable characteristics around the cutoff. Standard identification arguments break down if this condition is violated. This paper shows that treatment effects remain partially identified under weak assumptions on individuals' behavior in this case. We derive sharp bounds on causal parameters for both sharp and fuzzy designs, and show how additional structure can be used to further narrow the bounds. We use our methods to study the disincentive effect of unemployment insurance on (formal) reemployment in Brazil, where we find evidence of manipulation at an eligibility cutoff. Our bounds remain informative, despite the fact that manipulation has a sizable effect on our estimates of causal parameters.
    Keywords: manipulation; Regression Discontinuity; Unemployment insurance
    JEL: C14 C21 C31 J65
    Date: 2016–11
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:11668&r=ecm
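    A crude numerical sketch of the bounding logic: infer the manipulator share from the density jump at the cutoff, then trim that share of the highest or lowest outcomes just above it. The bandwidth and estimators here are naive placeholders for the paper's sharp bounds.

      import numpy as np

      def rd_bounds(r, y, cutoff=0.0, h=0.5):
          left = (r >= cutoff - h) & (r < cutoff)
          right = (r >= cutoff) & (r < cutoff + h)
          # manipulator share from the discontinuity in the running variable's density
          tau = max(0.0, 1.0 - left.sum() / right.sum())
          y_r = np.sort(y[right])
          k = int(np.floor(tau * y_r.size))
          mu_left = y[left].mean()
          lo = y_r[:y_r.size - k].mean() - mu_left   # trim the k largest outcomes
          hi = y_r[k:].mean() - mu_left              # trim the k smallest outcomes
          return lo, hi

      rng = np.random.default_rng(7)
      r = rng.uniform(-1, 1, 4000)
      r = np.where((r > -0.05) & (r < 0), -r, r)      # bunching just above cutoff
      y = 1.0 * (r >= 0) + r + rng.standard_normal(r.size)
      print(rd_bounds(r, y))   # interval containing the true effect of 1.0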
  9. By: Stefan Birr; Holger Dette; Marc Hallin; Tobias Kley; Stanislav Volgushev
    Abstract: The unicity of the time-varying quantile-based spectrum proposed in Birr et al. (2016) is established via an asymptotic representation result involving Wigner-Ville spectra.
    Keywords: copula-based spectrum; Laplace spectrum; quantile-based spectrum; time-varying spectrum; Wigner-Ville spectrum
    Date: 2016–11
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/240522&r=ecm
  10. By: Brauning, Falk (Federal Reserve Bank of Boston); Koopman, Siem Jan (Vrije Universiteit Amsterdam)
    Abstract: We introduce a dynamic network model with probabilistic link functions that depend on stochastically time-varying parameters. We adopt the widely used blockmodel framework and allow the high-dimensional vector of link probabilities to be a function of a low-dimensional set of dynamic factors. The resulting dynamic factor network model is straightforward and transparent by nature. However, parameter estimation, signal extraction of the dynamic factors, and the econometric analysis generally are intricate tasks for which simulation-based methods are needed. We provide feasible and practical solutions to these challenging tasks, based on a computationally efficient importance sampling procedure to evaluate the likelihood function. A Monte Carlo study is carried out to provide evidence of how well the methods work. In an empirical study, we use the novel framework to analyze a database of significance-flags of Granger causality tests for pair-wise credit default swap spreads of 61 different banks from the United States and Europe. Based on our model, we recover two groups that we characterize as “local” and “international” banks. The credit-risk spillovers take place between banks, from the same and from different groups, but the intensities change over time as we have witnessed during the financial crisis and the sovereign debt crisis.
    Keywords: network analysis; dynamic factor models; blockmodels; credit-risk spillovers
    JEL: C32 C58 G15
    Date: 2016–10–31
    URL: http://d.repec.org/n?u=RePEc:fip:fedbwp:16-13&r=ecm
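    A generative sketch of the model's building blocks: block-pair link probabilities are logistic transforms of a low-dimensional autoregressive factor. Dimensions, loadings, and the two-block structure are invented for illustration; the paper's estimation via importance sampling is not shown.

      import numpy as np

      rng = np.random.default_rng(8)
      N, T, B = 30, 100, 2                      # nodes, periods, blocks
      block = rng.integers(0, B, size=N)        # membership (e.g., "local" vs
                                                # "international" banks)
      lam = rng.normal(scale=0.8, size=(B, B))  # block-pair factor loadings
      alpha = -2.0 * np.ones((B, B))            # baseline (sparse) link intensity
      f = np.zeros(T)
      links = np.zeros((T, N, N), dtype=int)
      for t in range(1, T):
          f[t] = 0.9 * f[t-1] + rng.standard_normal()        # dynamic factor
          logits = alpha[block[:, None], block] + lam[block[:, None], block] * f[t]
          p = 1.0 / (1.0 + np.exp(-logits))                  # link probabilities
          links[t] = rng.uniform(size=(N, N)) < p            # probabilistic links
      print(links.mean(axis=(1, 2))[:10])    # network density tracks the factor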
  11. By: Marco Linton; Wolfgang K. Härdle; Ernie Gin Swee Teo; Elisabeth Bommes; Cathy Yi-Hsuan Chen
    Abstract: Cryptocurrencies are increasingly used in official cash flows and the exchange of goods. Bitcoin and the underlying blockchain technology have attracted the attention of big companies that are adopting and investing in this technology. The CRIX Index of cryptocurrencies hu.berlin/CRIX indicates a wider acceptance of cryptos. One reason for this prosperity is certainly the security aspect, since the underlying network of cryptos is decentralized. It is also unregulated and highly volatile, making risk assessment at any given moment difficult. Message boards offer a huge source of information in the form of unstructured text written by, e.g., Bitcoin developers and investors. We collect texts, user information, and associated time stamps from a popular cryptocurrency message board. We then provide an indicator for fraudulent schemes. This indicator is constructed using dynamic topic modelling, text mining, and unsupervised machine learning. We study how opinions and the evolution of topics are connected with big events in the cryptocurrency universe. Furthermore, the predictive power of these techniques is investigated by comparing the results to known events in the cryptocurrency space. We also use the results to test hypotheses of self-fulfilling prophecies and herding behaviour.
    JEL: C19 G10
    Date: 2016–11
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2016-051&r=ecm
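    A bare-bones stand-in for the text pipeline: topic models fitted per forum time slice over a shared vocabulary. The paper uses dynamic topic models; fitting a plain LDA per slice is a crude approximation, and the two-document corpus below is a placeholder.

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation

      slices = {
          "2013-Q4": ["bitcoin price rally exchange", "wallet security hack fear"],
          "2014-Q1": ["exchange collapse withdrawals frozen", "scam ponzi warning"],
      }
      vec = CountVectorizer()
      vec.fit(sum(slices.values(), []))          # shared vocabulary across slices
      vocab = vec.get_feature_names_out()

      for period, docs in slices.items():
          lda = LatentDirichletAllocation(n_components=2, random_state=0)
          lda.fit(vec.transform(docs))
          top = [vocab[lda.components_[k].argmax()] for k in range(2)]
          print(period, "top topic words:", top)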
  12. By: Marie Kratz (ESSEC Business School - Essec Business School, MAP5 - MAP5 - Mathématiques Appliquées à Paris 5 - UPD5 - Université Paris Descartes - Paris 5 - Institut National des Sciences Mathématiques et de leurs Interactions - CNRS - Centre National de la Recherche Scientifique); Sreekar Vadlamani (TIFR-CAM - Center for Applicable Mathematics - TIFR - Tata Institute of Fundamental Research [Bombay])
    Abstract: Our interest in this paper is to explore limit theorems for various geometric functionals of excursion sets of isotropic Gaussian random fields. In the past, limit theorems have been proven for various geometric functionals of excursion sets/sojourn times (see [4, 13, 14, 18, 22, 25] for a sample of works in such settings). The most recent addition is [6], where a central limit theorem (CLT) for the Euler-Poincaré characteristic of the excursion set of a Gaussian random field is proven under appropriate conditions. In this paper, we obtain a CLT for some global geometric functionals, called the Lipschitz-Killing curvatures, of excursion sets of Gaussian random fields in an appropriate setting.
    Keywords: chaos expansion; CLT; excursion sets; Gaussian fields; Lipschitz-Killing curvatures
    Date: 2016–08
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-01373091&r=ecm
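    A Monte Carlo illustration of the object in question: the normalized area (the simplest Lipschitz-Killing curvature) of the excursion set of a smooth Gaussian field above a level u, across independent replications. The smoothing construction is an assumption for illustration; the paper's CLT concerns such functionals as the domain grows.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      rng = np.random.default_rng(9)
      u, n, reps = 1.0, 256, 200
      areas = []
      for _ in range(reps):
          z = gaussian_filter(rng.standard_normal((n, n)), sigma=4, mode="wrap")
          z /= z.std()                       # unit-variance smooth Gaussian field
          areas.append((z > u).mean())       # normalized excursion-set area
      areas = np.array(areas)
      print(areas.mean(), areas.std())       # concentrates; histogram looks Gaussian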
  13. By: Maciej Zieba; Wolfgang K. Härdle
    Abstract: In this work we present a novel ensemble model for a credit scoring problem. The main idea of the approach is to incorporate separate beta binomial distributions for each of the classes to generate balanced datasets that are further used to construct the base learners that constitute the final ensemble model. The sampling procedure is performed on two separate ranking lists, one for each class, where the ranking is based on the propensity to observe the positive class. Two strategies are considered: one mines easy examples, while the second forces good classification of hard cases. The proposed solutions are tested on two large credit scoring datasets.
    JEL: C53
    Date: 2016–11
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2016-052&r=ecm
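    A sketch of the beta-sampling idea: rank each class by a preliminary score, draw rank positions from Beta distributions to assemble balanced training sets (Beta(2, 5) tilts toward top-ranked "easy" cases, Beta(5, 2) toward "hard" ones), and average the base learners. All names, data, and parameter choices below are illustrative, not the authors' method.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(10)
      X, y = make_classification(n_samples=3000, weights=[0.95], random_state=0)

      score = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
      def beta_sample(cls, a, b, size):
          idx = np.flatnonzero(y == cls)
          ranked = idx[np.argsort(-score[idx])]          # most positive-like first
          pos = np.minimum((rng.beta(a, b, size) * len(ranked)).astype(int),
                           len(ranked) - 1)              # Beta-distributed ranks
          return ranked[pos]

      ensemble = []
      for _ in range(25):                                # balanced base learners
          rows = np.r_[beta_sample(1, 2, 5, 200), beta_sample(0, 2, 5, 200)]
          ensemble.append(LogisticRegression(max_iter=1000).fit(X[rows], y[rows]))
      proba = np.mean([m.predict_proba(X)[:, 1] for m in ensemble], axis=0)
      print(proba[:5])   # averaged ensemble scores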

This nep-ecm issue is ©2016 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.