nep-ecm New Economics Papers
on Econometrics
Issue of 2018‒10‒01
nineteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Change-Point Testing and Estimation for Risk Measures in Time Series By Lin Fan; Peter W. Glynn; Markus Pelger
  2. A Bayesian GED-Gamma stochastic volatility model for return data: a marginal likelihood approach By T. R. Santos
  3. Regression Discontinuity Designs Using Covariates By Sebastian Calonico; Matias D. Cattaneo; Max H. Farrell; Rocio Titiunik
  4. Estimating grouped data models with a binary dependent variable and fixed effects: What are the issues By Nathaniel Beck
  5. Quantile co-movement in financial markets: A panel quantile model with unobserved heterogeneity By Ando, Tomohiro; Bai, Jushan
  6. Control Variables, Discrete Instruments, and Identification of Structural Functions By Whitney Newey; Sami Stouli
  7. Time-invariant Regressors under Fixed Effects: Identification via a Proxy Variable By Matej Belin
  8. Shape-Enforcing Operators for Point and Interval Estimators By Xi Chen; Victor Chernozhukov; Iván Fernández-Val; Scott Kostyshak; Ye Luo
  9. Control Variables, Discrete Instruments, and Identification of Structural Functions By Whitney Newey; Sami Stouli
  10. The Identification Zoo - Meanings of Identification in Econometrics By Arthur Lewbel
  11. On the Choice of Instruments in Mixed Frequency Specification Tests By Yun Liu; Yeonwoo Rho
  12. Bayesian shrinkage in mixture of experts models: Identifying robust determinants of class membership By Gregor Zens
  13. Challenges in Implementing Worst-Case Analysis By Jon Danielsson; Lerby Ergun; Casper G. de Vries
  14. Women’s Empowerment and Family Health: Estimating LATE with Mismeasured Treatment By Rossella Calvi; Arthur Lewbel; Denni Tommasi
  15. Non-Gaussian Stochastic Volatility Model with Jumps via Gibbs Sampler By Arthur T. Rego; Thiago R. dos Santos
  16. Efficient Difference-in-Differences Estimation with High-Dimensional Common Trend Confounding By Michael Zimmert
  17. Difference-in-Differences with Variation in Treatment Timing By Andrew Goodman-Bacon
  18. Exponent of Cross-sectional Dependence for Residuals By Natalia Bailey; George Kapetanios; M. Hashem Pesaran
  19. Bias Correction of Welfare measures in Non-Market Valuation: Comparison of the Delta Method, Jackknife and Bootstrap By Zhang, Rui; Shonkwiler, J. Scott

  1. By: Lin Fan; Peter W. Glynn; Markus Pelger
    Abstract: We investigate methods of change-point testing and confidence interval construction for nonparametric estimators of expected shortfall and related risk measures in weakly dependent time series. A key aspect of our work is the ability to detect general multiple structural changes in the tails of time series marginal distributions. Unlike extant approaches for detecting tail structural changes using quantities such as the tail index, our approach does not require parametric modeling of the tail and detects more general changes in the tail. Additionally, our methods are based on the recently introduced self-normalization technique for time series, allowing for statistical analysis without the issues of consistent standard error estimation. The theoretical foundations for our methods are functional central limit theorems, which we develop under weak assumptions. An empirical study of S&P 500 returns and US 30-Year Treasury bonds illustrates the practical use of our methods in detecting and quantifying market instability via the tails of financial time series during times of financial crisis.
    Date: 2018–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1809.02303&r=ecm
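    Sketch: a minimal Python illustration of the nonparametric expected shortfall estimator the paper builds its change-point tests around; an editorial sketch on synthetic data, not the authors' code.
      import numpy as np

      def expected_shortfall(returns, alpha=0.05):
          """Sample ES: the average of the worst alpha-fraction of returns."""
          var = np.quantile(returns, alpha)            # nonparametric VaR (alpha-quantile)
          return returns[returns <= var].mean()

      rng = np.random.default_rng(0)
      r = rng.standard_t(df=4, size=2000) * 0.01       # heavy-tailed synthetic returns
      print(expected_shortfall(r))                     # ES at the 5% level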
  2. By: T. R. Santos
    Abstract: Several studies explore inferences based on stochastic volatility (SV) models, taking into account the stylized facts of return data. The common problem is that the latent parameters of many volatility models are high-dimensional and analytically intractable, which means inferences require approximations using, for example, Markov Chain Monte Carlo or Laplace methods. Some SV models are expressed as a linear Gaussian state-space model that leads to a marginal likelihood, reducing the dimensionality of the problem. Others are not linearized, and the latent parameters are integrated out; however, these impose a quite restrictive evolution equation. Thus, we propose a Bayesian GED-Gamma SV model with a direct marginal likelihood that is a product of generalized Student's t-distributions, in which the latent states are related across time through a stationary Gaussian evolution equation. An approximation is then made for the prior distribution of the log-precision/volatility, without the need for model linearization. This also allows for the computation of the marginal likelihood function, where the high-dimensional latent states are integrated out and easily sampled in blocks using a smoothing procedure. In addition, extensions of our GED-Gamma model are easily made to incorporate skew heavy-tailed distributions. We use the Bayesian estimator for the inference of static parameters, and perform a simulation study on several properties of the estimator. Our results show that the proposed model can be reasonably estimated. Furthermore, we provide case studies of a Brazilian asset and the pound/dollar exchange rate to show the performance of our approach in terms of fit and prediction.
    Keywords: SV model, New sequential and smoothing procedures, Generalized Student's t-distribution, Non-Gaussian errors, Heavy tails, Skewness
    Date: 2018–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1809.01489&r=ecm
  3. By: Sebastian Calonico; Matias D. Cattaneo; Max H. Farrell; Rocio Titiunik
    Abstract: We study regression discontinuity designs when covariates are included in the estimation. We examine local polynomial estimators that include discrete or continuous covariates in an additive separable way, but without imposing any parametric restrictions on the underlying population regression functions. We recommend a covariate-adjustment approach that retains consistency under intuitive conditions, and characterize the potential for estimation and inference improvements. We also present new covariate-adjusted mean squared error expansions and robust bias-corrected inference procedures, with heteroskedasticity-consistent and cluster-robust standard errors. An empirical illustration and an extensive simulation study are presented. All methods are implemented in R and Stata software packages.
    Date: 2018–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1809.03904&r=ecm
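    Sketch: a minimal Python version of a covariate-adjusted local linear RD estimate, with the covariate entering additively as the paper recommends; the bandwidth is taken as given here, whereas the authors' R and Stata packages select it and provide robust bias-corrected inference. Editorial sketch on synthetic data.
      import numpy as np

      def rd_estimate(y, x, z, c=0.0, h=0.5):
          w = np.maximum(0, 1 - np.abs((x - c) / h))   # triangular kernel weights
          keep = w > 0
          y, x, z, w = y[keep], x[keep], z[keep], w[keep]
          d = (x >= c).astype(float)                   # treatment side of the cutoff
          # intercept, jump, running variable on each side, additive covariate
          X = np.column_stack([np.ones_like(x), d, x - c, d * (x - c), z])
          sw = np.sqrt(w)
          beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
          return beta[1]                               # coefficient on d = RD effect

      rng = np.random.default_rng(1)
      x = rng.uniform(-1, 1, 3000)
      z = rng.normal(size=3000)
      y = 0.5 * (x >= 0) + x + 0.3 * z + rng.normal(scale=0.2, size=3000)
      print(rd_estimate(y, x, z))                      # close to the true jump of 0.5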
  4. By: Nathaniel Beck
    Abstract: This article deals with a simple issue: if we have grouped data with a binary dependent variable and want to include fixed effects (group-specific intercepts) in the specification, is Ordinary Least Squares (OLS) in any way superior to a (conditional) logit form? In particular, what are the consequences of using OLS instead of a fixed-effects logit model, given that the latter drops all units which show no variability in the dependent variable while the former allows for estimation using all units? First, we show that the discussion of the incidental parameters problem is based on an assumption about the kinds of data being studied; for what appears to be the common use of fixed-effect models in political science, the incidental parameters issue is illusory. Turning to linear models, we see that OLS yields a linear combination of the estimates for the units with and without variation in the dependent variable, so the coefficient estimates must be carefully interpreted. The article then compares two methods of estimating logit models with fixed effects, and shows that the Chamberlain conditional logit is as good as or better than a logit analysis which simply includes group-specific intercepts (even though the conditional logit technique was designed to deal with the incidental parameters problem). Related to this, the article discusses the estimation of marginal effects using both OLS and logit. While it appears that a form of logit with fixed effects can be used to estimate marginal effects, this method can be improved by starting with conditional logit and then using those parameter estimates to constrain the logit with fixed effects model. This method produces estimates of sample average marginal effects that are at least as good as OLS, and much better when group size is small or the number of groups is large.
    Date: 2018–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1809.06505&r=ecm
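    Sketch: a minimal Python contrast of the two estimators discussed: OLS with fixed effects (the linear probability model, computed via the within transformation) uses every group, while conditional logit drops groups whose binary outcome never varies. Editorial sketch on synthetic data.
      import numpy as np

      rng = np.random.default_rng(2)
      G, n = 200, 10                                   # groups, observations per group
      g = np.repeat(np.arange(G), n)
      x = rng.normal(size=G * n)
      a = rng.normal(scale=1.5, size=G)[g]             # group-specific intercepts
      y = (a + 1.0 * x + rng.logistic(size=G * n) > 0).astype(float)

      def demean(v):                                   # within (group-demeaning) transform
          return v - (np.bincount(g, weights=v) / np.bincount(g))[g]

      beta_lpm = demean(x) @ demean(y) / (demean(x) @ demean(x))

      ybar = np.bincount(g, weights=y) / n             # groups with no outcome variation
      dropped = ((ybar == 0) | (ybar == 1)).sum()
      print(f"LPM-FE slope: {beta_lpm:.3f}; groups conditional logit would drop: {dropped}")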
  5. By: Ando, Tomohiro; Bai, Jushan
    Abstract: This paper introduces a new procedure for analyzing the quantile co-movement of a large number of financial time series based on a large-scale panel data model with factor structures. The proposed method attempts to capture the unobservable heterogeneity of each of the financial time series based on sensitivity to explanatory variables and to the unobservable factor structure. In our model, the dimension of the common factor structure varies across quantiles, and the factor structure is allowed to be correlated with the explanatory variables. The proposed method allows for both cross-sectional and serial dependence, and heteroskedasticity, which are common in financial markets. We propose new estimation procedures for both frequentist and Bayesian frameworks. Consistency and asymptotic normality of the proposed estimator are established. We also propose a new model selection criterion for determining the number of common factors together with theoretical support. We apply the method to analyze the returns for over 6,000 international stocks from over 60 countries during the subprime crisis, European sovereign debt crisis, and subsequent period. The empirical analysis indicates that the common factor structure varies across quantiles. We find that the common factors for the quantiles and the common factors for the mean are different.
    Keywords: Data-augmentation; Endogeneity; Heterogeneous panel; Quantile factor structure; Serial and cross-sectional correlations.
    JEL: C33 C38
    Date: 2018–06–30
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:88765&r=ecm
  6. By: Whitney Newey; Sami Stouli
    Abstract: Control variables provide an important means of controlling for endogeneity in econometric models with nonseparable and/or multidimensional heterogeneity. We allow for discrete instruments, giving identification results under a variety of restrictions on the way the endogenous variable and the control variables affect the outcome. We consider many structural objects of interest, such as average or quantile treatment effects. We illustrate our results with an empirical application to Engel curve estimation.
    Keywords: Control variables, discrete instruments, structural functions, endogeneity, partially parametric, nonseparable models, identification.
    JEL: C14 C31 C35
    Date: 2018–09–21
    URL: http://d.repec.org/n?u=RePEc:bri:uobdis:18/702&r=ecm
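    Sketch: a minimal Python version of the textbook control-function idea with a binary instrument: the first-stage residual serves as the control variable, and conditioning on it in the outcome equation removes the endogeneity bias. A linear sketch only; the paper's results cover nonseparable and multidimensional heterogeneity.
      import numpy as np

      rng = np.random.default_rng(3)
      n = 5000
      z = rng.integers(0, 2, n).astype(float)          # binary (discrete) instrument
      u = rng.normal(size=n)                           # unobservable causing endogeneity
      x = 0.8 * z + u + rng.normal(size=n)             # endogenous regressor
      y = 1.5 * x + u + rng.normal(size=n)             # structural equation, true slope 1.5

      Z = np.column_stack([np.ones(n), z])
      v = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0] # control variable: first-stage residual

      X = np.column_stack([np.ones(n), x, v])
      beta = np.linalg.lstsq(X, y, rcond=None)[0]
      print(f"naive OLS: {np.polyfit(x, y, 1)[0]:.3f}, control function: {beta[1]:.3f}")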
  7. By: Matej Belin
    Abstract: Identification of a coefficient associated with a time-invariant regressor (TIR) often relies on the assumption that the TIR is uncorrelated with the unobserved heterogeneity across panel units. We derive an estimator which avoids this random-effects assumption by employing a proxy for the unobserved heterogeneity, thus extending the existing results on proxy variables from the cross-sectional literature. In addition, we quantify the sensitivity of the estimates to potential violations of the random-effects assumption when no proxy is available. The utility of this approach is illustrated on the problem of the implausibly high distance elasticity produced by gravity models of international trade.
    Keywords: identification; model specification; omitted variable bias; panel data; variable addition;
    JEL: C01 C18 C33
    Date: 2018–09
    URL: http://d.repec.org/n?u=RePEc:cer:papers:wp624&r=ecm
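    Sketch: a minimal Python two-step illustration of the proxy-variable route: estimate the time-varying slope by fixed effects, then regress the unit-mean residual on the time-invariant regressor (TIR) plus a proxy for the unit heterogeneity. A stylized editorial sketch, not the paper's estimator.
      import numpy as np

      rng = np.random.default_rng(9)
      N, T = 500, 6
      p = rng.normal(size=N)                           # proxy for unit heterogeneity
      a = p + rng.normal(scale=0.5, size=N)            # unit effect, tracked by the proxy
      w = p + rng.normal(size=N)                       # TIR, correlated with a via p only
      x = rng.normal(size=(N, T))
      y = 1.0 * x + (2.0 * w + a)[:, None] + rng.normal(size=(N, T))

      xd, yd = x - x.mean(1, keepdims=True), y - y.mean(1, keepdims=True)
      b_fe = (xd * yd).sum() / (xd * xd).sum()         # within estimate of the slope

      resid = y.mean(1) - b_fe * x.mean(1)             # unit means net of x
      W = np.column_stack([np.ones(N), w, p])
      gamma = np.linalg.lstsq(W, resid, rcond=None)[0]
      print(f"TIR coefficient (true 2.0): {gamma[1]:.2f}")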
  8. By: Xi Chen; Victor Chernozhukov; Iván Fernández-Val; Scott Kostyshak; Ye Luo
    Abstract: A common problem in statistics is to estimate and make inference on functions that satisfy shape restrictions. For example, distribution functions are nondecreasing and range between zero and one, height growth charts are nondecreasing in age, and production functions are nondecreasing and quasi-concave in input quantities. We propose a method to enforce these restrictions ex post on point and interval estimates of the target function by applying functional operators. If an operator satisfies certain properties that we make precise, the shape-enforced point estimates are closer to the target function than the original point estimates and the shape-enforced interval estimates have greater coverage and shorter length than the original interval estimates. We show that these properties hold for six different operators that cover commonly used shape restrictions in practice: range, convexity, monotonicity, monotone convexity, quasi-convexity, and monotone quasi-convexity. We illustrate the results with an empirical application to the estimation of a height growth chart for infants in India.
    Date: 2018–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1809.01038&r=ecm
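    Sketch: a minimal Python example of one shape-enforcing operator, the monotone rearrangement, which simply sorts the estimated function values; rearrangement weakly reduces the distance to any increasing target, the kind of property the paper establishes for a family of such operators. Editorial sketch.
      import numpy as np

      def rearrange(f_hat):
          """Monotone (increasing) rearrangement of estimated function values."""
          return np.sort(f_hat)

      grid = np.linspace(0, 1, 50)
      true_f = grid ** 2                               # increasing target function
      noisy = true_f + np.random.default_rng(4).normal(scale=0.05, size=50)
      mono = rearrange(noisy)
      # the rearranged estimate is weakly closer to the target (prints True)
      print(np.mean((mono - true_f) ** 2) <= np.mean((noisy - true_f) ** 2))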
  9. By: Whitney Newey; Sami Stouli
    Abstract: Control variables provide an important means of controlling for endogeneity in econometric models with nonseparable and/or multidimensional heterogeneity. We allow for discrete instruments, giving identification results under a variety of restrictions on the way the endogenous variable and the control variables affect the outcome. We consider many structural objects of interest, such as average or quantile treatment effects. We illustrate our results with an empirical application to Engel curve estimation.
    Date: 2018–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1809.05706&r=ecm
  10. By: Arthur Lewbel (Boston College)
    Abstract: Over two dozen different terms for identification appear in the econometrics literature, including set identification, causal identification, local identification, generic identification, weak identification, identification at infinity, and many more. This survey: 1. gives a new framework unifying existing definitions of point identification, 2. summarizes and compares the zooful of different terms associated with identification that appear in the literature, and 3. discusses concepts closely related to identification, such as normalizations and the differences in identification between structural models and causal, reduced form models.
    Keywords: Identification, Econometrics, Coherence, Completeness, Randomization, Causal inference, Reduced Form Models, Instrumental Variables, Structural Models, Observational Equivalence, Normalizations, Nonparametrics, Semiparametrics
    JEL: C10 B16
    Date: 2018–09–01
    URL: http://d.repec.org/n?u=RePEc:boc:bocoec:957&r=ecm
  11. By: Yun Liu; Yeonwoo Rho
    Abstract: Time averaging has been the traditional approach to handling mixed sampling frequencies. However, it ignores information possibly embedded in high-frequency data. Mixed data sampling (MIDAS) regression models provide a concise way to utilize the additional information in high-frequency variables. In this paper, we propose a specification test to choose between time averaging and MIDAS models, based on a Durbin-Wu-Hausman test. In particular, a set of instrumental variables is proposed and theoretically validated when the frequency ratio is large. As a result, our method tends to be more powerful than existing methods, as confirmed through simulations.
    Date: 2018–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1809.05503&r=ecm
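    Sketch: a minimal Python contrast of the two competing specifications the test chooses between: flat time averaging of the high-frequency regressor versus a MIDAS weighting scheme (here exponential Almon weights, a common choice). The test itself is not reproduced; this only shows how the regressors differ.
      import numpy as np

      def almon_weights(m, t1, t2):
          """Exponential Almon weights over m high-frequency lags, summing to one."""
          j = np.arange(1, m + 1)
          w = np.exp(t1 * j + t2 * j ** 2)
          return w / w.sum()

      m = 3                                            # e.g. 3 months per quarter
      x_hf = np.arange(1.0, 13.0)                      # 12 months of a monthly series
      x_blocks = x_hf.reshape(-1, m)                   # one row per quarter

      x_avg = x_blocks.mean(axis=1)                    # time averaging: equal weights 1/m
      x_midas = x_blocks @ almon_weights(m, 0.5, -0.1) # MIDAS: data-driven weights
      print(x_avg, x_midas)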
  12. By: Gregor Zens
    Abstract: A method for implicit variable selection in mixture-of-experts frameworks is proposed. We introduce a prior structure where information is taken from a set of independent covariates. Robust class membership predictors are identified using a normal-gamma prior. The resulting model setup is used in a finite mixture of Bernoulli distributions to find homogeneous clusters of women in Mozambique based on their information sources on HIV. Fully Bayesian inference is carried out via the implementation of a Gibbs sampler.
    Date: 2018–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1809.04853&r=ecm
  13. By: Jon Danielsson; Lerby Ergun; Casper G. de Vries
    Abstract: Worst-case analysis has been used by financial regulators in the wake of the recent financial crisis to gauge tail risk. We provide insight into worst-case analysis and give guidance on how to estimate it. We derive the bias for the non-parametric heavy-tailed order statistics and contrast it with the semi-parametric extreme value theory (EVT) approach. We find that if the return distribution has a heavy tail, the non-parametric worst-case analysis, i.e. the minimum of the sample, is always downward biased and hence overly conservative. Relying on semi-parametric EVT reduces the bias considerably in the case of relatively heavy tails, but for less-heavy tails this relationship is reversed. Estimates for a large sample of US stock returns indicate that this pattern in the bias is indeed present in financial data. With respect to risk management, this induces an overly conservative capital allocation if the worst case is estimated incorrectly.
    Keywords: Financial stability
    JEL: C01 C14 C58
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:bca:bocawp:18-47&r=ecm
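    Sketch: a minimal Python comparison of the two approaches contrasted in the paper: the nonparametric worst case (the sample minimum) versus a semi-parametric EVT quantile built from the Hill estimator of the tail index. Editorial sketch on synthetic heavy-tailed data.
      import numpy as np

      rng = np.random.default_rng(5)
      r = rng.standard_t(df=3, size=5000)              # heavy-tailed returns
      loss = np.sort(-r)[::-1]                         # left-tail losses, largest first

      k = 100                                          # tail order statistics used
      hill = 1.0 / np.mean(np.log(loss[:k] / loss[k]))   # Hill estimate of the tail index

      n = len(r)
      evt_wc = loss[k] * (k / (n * (1.0 / n))) ** (1.0 / hill)   # Weissman-type quantile at level 1/n
      print(f"sample minimum: {r.min():.2f}, EVT worst case: {-evt_wc:.2f}")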
  14. By: Rossella Calvi (Rice University); Arthur Lewbel (Boston College); Denni Tommasi (ECARES, Université Libre de Bruxelles)
    Abstract: We study the causal effect of women’s empowerment on family health in India. We define treatment as a woman having primary control over household resources and use changes in inheritance laws as an instrument. Due to measurement difficulties and sharing of goods, treatment cannot be directly observed and must be estimated using a structural model. Treatment mismeasurement may therefore arise from model misspecification and estimation errors. We provide a new estimation method, MR-LATE, that can consistently estimate local average treatment effects when treatment is mismeasured. We find that women’s control of substantial household resources improves their and their children’s health.
    Keywords: causality, LATE, structural model, collective model, resource shares, bargaining power, health
    JEL: D13 D11 D12 C31 I32
    Date: 2018–05–30
    URL: http://d.repec.org/n?u=RePEc:boc:bocoec:959&r=ecm
  15. By: Arthur T. Rego; Thiago R. dos Santos
    Abstract: In this work, we propose a model for estimating volatility from financial time series, extending the non-Gaussian family of state-space models with exact marginal likelihood proposed by Gamerman, Santos and Franco (2013). In the literature there are models focused on estimating the risk of financial assets; however, most of them rely on MCMC methods based on Metropolis algorithms, since the full conditional posterior distributions are not known. We present an alternative model capable of estimating the volatility in an automatic way, since all full conditional posterior distributions are known and it is possible to obtain an exact sample of the parameters via the Gibbs sampler. The incorporation of jumps in returns allows the model to capture speculative movements in the data, so that their influence does not propagate to the volatility. We evaluate the performance of the algorithm using synthetic and real time series.
    Keywords: Financial time series, Stochastic volatility, Gibbs Sampler, Dynamic linear models
    Date: 2018–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1809.01501&r=ecm
  16. By: Michael Zimmert
    Abstract: We contribute to the theoretical literature on difference-in-differences estimation for policy evaluation by allowing the common trend assumption to hold conditional on a high-dimensional covariate set. In particular, the covariates can enter the difference-in-differences model in a very flexible form leading to estimation procedures that involve supervised machine learning methods. We derive asymptotic results for semiparametric and parametric estimators for repeated cross-sections and panel data and show desirable statistical properties. Notably, a non-standard semiparametric efficiency bound for difference-in-differences estimation that incorporates the repeated cross-section case is established. Our proposed semiparametric estimator is shown to attain this bound. The usability of the methods is assessed by replicating a study on an employment protection reform. We demonstrate that the notion of high-dimensional common trend confounding has implications for the economic interpretation of policy evaluation results via difference-in-differences.
    Date: 2018–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1809.01643&r=ecm
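    Sketch: a minimal Python version of difference-in-differences under a conditional common trend: fit the untreated outcome change flexibly in covariates with a machine learner, then average the treated units' deviations from that predicted trend. A simple regression-adjustment sketch, not the paper's efficient estimator.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(6)
      n = 4000
      X = rng.normal(size=(n, 10))                     # covariate set
      d = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))  # treatment depends on covariates
      trend = X[:, 0] ** 2 + X[:, 1]                   # covariate-driven common trend
      dy = trend + 1.0 * d + rng.normal(size=n)        # outcome change; true effect = 1

      m0 = RandomForestRegressor(n_estimators=200, random_state=0)
      m0.fit(X[d == 0], dy[d == 0])                    # trend model from untreated units
      att = np.mean(dy[d == 1] - m0.predict(X[d == 1]))
      naive = dy[d == 1].mean() - dy[d == 0].mean()
      print(f"naive DiD: {naive:.2f}, trend-adjusted: {att:.2f}")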
  17. By: Andrew Goodman-Bacon
    Abstract: The canonical difference-in-differences (DD) model contains two time periods, “pre” and “post”, and two groups, “treatment” and “control”. Most DD applications, however, exploit variation across groups of units that receive treatment at different times. This paper derives an expression for this general DD estimator, and shows that it is a weighted average of all possible two-group/two-period DD estimators in the data. This result provides detailed guidance about how to use regression DD in practice. I define the DD estimand and show how it averages treatment effect heterogeneity and that it is biased when effects change over time. I propose a new balance test derived from a unified definition of common trends. I show how to decompose the difference between two specifications, and I apply it to models that drop untreated units, weight, disaggregate time fixed effects, control for unit-specific time trends, or exploit a third difference.
    JEL: C1 C23
    Date: 2018–09
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:25018&r=ecm
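    Sketch: a minimal Python version of the 2x2 building block: a single two-group/two-period DD estimate. The paper's result is that the general two-way fixed-effects DD coefficient is a weighted average of all such 2x2 comparisons in the data. Editorial sketch on synthetic data.
      import numpy as np

      def dd_2x2(y, group, post):
          """group: 1 treated / 0 control; post: 1 post-period / 0 pre-period."""
          m = lambda g, p: y[(group == g) & (post == p)].mean()
          return (m(1, 1) - m(1, 0)) - (m(0, 1) - m(0, 0))

      rng = np.random.default_rng(7)
      n = 1000
      group = rng.integers(0, 2, n)
      post = rng.integers(0, 2, n)
      y = 0.5 * group + 0.3 * post + 2.0 * group * post + rng.normal(size=n)
      print(dd_2x2(y, group, post))                    # close to the true effect of 2.0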
  18. By: Natalia Bailey; George Kapetanios; M. Hashem Pesaran
    Abstract: In this paper we focus on estimating the degree of cross-sectional dependence in the error terms of a classical panel data regression model. For this purpose we propose an estimator of the exponent of cross-sectional dependence, denoted by α, which is based on the number of non-zero pair-wise cross correlations of these errors. We prove that our estimator, ᾶ, is consistent and derive the rate at which ᾶ approaches its true value. We evaluate the finite sample properties of the proposed estimator by means of a Monte Carlo simulation study. The numerical results are encouraging and supportive of the theoretical findings. Finally, we undertake an empirical investigation of α for the errors of the CAPM model and its Fama-French extensions, using 10-year rolling samples from S&P 500 securities over the period September 1989 to May 2018.
    Keywords: pair-wise correlations, cross-sectional dependence, cross-sectional averages, weak and strong factor models, CAPM and Fama-French factors
    JEL: C21 C32
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_7223&r=ecm
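    Sketch: a minimal Python illustration of the estimator's key ingredient: counting pairwise residual correlations that exceed a multiple-testing threshold of order 1/sqrt(T). The mapping from this count to the exponent estimate ᾶ follows the paper's formulas and is not reproduced here. Editorial sketch.
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(8)
      T, N = 200, 50
      f = rng.normal(size=T)                           # common factor
      load = np.where(np.arange(N) < 10, 1.0, 0.0)     # only 10 units load on it
      e = np.outer(f, load) + rng.normal(size=(T, N))  # residual panel

      R = np.corrcoef(e, rowvar=False)                 # N x N pairwise correlations
      iu = np.triu_indices(N, k=1)
      crit = norm.ppf(1 - 0.05 / (N * (N - 1))) / np.sqrt(T)   # Bonferroni-style cutoff
      print(f"significant pairs: {(np.abs(R[iu]) > crit).sum()} of {len(iu[0])}")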
  19. By: Zhang, Rui; Shonkwiler, J. Scott
    Keywords: Environmental Economics and Policy, Research Methods/Statistical Methods, Resource/Energy Economics and Policy
    Date: 2017–06–15
    URL: http://d.repec.org/n?u=RePEc:ags:aaea17:258099&r=ecm

This nep-ecm issue is ©2018 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.