nep-ecm New Economics Papers
on Econometrics
Issue of 2018‒04‒09
nineteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Spatial GARCH Models By Takaki Sato; Yasumasa Matsuda
  2. Nonparametric Inference in Functional Linear Quantile Regression by RKHS Approach By Kosaku Takanashi
  3. Inference for structural impulse responses in SVAR-GARCH models By Stefan Bruder
  4. Inference on a Distribution from Noisy Draws By Koen Jochmans; Martin Weidner
  5. Network and Panel Quantile Effects Via Distribution Regression By Victor Chernozhukov; Iván Fernández-Val; Martin Weidner
  6. Estimation and forecasting in INAR(p) models using sieve bootstrap By Luisa Bisaglia; Margherita Gerolimetto
  7. Two Distinct Seasonally Fractionally Differenced Periodic Processes By Bensalma, Ahmed
  8. Simultaneous Mean-Variance Regression By Richard H. Spady; Sami Stouli
  9. Simple and Honest Confidence Intervals in Nonparametric Regression By Timothy B. Armstrong; Michal Kolesár
  10. Efficient Discovery of Heterogeneous Treatment Effects in Randomized Experiments via Anomalous Pattern Detection By Edward McFowland III; Sriram Somanchi; Daniel B. Neill
  11. Bootstrap Model Averaging Unit Root Inference By Bruce E. Hansen; Jeffrey S. Racine
  12. Causal Inference for Survival Analysis By Vikas Ramachandra
  13. Double fixed effects estimators with heterogeneous treatment effects By Clement de Chaisemartin; Xavier D'Haultfoeuille
  14. Difference-in-Differences with Multiple Time Periods and an Application on the Minimum Wage and Employment By Brantly Callaway; Pedro H. C. Sant'Anna
  15. Adversarial Generalized Method of Moments By Greg Lewis; Vasilis Syrgkanis
  16. Statistical Non-Significance in Empirical Economics By Alberto Abadie
  17. Jumping VaR: Order Statistics Volatility Estimator for Jumps Classification and Market Risk Modeling By Luca Spadafora; Francesca Sivero; Nicola Picchiotti
  18. Sparse Reduced Rank Regression With Nonconvex Regularization By Ziping Zhao; Daniel P. Palomar
  19. On Some Properties of a New Asymmetry-Based Tobit Model By Mário Fernando de Sousa; Helton Saulo; Víctor Leiva; Paulo Scalco

  1. By: Takaki Sato; Yasumasa Matsuda
    Abstract: This study proposes a spatial extension of time series generalized autoregressive conditional heteroscedasticity (GARCH) models, which we call spatial GARCH (S-GARCH) models. S-GARCH models specify conditional variances given simultaneous observations, in contrast with time series GARCH models, which specify conditional variances given past observations. The S-GARCH model is transformed into a spatial autoregressive moving-average (SARMA) model, and its parameters are estimated by a two-step procedure. The first step is quasi maximum likelihood (QML) estimation, for which consistency and asymptotic normality of the proposed estimators are established. The second step estimates the intercept term by an estimator derived from another QML in order to avoid the bias of the first step, and consistency of this estimator is shown. We demonstrate empirical properties of the model through simulation studies and real data analyses of land price data in the Tokyo area. The simulation studies show that the estimators have small bias regardless of the distribution of the error terms, and the real data analyses show that spatial volatility in land prices exhibits global spillovers and volatility clustering; that is, units with higher spatial volatility are clustered in some specific districts, as in time series financial data.
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:toh:dssraa:78&r=ecm
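    Illustration: a rough sketch of the S-GARCH idea, computing conditional variances driven by the squared simultaneous observations of spatial neighbours, h_i = omega + alpha * sum_j w_ij * y_j^2. This simple spatial-ARCH form and all names below are illustrative assumptions, not the paper's exact specification.

      import numpy as np

      def spatial_arch_variance(y, W, omega=0.1, alpha=0.5):
          """Conditional variance of each unit given its simultaneous neighbours."""
          return omega + alpha * W @ (y ** 2)

      rng = np.random.default_rng(0)
      n = 5
      W = np.zeros((n, n))  # row-normalized contiguity weights for 5 units on a line
      for i in range(n):
          for j in (i - 1, i + 1):
              if 0 <= j < n:
                  W[i, j] = 1.0
      W /= W.sum(axis=1, keepdims=True)
      print(spatial_arch_variance(rng.normal(size=n), W))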
  2. By: Kosaku Takanashi (Faculty of Economics, Keio University)
    Abstract: This paper studies the asymptotics of functional linear quantile regression, in which the dependent variable is scalar while the covariate is a function. We apply a roughness regularization approach in a reproducing kernel Hilbert space framework. In this setting, narrow convergence with respect to uniform convergence fails to hold because of the strength of its topology. Our approach to this lack of uniform convergence is based on Mosco convergence, a weaker topology than uniform convergence. By applying narrow convergence with respect to the Mosco topology, we develop an infinite-dimensional version of the convexity argument and prove the asymptotic normality of argmin processes. Our technique also yields asymptotic confidence intervals and a generalized likelihood ratio hypothesis test in a fully nonparametric setting.
    Keywords: Functional Linear Quantile Regression, Mosco topology, Generalized Likelihood Ratio Test, Estimation with Convex Constraint
    JEL: C14 C12
    Date: 2018–03–04
    URL: http://d.repec.org/n?u=RePEc:keo:dpaper:2018-002&r=ecm
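    Illustration: a schematic form of the penalized criterion behind the RKHS approach (notation assumed here, not taken verbatim from the paper): the quantile check function plus a roughness penalty given by the squared RKHS norm.

      \hat{b} = \operatorname*{arg\,min}_{b \in \mathcal{H}}
        \sum_{i=1}^{n} \rho_\tau\big( Y_i - \langle X_i, b \rangle \big)
        + \lambda_n \lVert b \rVert_{\mathcal{H}}^{2},
      \qquad
      \rho_\tau(u) = u \big( \tau - \mathbf{1}\{u < 0\} \big).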
  3. By: Stefan Bruder
    Abstract: Conditional heteroskedasticity can be exploited to identify structural vector autoregressions (SVARs), but the implications for inference on structural impulse responses have not yet been investigated in detail. We consider the conditionally heteroskedastic SVAR-GARCH model and propose a bootstrap-based inference procedure for structural impulse responses. We compare the finite-sample properties of our bootstrap method with those of two competing bootstrap methods via extensive Monte Carlo simulations. We also present a three-step procedure for estimating the parameters of the SVAR-GARCH model that promises numerical stability even in scenarios with small sample sizes and/or large dimensions.
    Keywords: Bootstrap, conditional heteroskedasticity, multivariate GARCH, structural impulse responses, structural vector autoregression
    JEL: C12 C13 C32
    Date: 2018–04
    URL: http://d.repec.org/n?u=RePEc:zur:econwp:281&r=ecm
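    Illustration: a generic residual-bootstrap sketch for impulse responses in a VAR(1). The paper's procedure additionally models GARCH in the structural shocks and uses them for identification; those refinements are omitted here and all names are assumptions.

      import numpy as np

      def fit_var1(Y):
          """OLS fit of Y_t = A Y_{t-1} + u_t (no intercept); Y has shape (T, k)."""
          X, Z = Y[1:], Y[:-1]
          A = np.linalg.lstsq(Z, X, rcond=None)[0].T
          return A, X - Z @ A.T

      def irf(A, H):
          """Reduced-form impulse responses A^h for h = 0..H."""
          return np.stack([np.linalg.matrix_power(A, h) for h in range(H + 1)])

      def bootstrap_irf(Y, H=8, B=200, seed=0):
          rng = np.random.default_rng(seed)
          A, U = fit_var1(Y)
          draws = []
          for _ in range(B):
              u = U[rng.integers(0, len(U), size=len(U))]   # iid residual resampling
              Ys = np.zeros_like(Y)
              Ys[0] = Y[0]
              for t in range(1, len(Y)):
                  Ys[t] = A @ Ys[t - 1] + u[t - 1]
              draws.append(irf(fit_var1(Ys)[0], H))
          lo, hi = np.percentile(np.stack(draws), [2.5, 97.5], axis=0)
          return irf(A, H), lo, hi                          # point IRFs and 95% band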
  4. By: Koen Jochmans; Martin Weidner
    Abstract: We consider a situation where a distribution is being estimated by the empirical distribution of noisy measurements. The measurement errors are allowed to be heteroskedastic and their variance may depend on the realization of the underlying random variable. We use an asymptotic embedding where the noise shrinks with the sample size to calculate the leading bias arising from the presence of noise. Conditions are obtained under which this bias is asymptotically non-negligible. Analytical and jackknife corrections for the empirical distribution are derived that recenter the limit distribution and yield confidence intervals with correct coverage in large samples. Similar adjustments are presented for nonparametric estimators of the density and quantile function. Our approach can be connected to corrections for selection bias and shrinkage estimation. Simulation results confirm the much improved sampling behavior of the corrected estimators. An empirical application to the estimation of a stochastic-frontier model is also provided.
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1803.04991&r=ecm
  5. By: Victor Chernozhukov; Iván Fernández-Val; Martin Weidner
    Abstract: This paper provides a method to construct simultaneous confidence bands for quantile functions and quantile effects in nonlinear network and panel models with unobserved two-way effects, strictly exogenous covariates, and possibly discrete outcome variables. The method is based upon projection of simultaneous confidence bands for distribution functions constructed from fixed effects distribution regression estimators. These fixed effects estimators are bias corrected to deal with the incidental parameter problem. Under asymptotic sequences where both dimensions of the data set grow at the same rate, the confidence bands for the quantile functions and effects have correct joint coverage in large samples. An empirical application to gravity models of trade illustrates the applicability of the methods to network data.
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1803.08154&r=ecm
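    Illustration: the distribution-regression building block from which the confidence bands are projected, in standard notation (assumed here): one link-function model per outcome threshold, inverted to obtain quantiles.

      F_{Y \mid X}(y \mid x) = \Lambda\big( x'\beta(y) \big),
      \qquad
      Q_{Y \mid X}(\tau \mid x) = \inf\{ y : F_{Y \mid X}(y \mid x) \ge \tau \}.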
  6. By: Luisa Bisaglia (Department of Statistics, University of Padova); Margherita Gerolimetto (Department of Economics, University Of Venice Cà Foscari)
    Abstract: In this paper we analyse some bootstrap techniques for making inference in INAR(p) models. First, via Monte Carlo experiments we compare the performance of these methods in estimating the thinning parameters of INAR(p) models, and find that sieve bootstrap approaches are superior to the block bootstrap in terms of low bias and mean squared error (MSE). We then apply the sieve bootstrap methods to obtain coherent predictions and confidence intervals, avoiding the difficulty of deriving the distributional properties directly.
    Keywords: INAR(p) models, estimation, forecast, bootstrap
    JEL: C22 C53
    URL: http://d.repec.org/n?u=RePEc:ven:wpaper:2018:06&r=ecm
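    Illustration: a minimal INAR(1) sketch, X_t = alpha ∘ X_{t-1} + eps_t, with binomial thinning "∘" and Poisson innovations. The sieve bootstrap itself is omitted; we only simulate the process and estimate the thinning parameter by its lag-one autocorrelation.

      import numpy as np

      def simulate_inar1(T, alpha=0.5, lam=2.0, seed=0):
          rng = np.random.default_rng(seed)
          x = np.zeros(T, dtype=int)
          for t in range(1, T):
              survivors = rng.binomial(x[t - 1], alpha)   # alpha ∘ X_{t-1}
              x[t] = survivors + rng.poisson(lam)         # + eps_t
          return x

      x = simulate_inar1(500)
      print(np.corrcoef(x[:-1], x[1:])[0, 1])   # lag-1 autocorrelation estimates alpha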
  7. By: Bensalma, Ahmed
    Abstract: This article studies the effects of the S-periodical fractional differencing filter (1-L^S)^D_t. To put this effect in evidence, we derive the periodic auto-covariance functions of two distinct univariate seasonally fractionally differenced periodic models. A multivariate representation of periodically correlated processes is exploited to provide exact and approximate expressions for the auto-covariance of each model. The distinction between the models is clearly visible in the expression of the periodic auto-covariance function. Besides producing different auto-covariance functions, the two models differ in their implications. In the first model, the seasons of the multivariate series are separately fractionally integrated. In the second model, however, the seasons of the univariate series are fractionally co-integrated. For each model, the empirical periodic auto-covariances are calculated on simulated samples with the same parameters and represented graphically to illustrate the results and support the comparison between the two models.
    Keywords: Periodically correlated processes, Fractional integration, Seasonal fractional integration, Periodic fractional integration
    JEL: C1 C15 C2 C22 C5 C51 C52 C6
    Date: 2018–03–06
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:84969&r=ecm
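    Illustration: the binomial expansion of the S-periodic fractional differencing filter (standard algebra; in the periodic case the memory parameter D_t varies with the season):

      (1 - L^{S})^{D_t}
      = \sum_{k=0}^{\infty} \binom{D_t}{k} (-L^{S})^{k}
      = \sum_{k=0}^{\infty} \frac{\Gamma(k - D_t)}{\Gamma(k + 1)\,\Gamma(-D_t)} \, L^{Sk}.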
  8. By: Richard H. Spady; Sami Stouli
    Abstract: We propose simultaneous mean-variance regression for the linear estimation and approximation of conditional mean functions. In the presence of heteroskedasticity of unknown form, our method accounts for varying dispersion in the regression outcome across the support of conditioning variables by using weights that are jointly determined with mean regression parameters. Simultaneity generates outcome predictions that are guaranteed to improve over ordinary least-squares prediction error, with corresponding parameter standard errors that are automatically valid. Under shape misspecification of the conditional mean and variance functions, we establish existence and uniqueness of the resulting approximations and characterize their formal interpretation. We illustrate our method with numerical simulations and two empirical applications to the estimation of the relationship between economic prosperity in 1500 and today, and demand for gasoline in the United States.
    Keywords: Conditional mean and variance functions, linear regression, simultaneous approximation, heteroskedasticity, robust inference, misspecification, influence function, convexity, ordinary least-squares, dual regression.
    Date: 2018–04–05
    URL: http://d.repec.org/n?u=RePEc:bri:uobdis:18/697&r=ecm
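    Illustration: a simple iterated feasible-WLS baseline that captures the idea of weights determined jointly with the mean parameters. Note this is not the paper's simultaneous dual-regression estimator, and X is assumed to contain a constant column.

      import numpy as np

      def mean_variance_wls(X, y, iters=10):
          """Alternate a weighted mean fit and a log-variance fit to a fixed point."""
          w = np.ones(len(y))
          for _ in range(iters):
              Xw = X * w[:, None]
              beta = np.linalg.solve(Xw.T @ X, Xw.T @ y)       # WLS mean fit
              resid = y - X @ beta
              gamma = np.linalg.lstsq(X, np.log(resid ** 2 + 1e-12), rcond=None)[0]
              w = np.exp(-X @ gamma)                           # inverse estimated variances
          return beta, gamma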
  9. By: Timothy B. Armstrong (Cowles Foundation, Yale University); Michal Kolesár (Princeton University)
    Abstract: We consider the problem of constructing honest confidence intervals (CIs) for a scalar parameter of interest, such as the regression discontinuity parameter, in nonparametric regression based on kernel or local polynomial estimators. To ensure that our CIs are honest, we derive and tabulate novel critical values that take into account the possible bias of the estimator upon which the CIs are based. We show that this approach leads to CIs that are more efficient than conventional CIs that achieve coverage by undersmoothing or subtracting an estimate of the bias. We give sharp efficiency bounds of using different kernels, and derive the optimal bandwidth for constructing honest CIs. We show that using the bandwidth that minimizes the maximum mean-squared error results in CIs that are nearly efficient and that in this case, the critical value depends only on the rate of convergence. For the common case in which the rate of convergence is n^{-2/5}, the appropriate critical value for 95% CIs is 2.18, rather than the usual 1.96 critical value. We illustrate our results in a Monte Carlo analysis and an empirical application.
    Keywords: Nonparametric inference, relative efficiency
    JEL: C12 C14
    Date: 2016–06
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:2044r2&r=ecm
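    Illustration: the abstract's headline numbers in code. For the MSE-optimal bandwidth and convergence rate n^{-2/5}, the honest 95% CI replaces the conventional 1.96 critical value with 2.18 to cover worst-case smoothing bias.

      def honest_ci(estimate, se, cv=2.18):   # cv = 1.96 gives the conventional CI
          return estimate - cv * se, estimate + cv * se

      print(honest_ci(0.50, 0.10))            # (0.282, 0.718) vs. (0.304, 0.696)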
  10. By: Edward McFowland III; Sriram Somanchi; Daniel B. Neill
    Abstract: The randomized experiment is an important tool for inferring the causal impact of an intervention. The recent literature on statistical learning methods for heterogeneous treatment effects demonstrates the utility of estimating the marginal conditional average treatment effect (MCATE), i.e., the average treatment effect for a subpopulation of respondents who share a particular subset of covariates. However, each proposed method makes its own set of restrictive assumptions about the intervention's effects, the underlying data generating processes, and which subpopulations (MCATEs) to explicitly estimate. Moreover, the majority of the literature provides no mechanism to identify which subpopulations are the most affected--beyond manual inspection--and provides little guarantee on the correctness of the identified subpopulations. Therefore, we propose Treatment Effect Subset Scan (TESS), a new method for discovering which subpopulation in a randomized experiment is most significantly affected by a treatment. We frame this challenge as a pattern detection problem where we maximize a nonparametric scan statistic (measurement of distributional divergence) over subpopulations, while being parsimonious in which specific subpopulations to evaluate. Furthermore, we identify the subpopulation which experiences the largest distributional change as a result of the intervention, while making minimal assumptions about the intervention's effects or the underlying data generating process. In addition to the algorithm, we demonstrate that the asymptotic Type I and II error can be controlled, and provide sufficient conditions for detection consistency---i.e., exact identification of the affected subpopulation. Finally, we validate the efficacy of the method by discovering heterogeneous treatment effects in simulations and in real-world data from a well-known program evaluation study.
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1803.09159&r=ecm
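    Illustration: a brute-force caricature of subset scanning. TESS searches multi-attribute subsets efficiently with a nonparametric scan statistic; the sketch below merely scores subgroups defined by a single covariate value each, using an absolute t-statistic as a stand-in score.

      import numpy as np

      def scan_single_attribute(X, t, y):
          """X: (n, p) discrete covariates; t: 0/1 treatment; y: outcomes."""
          best, best_score = None, -np.inf
          for j in range(X.shape[1]):
              for v in np.unique(X[:, j]):
                  m = X[:, j] == v
                  y1, y0 = y[m & (t == 1)], y[m & (t == 0)]
                  if len(y1) < 2 or len(y0) < 2:
                      continue
                  se = np.sqrt(y1.var() / len(y1) + y0.var() / len(y0))
                  score = abs(y1.mean() - y0.mean()) / se
                  if score > best_score:
                      best, best_score = (j, v), score
          return best, best_score   # highest-scoring (covariate, value) subgroup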
  11. By: Bruce E. Hansen; Jeffrey S. Racine
    Abstract: Classical unit root tests are known to suffer from potentially crippling size distortions, and a range of procedures have been proposed to attenuate this problem, including the use of bootstrap procedures. It is also known that the estimating equation’s functional form can affect the outcome of the test, and various model selection procedures have been proposed to overcome this limitation. In this paper, we adopt a model averaging procedure to deal with model uncertainty at the testing stage. In addition, we leverage an automatic model-free dependent bootstrap procedure where the null is imposed by simple differencing (the block length is automatically determined using recent developments for bootstrapping dependent processes). Monte Carlo simulations indicate that this approach exhibits the lowest size distortions among its peers in settings that confound existing approaches, while it has superior power relative to those peers whose size distortions do not preclude their general use. The proposed approach is fully automatic, and there are no nuisance parameters that have to be set by the user, which ought to appeal to practitioners.
    Keywords: inference, model selection, size distortion, time series.
    Date: 2018–04
    URL: http://d.repec.org/n?u=RePEc:mcm:deptwp:2018-09&r=ecm
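    Illustration: the "impose the null by differencing" step in code. Under the unit-root null the first differences are stationary, so bootstrap samples are built by resampling blocks of differences and cumulating them back into levels. Model averaging of the test equation and the automatic block-length choice are omitted, so treat this as a sketch.

      import numpy as np

      def null_imposed_block_bootstrap(y, block=8, B=499, seed=0):
          rng = np.random.default_rng(seed)
          d = np.diff(y)                           # stationary under the null
          n = len(d)
          out = []
          for _ in range(B):
              starts = rng.integers(0, n - block + 1, size=n // block + 1)
              ds = np.concatenate([d[s:s + block] for s in starts])[:n]
              out.append(np.concatenate(([y[0]], y[0] + np.cumsum(ds))))
          return out                               # B bootstrap level series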
  12. By: Vikas Ramachandra
    Abstract: In this paper, we propose the use of causal inference techniques for survival function estimation and prediction for subgroups of the data, down to individual units. Tree ensemble methods, specifically random forests, were modified for this purpose. We use a real-world healthcare dataset of about 1800 breast cancer patients, with multiple patient covariates as well as disease-free survival days (DFS) and a binary death event indicator (y). The type of curative cancer intervention serves as the treatment variable (T=0 or 1, the binary treatment case in our example). The algorithm is a two-step approach. In step 1, we estimate heterogeneous treatment effects using a causalTree with DFS as the dependent variable. In step 2, for each selected leaf of the causalTree with a distinctly different average treatment effect (with respect to survival), we fit a survival forest to all the patients in that leaf, one forest each for treatment T=0 and T=1, to obtain estimated patient-level survival curves for each treatment (more generally, any model can be used at this step). We then subtract the patient-level survival curves to obtain the differential survival curve for a given patient, comparing the survival functions under the two treatments. The path to a selected leaf also gives the combination of patient features and values that are causally important for the treatment effect difference at that leaf.
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1803.08218&r=ecm
  13. By: Clement de Chaisemartin; Xavier D'Haultfoeuille
    Abstract: Around 20% of all empirical papers published by the American Economic Review between 2010 and 2012 estimate treatment effects using linear regressions with time and group fixed effects. In a model where the effect of the treatment is constant across groups and over time, such regressions identify the treatment effect of interest under the standard "common trends" assumption. But these regressions have not yet been analyzed under treatment effect heterogeneity. We show that under two alternative sets of assumptions, such regressions identify weighted sums of average treatment effects in each group and period, where some weights may be negative. The weights can be estimated, and can help researchers assess whether their results are robust to heterogeneous treatment effects across groups and periods. When many weights are negative, the estimates may not even have the same sign as the true average treatment effect if treatment effects are heterogeneous. We also propose another estimator of the treatment effect that does not rely on any homogeneity assumption. Finally, we estimate the weights in two applications and find that in both cases, around half of the average treatment effects receive a negative weight.
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1803.08807&r=ecm
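    Illustration: a sketch of the weight computation as we read the paper (treat it as illustrative): regress the treatment indicator on group and period dummies; for each treated cell, the two-way fixed effects coefficient weights that cell's treatment effect in proportion to the residual, normalized over treated cells, and cells with negative residuals receive negative weights.

      import numpy as np
      import pandas as pd

      def twfe_weights(df):
          """df: one row per (group, period) cell, with a 0/1 treatment column D."""
          Z = pd.get_dummies(df[["group", "period"]].astype(str), dtype=float)
          Z.insert(0, "const", 1.0)
          coef, *_ = np.linalg.lstsq(Z.values, df["D"].values, rcond=None)
          eps = df["D"].values - Z.values @ coef   # residualized treatment
          w = np.where(df["D"].values == 1, eps, 0.0)
          return w / w.sum()                       # sums to one over treated cells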
  14. By: Brantly Callaway (Department of Economics, Temple University); Pedro H. C. Sant'Anna (Department of Economics, Vanderbilt University)
    Abstract: Difference-in-Differences (DID) is one of the most important and popular designs for evaluating causal effects of policy changes. In its standard format, there are two time periods and two groups: in the first period no one is treated, and in the second period a “treatment group” becomes treated, whereas a “control group” remains untreated. However, many empirical applications of the DID design have more than two periods and variation in treatment timing. In this article, we consider identification and estimation of treatment effect parameters using DID with (i) multiple time periods, (ii) variation in treatment timing, and (iii) when the “parallel trends assumption” holds potentially only after conditioning on observed covariates. We propose a simple two-step estimation strategy, establish the asymptotic properties of the proposed estimators, and prove the validity of a computationally convenient bootstrap procedure. Furthermore, we propose a semiparametric data-driven testing procedure to assess the credibility of the DID design in our context. Finally, we analyze the effect of the minimum wage on teen employment from 2001-2007. By using our proposed methods we confront the challenges related to variation in the timing of the state-level minimum wage policy changes. Open-source software is available for implementing the proposed methods.
    Keywords: Difference-in-Differences, Multiple Periods, Variation in Treatment Timing, Pre-Testing, Minimum Wage
    JEL: C14 C21 C23 J23 J38
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:tem:wpaper:1804&r=ecm
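    Illustration: the group-time average treatment effect ATT(g, t) in its simplest unconditional form (covariate conditioning and the bootstrap are omitted): the outcome change from period g-1 to t for the cohort first treated in g, minus the same change for never-treated units.

      import numpy as np

      def att_gt(y, first_treated, g, t):
          """y: dict period -> outcome array; first_treated: np.inf if never treated."""
          dy = y[t] - y[g - 1]
          return dy[first_treated == g].mean() - dy[np.isinf(first_treated)].mean()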
  15. By: Greg Lewis; Vasilis Syrgkanis
    Abstract: We provide an approach for learning deep neural net representations of models described via conditional moment restrictions. Conditional moment restrictions are widely used, as they are the language by which social scientists describe the assumptions they make to enable causal inference. We formulate the problem of estimating the underlying model as a zero-sum game between a modeler and an adversary, and apply adversarial training. Our approach is similar in nature to Generative Adversarial Networks (GANs), though here the modeler is learning a representation of a function that satisfies a continuum of moment conditions and the adversary is identifying violating moments. We outline ways of constructing effective adversaries in practice, including kernels centered by k-means clustering, and random forests. We examine the practical performance of our approach in the setting of non-parametric instrumental variable regression.
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1803.07164&r=ecm
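    Illustration: a schematic of the zero-sum formulation in standard notation (assumed here). The conditional moment restriction E[rho(W; theta) | Z] = 0 is equivalent to unconditional moments E[f(Z) rho(W; theta)] = 0 for all test functions f, so the modeler minimizes while the adversary searches for the most violated moment:

      \min_{\theta} \; \max_{f \in \mathcal{F}} \;
        \Big( \mathbb{E}\big[ f(Z)\, \rho(W; \theta) \big] \Big)^{2}.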
  16. By: Alberto Abadie
    Abstract: Significance tests are probably the most common form of inference in empirical economics, and significance is often interpreted as providing greater informational content than non-significance. In this article we show, however, that rejection of a point null often carries very little information, while failure to reject may be highly informative. This is particularly true in empirical contexts that are typical and even prevalent in economics, where data sets are large (and becoming larger) and where there are rarely reasons to put substantial prior probability on a point null. Our results challenge the usual practice of conferring point null rejections a higher level of scientific significance than non-rejections. In consequence, we advocate a visible reporting and discussion of non-significant results in empirical practice.
    JEL: C01 C12
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:24403&r=ecm
  17. By: Luca Spadafora; Francesca Sivero; Nicola Picchiotti
    Abstract: This paper proposes a new integrated variance estimator based on order statistics within the framework of jump-diffusion models. Its ability to disentangle the integrated variance from the total quadratic variation of the process is confirmed by both simulated and empirical tests. For practical purposes, we introduce an iterative algorithm to estimate the time-varying volatility and the jumps that occurred in log-return time series. These estimates enable the definition of a new market risk model for Value at Risk forecasting. We show empirically that this procedure outperforms the standard historical simulation method under a standard back-testing approach.
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1803.07021&r=ecm
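    Illustration: the benchmark the paper claims to outperform, plain historical-simulation VaR, i.e., an empirical quantile of a trailing window of returns (the paper's own order-statistics estimator is more involved and is not reproduced here).

      import numpy as np

      def historical_var(returns, alpha=0.99, window=250):
          """One-day VaR at level alpha, reported as a positive loss."""
          return -np.quantile(returns[-window:], 1 - alpha)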
  18. By: Ziping Zhao; Daniel P. Palomar
    Abstract: In this paper, the estimation problem for the sparse reduced rank regression (SRRR) model is considered. The SRRR model is widely used for dimension reduction and variable selection, with applications in signal processing, econometrics, etc. The problem is formulated as minimizing a least squares loss with a sparsity-inducing penalty under an orthogonality constraint. Convex sparsity-inducing functions have been used for SRRR in the literature. In this work, a nonconvex function is proposed for better sparsity induction. An efficient algorithm based on the alternating minimization (or projection) method is developed to solve the nonconvex optimization problem. Numerical simulations show that the proposed algorithm is much more efficient than the benchmark methods and that the nonconvex function can result in better estimation accuracy.
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1803.07247&r=ecm
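    Illustration: an alternating-minimization sketch for SRRR, here with a convex l1 penalty and soft-thresholding for simplicity (the paper advocates a nonconvex penalty instead). It minimizes 0.5*||Y - X A B'||_F^2 + lam*||A||_1 subject to B'B = I, alternating a Procrustes update for B with a proximal-gradient step for A.

      import numpy as np

      def soft(x, t):
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      def srrr(X, Y, rank, lam=0.1, iters=50, seed=0):
          A = np.random.default_rng(seed).normal(size=(X.shape[1], rank)) * 0.01
          step = 1.0 / np.linalg.norm(X, 2) ** 2        # 1/L for the smooth part
          for _ in range(iters):
              U, _, Vt = np.linalg.svd(Y.T @ (X @ A), full_matrices=False)
              B = U @ Vt                                # Procrustes update: B'B = I
              grad = X.T @ (X @ A - Y @ B)
              A = soft(A - step * grad, step * lam)     # proximal gradient on A
          return A, B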
  19. By: Mário Fernando de Sousa; Helton Saulo; Víctor Leiva; Paulo Scalco
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:anp:en2016:129&r=ecm

This nep-ecm issue is ©2018 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.