nep-ecm New Economics Papers
on Econometrics
Issue of 2021‒02‒01
thirty-two papers chosen by
Sune Karlsson
Örebro universitet

  1. Density estimation using bootstrap quantile variance and quantile-mean covariance By Gabriel Montes Rojas; Andrés Sebastián Mena
  2. Asymptotic Properties of Least Squares Estimator in Local to Unity Processes with Fractional Gaussian Noises By Wang, Xiaohu; Xiao, Weilin; Yu, Jun
  3. Quantile regression with generated dependent variable and covariates By Jayeeta Bhattacharya
  4. Three questions regarding impulse responses and their interpretation found from sign restrictions By Sam Ouliaris; Adrian Pagan
  5. Achieving Reliable Causal Inference with Data-Mined Variables: A Random Forest Approach to the Measurement Error Problem By Mochen Yang; Edward McFowland III; Gordon Burtch; Gediminas Adomavicius
  6. Long-term prediction intervals with many covariates By Sayar Karmakar; Marek Chudy; Wei Biao Wu
  7. Weak Identification with Bounds in a Class of Minimum Distance Models By Gregory Cox
  8. Bias-Aware Inference in Regularized Regression Models By Timothy B. Armstrong; Michal Kolesár; Soonwoo Kwon
  9. Subgraph Network Random Effects Error Components Models: Specification and Testing By Gabriel Montes Rojas
  10. Measure Transportation and Statistical Decision Theory By Marc Hallin
  11. Bayesian Fuzzy Clustering with Robust Weighted Distance for Multiple ARIMA and Multivariate Time-Series By Pacifico, Antonio
  12. Assessing the Sensitivity of Synthetic Control Treatment Effect Estimates to Misspecification Error By Billy Ferguson; Brad Ross
  13. Identification of inferential parameters in the covariate-normalized linear conditional logit model By Philip Erickson
  14. Weighting-Based Treatment Effect Estimation via Distribution Learning By Dongcheng Zhang; Kunpeng Zhang
  15. Time-Transformed Test for the Explosive Bubbles under Non-stationary Volatility By Eiji Kurozumi; Anton Skrobotov; Alexey Tsarev
  16. Adversarial Estimation of Riesz Representers By Victor Chernozhukov; Whitney Newey; Rahul Singh; Vasilis Syrgkanis
  17. Minimax Risk and Uniform Convergence Rates for Nonparametric Dyadic Regression By Bryan S. Graham; Fengshi Niu; James L. Powell
  18. A closed-form estimator for quantile treatment effects with endogeneity By Wuthrich, Kaspar
  19. Discordant Relaxations of Misspecified Models By Désiré Kédagni; Lixiong Li; Ismaël Mourifié
  20. Measuring Uncertainty of a Combined Forecast and Some Tests for Forecaster Heterogeneity By Kajal Lahiri; Huaming Peng; Xuguang Sheng
  21. Integrated nested Laplace approximations for threshold stochastic volatility models By Lopes Moreira Da Veiga, María Helena; Rue, Havard; Marín Díazaraque, Juan Miguel; Zea Bermudez, P. De
  22. Bayesian analysis of seasonally cointegrated VAR model By Justyna Wróblewska
  23. Non-Manipulable Machine Learning: The Incentive Compatibility of Lasso By Mehmet Caner; Kfir Eliaz
  24. Structural Estimation of Time-Varying Spillovers: An Application to International Credit Risk Transmission By Boeckelmann Lukas; Stalla-Bourdillon Arthur
  25. An expectation-maximization algorithm for the exponential-generalized inverse Gaussian regression model with varying dispersion and shape for modelling the aggregate claim amount By Tzougas, George; Jeong, Himchan
  26. Granular Instrumental Variables By Xavier Gabaix; Ralph S. J. Koijen
  27. Analysis of Randomized Experiments with Network Interference and Noncompliance By Bora Kim
  28. Assessing Sensitivity to Unconfoundedness: Estimation and Inference By Matthew A. Masten; Alexandre Poirier; Linqi Zhang
  29. Statistical decision functions with judgment By Manganelli, Simone
  30. Exact Trend Control in Estimating Treatment Effects Using Panel Data with Heterogenous Trends By Chirok Han
  31. Almost Similar Tests for Mediation Effects and other Hypotheses with Singularities By Kees Jan van Garderen; Noud van Giersbergen
  32. Algorithms for Learning Graphs in Financial Markets By José Vinícius de Miranda Cardoso; Jiaxi Ying; Daniel Perez Palomar

  1. By: Gabriel Montes Rojas (Instituto Interdisciplinario de Economía Política de Buenos Aires - UBA - CONICET); Andrés Sebastián Mena (Instituto Superior de Estudios Sociales - CONICET)
    Abstract: We propose two novel bootstrap density estimators based on the quantile variance and the quantile-mean covariance. We review previous developments on quantile-density estimation and asymptotic results in the literature that can be applied to this case. We conduct Monte Carlo simulations for different data generating processes, sample sizes, and parameters. The estimators perform well in comparison to the benchmark nonparametric kernel density estimator. Some of the explored smoothing techniques present lower bias and mean integrated squared errors, which indicates that the proposed estimators are a promising strategy.
    Keywords: Density Estimation, Quantile Variance, Quantile-Mean Covariance, Bootstrap
    JEL: C13 C14 C15 C46
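    The core relation behind a quantile-variance density estimator can be sketched in a few lines: the asymptotic variance of a sample tau-quantile is tau(1-tau)/(n f(q_tau)^2), so a bootstrap estimate of that variance can be inverted to recover f at the sample quantiles. This is a minimal illustration of the general idea only, not the authors' estimators; all tuning choices below (number of bootstrap draws, quantile levels) are illustrative assumptions.

```python
import numpy as np

def quantile_variance_density(x, taus, n_boot=500, seed=0):
    """Estimate the density f at the sample tau-quantiles by inverting the
    asymptotic relation Var(q_hat_tau) ~ tau(1-tau) / (n * f(q_tau)^2),
    with the quantile variance estimated by the bootstrap."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    taus = np.atleast_1d(taus).astype(float)
    n = len(x)
    q_hat = np.quantile(x, taus)
    boot_q = np.empty((n_boot, len(taus)))
    for b in range(n_boot):
        # resample with replacement and recompute the quantiles
        boot_q[b] = np.quantile(rng.choice(x, size=n, replace=True), taus)
    var_q = boot_q.var(axis=0, ddof=1)
    f_hat = np.sqrt(taus * (1.0 - taus) / (n * var_q))
    return q_hat, f_hat
```

    For a standard normal sample, the estimate at tau = 0.5 should be close to the true density at the median, 1/sqrt(2*pi) ≈ 0.399.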
  2. By: Wang, Xiaohu (Fudan University); Xiao, Weilin (Zhejiang University); Yu, Jun (School of Economics, Singapore Management University)
    Abstract: This paper derives asymptotic properties of the least squares estimator of the autoregressive parameter in local to unity processes with errors being fractional Gaussian noises with the Hurst parameter H. It is shown that the estimator is consistent when H ∈ (0, 1). Moreover, the rate of convergence is n when H ∈ [0.5, 1). The rate of convergence is n^{2H} when H ∈ (0, 0.5). Furthermore, the limit distribution of the centered least squares estimator depends on H. When H = 0.5, the limit distribution is the same as that obtained in Phillips (1987a) for the local to unity model with errors for which the standard functional central limit theorem is applicable. When H > 0.5 or when H
    Keywords: Least squares; Local to unity; Fractional Brownian motion; Fractional Ornstein-Uhlenbeck process
    JEL: C22
    Date: 2020–12–23
  3. By: Jayeeta Bhattacharya
    Abstract: We study linear quantile regression models when regressors and/or the dependent variable are not directly observed but estimated in an initial first step and used in the second-step quantile regression for estimating the quantile parameters. This general class of generated quantile regression (GQR) covers various statistical applications, for instance, estimation of endogenous quantile regression models and triangular structural equation models, and some new relevant applications are discussed. We study the asymptotic distribution of the two-step estimator, which is challenging because of the presence of generated covariates and/or dependent variable in the non-smooth quantile regression estimator. We employ techniques from empirical process theory to find a uniform Bahadur expansion for the two-step estimator, which is used to establish the asymptotic results. We illustrate the performance of the GQR estimator through simulations and an empirical application based on auctions.
    Date: 2020–12
  4. By: Sam Ouliaris; Adrian Pagan
    Abstract: When sign restrictions are used in SVARs, impulse responses are only set identified. If sign restrictions are given for just a single shock, the shocks may not be separated, and so the resulting structural equations can be unacceptable. Thus, in a supply-demand model, if only signs are given for the impulse responses to a demand shock, this may result in two supply curves being in the SVAR. One needs to find the identified set so that this effect is excluded. Granziera et al.'s (2018) frequentist approach to inference potentially suffers from this issue. One also has to recognize that the identified set should be adjusted so that it produces responses to the same size shock. Finally, because researchers are often unwilling to set out sign restrictions to separate all shocks, we describe how this can be done with a SVAR/VAR system rather than a straight SVAR.
    Keywords: SVAR, Sign Restrictions, Identified Set
    JEL: E37 C51 C52
    Date: 2020–11
  5. By: Mochen Yang; Edward McFowland III; Gordon Burtch; Gediminas Adomavicius
    Abstract: Combining machine learning with econometric analysis is becoming increasingly prevalent in both research and practice. A common empirical strategy involves the application of predictive modeling techniques to 'mine' variables of interest from available data, followed by the inclusion of those variables into an econometric framework, with the objective of estimating causal effects. Recent work highlights that, because the predictions from machine learning models are inevitably imperfect, econometric analyses based on the predicted variables are likely to suffer from bias due to measurement error. We propose a novel approach to mitigate these biases, leveraging the ensemble learning technique known as the random forest. We propose employing random forest not just for prediction, but also for generating instrumental variables to address the measurement error embedded in the prediction. The random forest algorithm performs best when comprised of a set of trees that are individually accurate in their predictions, yet which also make 'different' mistakes, i.e., have weakly correlated prediction errors. A key observation is that these properties are closely related to the relevance and exclusion requirements of valid instrumental variables. We design a data-driven procedure to select tuples of individual trees from a random forest, in which one tree serves as the endogenous covariate and the other trees serve as its instruments. Simulation experiments demonstrate the efficacy of the proposed approach in mitigating estimation biases and its superior performance over three alternative methods for bias correction.
    Date: 2020–12
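    The measurement-error logic behind this approach can be illustrated with a toy simulation in which two independently noisy proxies of a latent variable stand in for two trees of a random forest (individually accurate predictions with weakly correlated mistakes). This is a hedged sketch of the instrumental-variable idea only, not the authors' tree-selection procedure; all numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x_true = rng.standard_normal(n)              # latent variable of interest
y = 2.0 * x_true + rng.standard_normal(n)    # outcome, true coefficient = 2

# Two noisy "predictions" of x_true with independent errors, standing in
# for two trees of a random forest.
x1 = x_true + rng.standard_normal(n)         # endogenous (mismeasured) covariate
x2 = x_true + rng.standard_normal(n)         # instrument

# Naive OLS of y on x1 suffers attenuation bias toward zero
# (here roughly halved, since signal and noise variances are equal).
beta_ols = (x1 @ y) / (x1 @ x1)

# 2SLS: first stage projects x1 on x2, second stage uses fitted values,
# which removes the measurement-error bias.
gamma = (x2 @ x1) / (x2 @ x2)
x1_hat = gamma * x2
beta_iv = (x1_hat @ y) / (x1_hat @ x1_hat)
```

    With these variances the OLS estimate is attenuated to about 1.0, while the IV estimate recovers the true coefficient of 2.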
  6. By: Sayar Karmakar; Marek Chudy; Wei Biao Wu
    Abstract: Accurate forecasting is a fundamental focus in the econometric time-series literature. Practitioners and policy makers often want to predict outcomes over an entire future time horizon instead of just a single $k$-step-ahead prediction. These series, apart from their own possible non-linear dependence, are often also influenced by many external predictors. In this paper, we construct prediction intervals of time-aggregated forecasts in a high-dimensional regression setting. Our approach is based on quantiles of residuals obtained by the popular LASSO routine. We allow for general heavy-tailed, long-memory, and nonlinear stationary error processes and stochastic predictors. Through a series of systematically arranged consistency results, we provide theoretical guarantees of our proposed quantile-based method in all of these scenarios. After validating our approach using simulations, we also propose a novel bootstrap-based method that can boost the coverage of the theoretical intervals. Finally, analyzing the EPEX Spot data, we construct prediction intervals for hourly electricity prices over horizons spanning 17 weeks and contrast them to selected Bayesian and bootstrap interval forecasts.
    Date: 2020–12
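    A stripped-down version of the residual-quantile idea for time-aggregated forecasts, with OLS standing in for the LASSO fit and i.i.d. residual resampling in place of the paper's dependence-aware theory; both simplifications are assumptions made only to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, h = 400, 5, 8   # sample size, number of predictors, horizon to aggregate

X = rng.standard_normal((n, p))
beta = np.array([1.0, -0.5, 0.25, 0.0, 0.0])
y = X @ beta + rng.standard_normal(n)

# Fit on a training window (OLS here stands in for the paper's LASSO fit).
X_tr, y_tr = X[: n - h], y[: n - h]
b_hat = np.linalg.lstsq(X_tr, y_tr, rcond=None)[0]
resid = y_tr - X_tr @ b_hat

# Time-aggregated point forecast over the next h periods.
point = (X[n - h :] @ b_hat).sum()

# 90% prediction interval from quantiles of resampled sums of h residuals.
sums = np.array([rng.choice(resid, size=h, replace=True).sum()
                 for _ in range(2000)])
lo, hi = point + np.quantile(sums, [0.05, 0.95])
```

    The interval widens with the aggregation horizon h, since the resampled sums accumulate h residual draws.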
  7. By: Gregory Cox
    Abstract: When parameters are weakly identified, bounds on the parameters may provide a valuable source of information. Existing weak identification estimation and inference results are unable to combine weak identification with bounds. Within a class of minimum distance models, this paper proposes identification-robust inference that incorporates information from bounds when parameters are weakly identified. The inference is based on limit theory that combines weak identification theory (Andrews and Cheng (2012)) with parameter-on-the-boundary theory (Andrews (1999)) via a new argmax theorem. This paper characterizes weak identification in low-dimensional factor models (due to weak factors) and demonstrates the role of the bounds and identification-robust inference in two example factor models. This paper also demonstrates the identification-robust inference in an empirical application: estimating the effects of a randomized intervention on parental investments in children, where parental investments are modeled by a factor model.
    Date: 2020–12
  8. By: Timothy B. Armstrong; Michal Kolesár; Soonwoo Kwon
    Abstract: We consider inference on a regression coefficient under a constraint on the magnitude of the control coefficients. We show that a class of estimators based on an auxiliary regularized regression of the regressor of interest on control variables exactly solves a tradeoff between worst-case bias and variance. We derive "bias-aware" confidence intervals (CIs) based on these estimators, which take into account possible bias when forming the critical value. We show that these estimators and CIs are near-optimal in finite samples for mean squared error and CI length. Our finite-sample results are based on an idealized setting with normal regression errors with known homoskedastic variance, and we provide conditions for asymptotic validity with unknown and possibly heteroskedastic error distribution. Focusing on the case where the constraint on the magnitude of control coefficients is based on an $\ell_p$ norm ($p\ge 1$), we derive rates of convergence for optimal estimators and CIs under high-dimensional asymptotics that allow the number of regressors to increase more quickly than the number of observations.
    Date: 2020–12
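    The "bias-aware" critical value described above has a simple generic characterization: if the worst-case bias is t standard errors, the half-length multiplier cv solves P(|N(t,1)| <= cv) = 1 - alpha. A minimal stdlib sketch of this construction by bisection (a standard device in this literature, not the authors' code):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bias_aware_cv(t, alpha=0.05):
    """Critical value cv solving P(|N(t,1)| <= cv) = 1 - alpha: the CI
    half-length multiplier that keeps coverage when the estimator's
    worst-case bias is t standard errors."""
    lo, hi = 0.0, 20.0
    for _ in range(200):   # bisection on the monotone coverage function
        mid = 0.5 * (lo + hi)
        coverage = norm_cdf(mid - t) - norm_cdf(-mid - t)
        if coverage < 1 - alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    At t = 0 this reduces to the usual 1.96; for large t it approaches t + 1.645, the one-sided normal quantile shifted by the bias.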
  9. By: Gabriel Montes Rojas (Instituto Interdisciplinario de Economía Política de Buenos Aires - UBA - CONICET)
    Abstract: This paper develops a subgraph network random effects error components model for network data regressions. In particular, it allows for edge- and triangle-specific components, which serve as a basal model for modeling network effects. It then evaluates the potential effects of ignoring network effects in the estimation of the variance-covariance matrix. It also proposes a consistent estimator of the variance components and Lagrange Multiplier tests for evaluating the appropriate model of random components in networks. Monte Carlo simulations show that the tests have good performance in finite samples. The proposed tests are applied to the call interbank market in Argentina.
    Keywords: Networks, Clusters, Moulton Factor
    JEL: C2 C12
  10. By: Marc Hallin
    Abstract: Unlike the real line, the real space, in dimension $d\geq 2$, is not canonically ordered. As a consequence, extending to a multivariate context fundamental univariate statistical tools such as quantiles, signs, and ranks is anything but obvious. Tentative definitions have been proposed in the literature but do not enjoy the basic properties (e.g. distribution-freeness of ranks, their independence with respect to the order statistic, their independence with respect to signs, etc.) they are expected to satisfy. Based on measure transportation ideas, new concepts of distribution and quantile functions, ranks, and signs have been proposed recently that, unlike previous attempts, do satisfy these properties. These ranks, signs, and quantiles have been used, quite successfully, in several inference problems and have triggered, in a short span of time, a number of applications: fully distribution-free testing for multiple-output regression, MANOVA, and VAR models, R-estimation for VARMA parameters, distribution-free testing for vector independence, multiple-output quantile regression, nonlinear independent component analysis, etc.
    Keywords: Measure transportation; statistical decision theory
    JEL: C44
    Date: 2021–01
  11. By: Pacifico, Antonio
    Abstract: The paper suggests and develops a computational approach to improve hierarchical fuzzy clustering time-series analysis when accounting for high dimensional and noise problems in dynamic data. A Robust Weighted Distance measure between pairs of sets of Auto-Regressive Integrated Moving Average models is used. It is robust because Bayesian Model Selection methodology is performed with a set of conjugate informative priors in order to discover the most probable set of clusters capturing different dynamics and interconnections among time-varying data, and weighted because each time-series is 'adjusted' by its own Posterior Model Size distribution in order to group dynamic data objects into 'ad hoc' homogeneous clusters. Monte Carlo methods are used to compute exact posterior probabilities for each cluster chosen and thus avoid the problem of increasing the overall probability of errors that plagues classical statistical methods based on significance tests. Empirical and simulated examples describe the functioning and the performance of the procedure. Discussion of related works and possible extensions of the methodology to jointly deal with endogeneity issues and misspecified dynamics in high dimensional multicountry setups is also provided.
    Keywords: Distance Measures; Fuzzy Clustering; ARIMA Time-Series; Bayesian Model Selection; MCMC Integrations.
    JEL: C1 C52 C61
    Date: 2020
  12. By: Billy Ferguson; Brad Ross
    Abstract: We propose a sensitivity analysis for Synthetic Control (SC) treatment effect estimates to interrogate the assumption that the SC method is well-specified, namely that choosing weights to minimize pre-treatment prediction error yields accurate predictions of counterfactual post-treatment outcomes. Our data-driven procedure recovers the set of treatment effects consistent with the assumption that the misspecification error incurred by the SC method is at most the observable misspecification error incurred when using the SC estimator to predict the outcomes of some control unit. We show that under one definition of misspecification error, our procedure provides a simple, geometric motivation for comparing the estimated treatment effect to the distribution of placebo residuals to assess estimate credibility. When applied to several canonical studies that use the SC method, our procedure demonstrates that the signs of most of those results are relatively robust.
    Date: 2020–12
  13. By: Philip Erickson
    Abstract: The conditional logit model is a standard workhorse approach to estimating customers' product feature preferences using choice data. Using these models at scale, however, can result in numerical imprecision and optimization failure due to a combination of large-valued covariates and the softmax probability function. Standard machine learning approaches alleviate these concerns by applying a normalization scheme to the matrix of covariates, scaling all values to sit within some interval (such as the unit simplex). While this type of normalization is innocuous when using models for prediction, it has the side effect of perturbing the estimated coefficients, which are necessary for researchers interested in inference. This paper shows that, for two common classes of normalizers, designated scaling and centered scaling, the data-generating non-scaled model parameters can be analytically recovered along with their asymptotic distributions. The paper also shows the numerical performance of the analytical results using an example of a scaling normalizer.
    Date: 2020–12
  14. By: Dongcheng Zhang; Kunpeng Zhang
    Abstract: Existing weighting methods for treatment effect estimation are often built upon the idea of propensity scores or covariate balance. They usually impose strong assumptions on the treatment assignment or outcome model to obtain unbiased estimation, such as linearity or specific functional forms, which easily leads to the major drawback of model misspecification. In this paper, we aim to alleviate these issues by developing a distribution learning-based weighting method. We first learn the true underlying distribution of covariates conditioned on treatment assignment, then leverage the ratio of covariates' density in the treatment group to that of the control group as the weight for estimating treatment effects. Specifically, we propose to approximate the distribution of covariates in both treatment and control groups through invertible transformations via change of variables. To demonstrate the superiority, robustness, and generalizability of our method, we conduct extensive experiments using synthetic and real data. From the experiment results, we find that our method for estimating the average treatment effect on the treated (ATT) with observational data outperforms several cutting-edge weighting-only benchmark methods, and it maintains its advantage under a doubly-robust estimation framework that combines weighting with some advanced outcome modeling methods.
    Date: 2020–12
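    The density-ratio weighting idea for the ATT can be sketched with a plain one-dimensional Gaussian KDE standing in for the paper's invertible-transformation density learner; the data-generating process, bandwidth, and sample size below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
x = rng.standard_normal(n)
p_treat = 1.0 / (1.0 + np.exp(-x))            # treatment more likely for high x
d = rng.uniform(size=n) < p_treat
y = x + 2.0 * d + rng.standard_normal(n)      # true ATT = 2

def kde(points, grid, h):
    """Plain Gaussian kernel density estimate, standing in for the paper's
    learned covariate densities."""
    z = (grid[:, None] - points[None, :]) / h
    return np.exp(-0.5 * z**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

h = 0.3
x1, x0, y0 = x[d], x[~d], y[~d]

# Weight each control unit by f1(x)/f0(x) so the reweighted control group
# matches the covariate distribution of the treated group.
w = kde(x1, x0, h) / kde(x0, x0, h)
att = y[d].mean() - (w * y0).sum() / w.sum()
```

    The unweighted difference in means is biased upward here because treated units have higher x; the density-ratio weights remove that confounding.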
  15. By: Eiji Kurozumi; Anton Skrobotov; Alexey Tsarev
    Abstract: This paper is devoted to testing for explosive bubbles under time-varying non-stationary volatility. Because the limiting distribution of the seminal Phillips et al. (2011) test depends on the variance function and usually requires a bootstrap implementation under heteroskedasticity, we construct the test based on a deformation of the time domain. The proposed test is asymptotically pivotal under the null hypothesis and its limiting distribution coincides with that of the standard test under homoskedasticity, so that the test does not require computationally extensive methods for inference. Appealing finite sample properties are demonstrated through Monte Carlo simulations. An empirical application demonstrates that the upsurge behavior of cryptocurrency time series in the middle of the sample is partially explained by the volatility change.
    Date: 2020–12
  16. By: Victor Chernozhukov; Whitney Newey; Rahul Singh; Vasilis Syrgkanis
    Abstract: We provide an adversarial approach to estimating Riesz representers of linear functionals within arbitrary function spaces. We prove oracle inequalities based on the localized Rademacher complexity of the function space used to approximate the Riesz representer and the approximation error. These inequalities imply fast finite sample mean-squared-error rates for many function spaces of interest, such as high-dimensional sparse linear functions, neural networks and reproducing kernel Hilbert spaces. Our approach offers a new way of estimating Riesz representers with a plethora of recently introduced machine learning techniques. We show how our estimator can be used in the context of de-biasing structural/causal parameters in semi-parametric models, for automated orthogonalization of moment equations and for estimating the stochastic discount factor in the context of asset pricing.
    Date: 2020–12
  17. By: Bryan S. Graham; Fengshi Niu; James L. Powell
    Abstract: Let $i=1,\ldots,N$ index a simple random sample of units drawn from some large population. For each unit we observe the vector of regressors $X_{i}$ and, for each of the $N\left(N-1\right)$ ordered pairs of units, an outcome $Y_{ij}$. The outcomes $Y_{ij}$ and $Y_{kl}$ are independent if their indices are disjoint, but dependent otherwise (i.e., "dyadically dependent"). Let $W_{ij}=\left(X_{i}',X_{j}'\right)'$; using the sampled data we seek to construct a nonparametric estimate of the mean regression function $g\left(W_{ij}\right)\overset{def}{\equiv}\mathbb{E}\left[\left.Y_{ij}\right|X_{i},X_{j}\right].$ We present two sets of results. First, we calculate lower bounds on the minimax risk for estimating the regression function at (i) a point and (ii) under the infinity norm. Second, we calculate (i) pointwise and (ii) uniform convergence rates for the dyadic analog of the familiar Nadaraya-Watson (NW) kernel regression estimator. We show that the NW kernel regression estimator achieves the optimal rates suggested by our risk bounds when an appropriate bandwidth sequence is chosen. This optimal rate differs from the one available under iid data: the effective sample size is smaller and $d_W=\mathrm{dim}(W_{ij})$ influences the rate differently.
    Date: 2020–12
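    A minimal version of the dyadic Nadaraya-Watson estimator on simulated data, with unit-specific effects inducing the dyadic dependence between pairs that share a unit. This is a sketch under simple made-up assumptions (the regression function g, the effect sizes, the bandwidth), not the authors' implementation or their optimal bandwidth choice.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
x = rng.uniform(-1.0, 1.0, N)

# Dyadic outcomes: Y_ij = g(x_i, x_j) + a_i + a_j + noise for i != j,
# where the unit effects a_i make dyads sharing a unit dependent.
a = 0.15 * rng.standard_normal(N)
g = lambda xi, xj: xi * xj
Y = g(x[:, None], x[None, :]) + a[:, None] + a[None, :] \
    + 0.1 * rng.standard_normal((N, N))

def nw_dyadic(w1, w2, h):
    """Nadaraya-Watson estimate of g(w1, w2) from the N(N-1) ordered pairs,
    using a product Gaussian kernel in the two covariates."""
    k = np.exp(-0.5 * (((x[:, None] - w1) / h) ** 2
                       + ((x[None, :] - w2) / h) ** 2))
    np.fill_diagonal(k, 0.0)     # drop the self-pairs i == j
    return (k * Y).sum() / k.sum()
```

    With g(xi, xj) = xi * xj, the estimate at (0.5, 0.5) should be near 0.25 and at (-0.5, 0.5) near -0.25; the dyadic dependence mainly slows the convergence rate rather than shifting the target.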
  18. By: Wuthrich, Kaspar
    Keywords: Instrumental variables, Conditional and unconditional quantile, treatment effects, Distribution regression, Exchangeable bootstrap, Econometrics, Statistics, Applied Economics
    Date: 2019–06–01
  19. By: Désiré Kédagni; Lixiong Li; Ismaël Mourifié
    Abstract: In many set-identified models, it is difficult to obtain a tractable characterization of the identified set; therefore, empirical work often constructs confidence regions based on an outer set of the identified set. Because an outer set is always a superset of the identified set, this practice is often viewed as conservative yet valid. However, this paper shows that, when the model is refuted by the data, a nonempty outer set could deliver results conflicting with another outer set derived from the same underlying model structure, so that outer-set results could be misleading in the presence of misspecification. We provide a sufficient condition for the existence of discordant outer sets which covers models characterized by intersection bounds and the Artstein (1983) inequalities. Furthermore, we develop a method to salvage misspecified models. We consider all minimum relaxations of a refuted model which restore data-consistency. We find that the union of the identified sets of these minimum relaxations is misspecification-robust and has a new and intuitive empirical interpretation. Although this paper primarily focuses on discrete relaxations, our new interpretation also applies to continuous relaxations.
    Date: 2020–12
  20. By: Kajal Lahiri; Huaming Peng; Xuguang Sheng
    Abstract: We have argued that from the standpoint of a policy maker who has access to a number of expert forecasts, the uncertainty of a combined forecast should be interpreted as that of a typical forecaster randomly drawn from the pool. With a standard factor decomposition of a panel of forecasts, we show that the uncertainty of a typical forecaster can be expressed as the disagreement among the forecasters plus the volatility of the common shock. Using new statistics to test for the homogeneity of idiosyncratic errors under the joint limits with both T and n approaching infinity simultaneously, we find that some previously used measures significantly underestimate the conceptually correct benchmark forecast uncertainty.
    Keywords: disagreement, forecast combination, panel data, uncertainty
    JEL: C12 C33 E37
    Date: 2020
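    The decomposition described above, in which the uncertainty of a typical forecaster equals the disagreement among forecasters plus the volatility of the common shock, can be checked numerically on a simulated panel of forecast errors. This is a toy illustration with made-up variances, not the authors' estimator or their tests for idiosyncratic-error homogeneity.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 5000, 30                               # time periods, forecasters
common = rng.standard_normal(T)               # common shock, variance 1
idio = 0.5 * rng.standard_normal((T, n))      # idiosyncratic errors, variance 0.25

errors = common[:, None] + idio               # forecast errors e_{it}

# Uncertainty of a typical forecaster: average squared error across i and t.
typical = (errors ** 2).mean()

# Disagreement: average cross-sectional variance of the errors at each t
# (only the idiosyncratic component contributes, since the common shock
# is the same for every forecaster).
disagreement = errors.var(axis=1, ddof=0).mean()

# Volatility of the common shock, recovered from the consensus error.
common_var = errors.mean(axis=1).var()

decomposition = disagreement + common_var     # should match `typical`
```

    The consensus-error variance alone (about 1.0 here) understates the typical forecaster's uncertainty (about 1.25), which is the underestimation the abstract warns about.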
  21. By: Lopes Moreira Da Veiga, María Helena; Rue, Havard; Marín Díazaraque, Juan Miguel; Zea Bermudez, P. De
    Abstract: The aim of the paper is to implement the integrated nested Laplace approximations (INLA), known to be very fast and efficient, for a threshold stochastic volatility model. INLA replaces MCMC simulations with accurate deterministic approximations. We use proper, although not very informative, priors and Penalizing Complexity (PC) priors. The simulation results favor the use of PC priors, especially when the sample size varies from small to moderate. For these sample sizes, they provide more accurate estimates of the model's parameters, but as the sample size increases both types of priors lead to reliable estimates of the parameters. We also validate the estimation method in-sample and out-of-sample by applying it to six series of returns, including stock market, commodity, and cryptocurrency returns, and by forecasting their one-day-ahead volatilities. Our empirical results support that the TSV model does a good job in forecasting the one-day-ahead volatility of stock market and gold returns but faces difficulties when the volatility of returns is extreme, which occurs in the case of cryptocurrencies.
    Keywords: Threshold Stochastic Volatility Model; PC Priors; INLA
    JEL: C58 C52 C32 C13
    Date: 2021–01–27
  22. By: Justyna Wróblewska
    Abstract: The paper aims at developing the Bayesian seasonally cointegrated model for quarterly data. We propose the prior structure, derive the set of full conditional posterior distributions, and propose the sampling scheme. The identification of cointegrating spaces is obtained via orthonormality restrictions imposed on vectors spanning them. In the case of annual frequency, the cointegrating vectors are complex, which should be taken into account when identifying them. The point estimation of the cointegrating spaces is also discussed. The presented methods are illustrated by a simulation experiment and are employed in the analysis of money and prices in the Polish economy.
    Date: 2020–12
  23. By: Mehmet Caner; Kfir Eliaz
    Abstract: We consider situations where a user feeds her attributes to a machine learning method that tries to predict her best option based on a random sample of other users. The predictor is incentive-compatible if the user has no incentive to misreport her covariates. Focusing on the popular Lasso estimation technique, we borrow tools from high-dimensional statistics to characterize sufficient conditions that ensure that Lasso is incentive compatible in large samples. In particular, we show that incentive compatibility is achieved if the tuning parameter is kept above some threshold. We present simulations that illustrate how this can be done in practice.
    Date: 2021–01
  24. By: Boeckelmann Lukas; Stalla-Bourdillon Arthur
    Abstract: We propose a novel approach to quantify spillovers on financial markets based on a structural version of the Diebold-Yilmaz framework. Key to our approach is a SVAR-GARCH model that is statistically identified by heteroskedasticity, economically identified by maximum shock contribution, and that allows for time-varying forecast error variance decompositions. We analyze credit risk spillovers between EZ sovereign and bank CDS. Methodologically, we find the model to better match economic narratives compared with common spillover approaches and to be more reactive than models relying on rolling window estimations. We find that spillovers explain, on average, 37% of the variation in our sample, with strong variation over time.
    Keywords: CDS, spillover, sovereign debt, systemic risk, SVAR, identification by heteroskedasticity
    JEL: C58 G01 G18 G21
    Date: 2021
  25. By: Tzougas, George; Jeong, Himchan
    Abstract: This article presents the Exponential–Generalized Inverse Gaussian regression model with varying dispersion and shape. The EGIG is a general distribution family which, under the adopted modelling framework, can provide the appropriate level of flexibility to fit moderate costs with high frequencies and heavy-tailed claim sizes, as they both represent significant proportions of the total loss in non-life insurance. The model’s implementation is illustrated by a real data application which involves fitting claim size data from a European motor insurer. The maximum likelihood estimation of the model parameters is achieved through a novel Expectation Maximization (EM)-type algorithm that is computationally tractable and is demonstrated to perform satisfactorily.
    Keywords: Exponential–Generalized Inverse Gaussian Distribution; EM Algorithm; regression models for the mean; dispersion and shape parameters; non-life insurance; heavy-tailed losses
    JEL: C1
    Date: 2021–01–08
  26. By: Xavier Gabaix; Ralph S. J. Koijen
    Abstract: We propose a new way to construct instruments in a broad class of economic environments: “granular instrumental variables” (GIVs). In the economies we study, a few large firms, industries or countries account for an important share of economic activity. As the idiosyncratic shocks from these large players affect aggregate outcomes, they are valid and often powerful instruments. We provide a methodology to extract idiosyncratic shocks from the data in order to create GIVs, which are size-weighted sums of idiosyncratic shocks. These GIVs allow us to then estimate parameters of interest, including causal elasticities and multipliers. We first illustrate the idea in a basic supply and demand framework: we achieve a novel identification of both supply and demand elasticities based on idiosyncratic shocks to either supply or demand. We then show how the procedure can be enriched to work in many situations. We provide illustrations of the procedure with two applications. First, we measure how “sovereign yield shocks” transmit across countries in the Eurozone. Second, we estimate short-term supply and demand multipliers and elasticities in the oil market. Our estimates match existing ones that use more complex and labor-intensive (e.g., narrative) methods. We sketch how GIVs could be useful to estimate a host of other causal parameters in economics.
    JEL: C01 E0 F0 G0
    Date: 2020–12
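In the simplest symmetric setting, a GIV reduces to the difference between a size-weighted and an equal-weighted average of unit-level outcomes, which isolates the size-weighted sum of idiosyncratic shocks. A minimal sketch of that construction (the one-factor data-generating process and all variable names are illustrative assumptions, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 10, 500
S = rng.dirichlet(np.ones(N) * 0.3)   # size weights: a few units dominate
E = np.ones(N) / N                    # equal weights

eta = rng.normal(size=T)              # common (aggregate) shock
u = rng.normal(size=(N, T))           # idiosyncratic shocks
y = eta + u                           # unit-level outcomes / growth rates

# Sweep out the common component via the equal-weighted cross-sectional mean,
# then form the GIV as the size-weighted sum of the remaining idiosyncratic part.
u_hat = y - E @ y
z = S @ u_hat                         # GIV: size-weighted sum of idiosyncratic shocks

# Equivalently: size-weighted average minus equal-weighted average of y.
z_alt = S @ y - E @ y
assert np.allclose(z, z_alt)
```

The equivalence holds because the size weights sum to one, so the common shock cancels in the difference; the large units' idiosyncratic shocks survive and can serve as instruments.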
  27. By: Bora Kim
    Abstract: Randomized experiments have become a standard tool in economics. In analyzing randomized experiments, the traditional approach has been based on the Stable Unit Treatment Value Assumption (SUTVA; Rubin), which dictates that there is no interference between individuals. However, the SUTVA assumption fails to hold in many applications due to social interaction, general equilibrium, and/or externality effects. While much progress has been made in relaxing the SUTVA assumption, most of this literature has only considered a setting with perfect compliance to treatment assignment. In practice, however, noncompliance, where the actual treatment receipt differs from the assignment to treatment, occurs frequently. In this paper, we study causal effects in randomized experiments with network interference and noncompliance. Spillovers are allowed to occur at both the treatment choice stage and the outcome realization stage. In particular, we explicitly model treatment choices of agents as a binary game of incomplete information where resulting equilibrium treatment choice probabilities affect outcomes of interest. Outcomes are further characterized by a random coefficient model to allow for general unobserved heterogeneity in the causal effects. After defining our causal parameters of interest, we propose a simple control function estimator and derive its asymptotic properties under large-network asymptotics. We apply our methods to the randomized subsidy program of Dupas, where we find evidence of spillover effects on both short-run and long-run adoption of insecticide-treated bed nets. Finally, we illustrate the usefulness of our methods by analyzing the impact of counterfactual subsidy policies.
    Date: 2020–12
  28. By: Matthew A. Masten; Alexandre Poirier; Linqi Zhang
    Abstract: This paper provides a set of methods for quantifying the robustness of treatment effects estimated using the unconfoundedness assumption (also known as selection on observables or conditional independence). Specifically, we estimate and do inference on bounds on various treatment effect parameters, like the average treatment effect (ATE) and the average effect of treatment on the treated (ATT), under nonparametric relaxations of the unconfoundedness assumption indexed by a scalar sensitivity parameter c. These relaxations allow for limited selection on unobservables, depending on the value of c. For large enough c, these bounds equal the no assumptions bounds. Using a non-standard bootstrap method, we show how to construct confidence bands for these bound functions which are uniform over all values of c. We illustrate these methods with an empirical application to effects of the National Supported Work Demonstration program. We implement these methods in a companion Stata module for easy use in practice.
    Date: 2020–12
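The uniform-over-c confidence bands described above can be illustrated schematically: evaluate the estimated bound functions on a grid of sensitivity values c, then bootstrap the supremum of their deviation over the entire grid to get a single critical value that covers all c at once. The bound function below is a deliberately crude placeholder, not the paper's estimator; only the sup-over-the-grid bootstrap logic is the point:

```python
import numpy as np

rng = np.random.default_rng(1)

def bound_hat(data, c_grid):
    """Placeholder bound function: a point estimate widened by c (illustrative only)."""
    est = data.mean()
    return est - c_grid, est + c_grid   # lower/upper bounds as functions of c

data = rng.normal(loc=1.0, size=200)
c_grid = np.linspace(0.0, 1.0, 21)
lb, ub = bound_hat(data, c_grid)

# Bootstrap the sup-deviation over the whole c grid: this is what makes
# the resulting band uniform in c rather than pointwise.
B = 500
sup_dev = np.empty(B)
for b in range(B):
    boot = rng.choice(data, size=data.size, replace=True)
    lb_b, ub_b = bound_hat(boot, c_grid)
    sup_dev[b] = max(np.abs(lb_b - lb).max(), np.abs(ub_b - ub).max())

crit = np.quantile(sup_dev, 0.95)               # uniform critical value
band_lower, band_upper = lb - crit, ub + crit   # band valid simultaneously for all c
```

Because the critical value is a quantile of a supremum over the grid, the same band is valid simultaneously at every c, which is what lets a reader trace out the whole sensitivity curve with correct coverage.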
  29. By: Manganelli, Simone
    Abstract: A decision maker tests whether the gradient of the loss function evaluated at a judgmental decision is zero. If the test does not reject, the action is the judgmental decision. If the test rejects, the action sets the gradient equal to the boundary of the rejection region. This statistical decision rule is admissible and conditions on the sample realization. The confidence level reflects the decision maker’s aversion to statistical uncertainty. The decision rule is applied to a problem of asset allocation.
    JEL: C1 C11 C12 C13 D81
    Keywords: conditional inference, confidence intervals, hypothesis testing, statistical decision theory
    Date: 2021–01
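The decision rule admits a simple one-dimensional illustration. Under quadratic loss the gradient at an action is proportional to the action minus the mean, so the gradient test is a t-test of the judgmental decision against the sample mean: keep the judgment if the test does not reject, otherwise move to the boundary of the rejection region. This is a stylized sketch under assumed quadratic loss, not the paper's general treatment:

```python
import numpy as np

def decision(sample, a_judgmental, z=1.959963984540054):
    """Sketch for quadratic loss L(a) = E[(a - theta)^2]:
    the gradient at a is 2*(a - theta), estimated by 2*(a - xbar)."""
    xbar = sample.mean()
    se = sample.std(ddof=1) / np.sqrt(sample.size)
    t = (a_judgmental - xbar) / se      # t-stat for H0: gradient = 0 at the judgment
    if abs(t) <= z:
        return a_judgmental             # cannot reject: keep the judgmental decision
    # Reject: take the closest action on the boundary of the rejection region,
    # i.e. the point whose gradient t-stat equals +/- z.
    return xbar + np.sign(t) * z * se

rng = np.random.default_rng(2)
x = rng.normal(loc=0.0, scale=1.0, size=100)
a = decision(x, a_judgmental=0.05)      # apply the rule to a judgmental decision of 0.05
```

The confidence level (through z) governs how strong the data must be before the judgment is overridden, matching the abstract's reading of it as aversion to statistical uncertainty.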
  30. By: Chirok Han
    Abstract: For a panel model considered by Abadie et al. (2010), the counterfactual outcomes constructed by Abadie et al., Hsiao et al. (2012), and Doudchenko and Imbens (2017) may all be confounded by uncontrolled heterogeneous trends. Based on exact matching on the trend predictors, I propose new methods of estimating the model-specific treatment effects, which are free from heterogeneous trends. When applied to Abadie et al.'s (2010) model and data, the new estimators suggest considerably smaller effects of California's tobacco control program.
    Date: 2020–12
  31. By: Kees Jan van Garderen; Noud van Giersbergen
    Abstract: Testing for mediation effects is empirically important and theoretically interesting. It is important in psychology, medicine, economics, accountancy, and marketing for instance, generating over 90,000 citations to a single key paper in the field. It also leads to a statistically interesting and long-standing problem that this paper solves. The no-mediation hypothesis, expressed as $H_{0}:\theta_{1}\theta_{2}=0$, defines a manifold that is non-regular at the origin, where rejection probabilities of standard tests are extremely low. We propose a general method for obtaining near-similar tests using a flexible $g$-function to bound the critical region. We prove that no similar test exists for mediation, but, using our new varying $g$-method, we obtain a test that is all but similar and easy to use in practice. We derive tight upper bounds to similar and nonsimilar power envelopes and derive an optimal test. We extend the test to higher dimensions and illustrate the results in a trade union sentiment application.
    Date: 2020–12
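The non-regularity at the origin is easy to see in the standard product-of-coefficients (Sobel-type) test, which is the benchmark such papers improve on. The simulation below, a sketch and not the authors' varying $g$-method, shows the standard test rejecting far less often than its nominal level when $\theta_1=\theta_2=0$:

```python
import numpy as np

rng = np.random.default_rng(3)

def sobel_t(x, m, y):
    """Product-of-coefficients statistic for H0: theta1*theta2 = 0 in
    m = theta1*x + e1,  y = theta2*m + e2  (no intercepts, for brevity)."""
    t1 = (x @ m) / (x @ x)                       # OLS slope of m on x
    r1 = m - t1 * x
    s1 = np.sqrt((r1 @ r1) / (x.size - 1) / (x @ x))
    t2 = (m @ y) / (m @ m)                       # OLS slope of y on m
    r2 = y - t2 * m
    s2 = np.sqrt((r2 @ r2) / (m.size - 1) / (m @ m))
    se = np.sqrt(t2**2 * s1**2 + t1**2 * s2**2)  # delta-method standard error
    return (t1 * t2) / se

# At the non-regular point theta1 = theta2 = 0 the test is severely undersized:
n, reps, rej = 200, 1000, 0
for _ in range(reps):
    x = rng.normal(size=n)
    m = rng.normal(size=n)   # theta1 = 0
    y = rng.normal(size=n)   # theta2 = 0
    rej += abs(sobel_t(x, m, y)) > 1.96
print(rej / reps)            # far below the nominal 0.05 at the origin
```

Intuitively, the statistic behaves like $Z_1 Z_2/\sqrt{Z_1^2+Z_2^2}$ at the origin, which requires both component t-statistics to be large before it can exceed a normal critical value; this is the extreme non-similarity the paper's bounded critical regions are designed to repair.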
  32. By: Jos\'e Vin\'icius de Miranda Cardoso; Jiaxi Ying; Daniel Perez Palomar
    Abstract: In the past two decades, the field of applied finance has tremendously benefited from graph theory. As a result, novel methods ranging from asset network estimation to hierarchical asset selection and portfolio allocation are now part of practitioners' toolboxes. In this paper, we investigate the fundamental problem of learning undirected graphical models under Laplacian structural constraints from the point of view of financial market time series data. In particular, we present natural justifications, supported by empirical evidence, for the usage of the Laplacian matrix as a model for the precision matrix of financial assets, while also establishing a direct link that reveals how Laplacian constraints are coupled to meaningful physical interpretations related to the market index factor and to conditional correlations between stocks. Those interpretations lead to a set of guidelines that practitioners should be aware of when estimating graphs in financial markets. In addition, we design numerical algorithms based on the alternating direction method of multipliers to learn undirected, weighted graphs that take into account stylized facts that are intrinsic to financial data, such as heavy tails and modularity. We illustrate how to leverage the learned graphs in practical scenarios such as stock time series clustering and foreign exchange network estimation. The proposed graph learning algorithms outperform the state-of-the-art methods in an extensive set of practical experiments. Furthermore, we obtain theoretical and empirical convergence results for the proposed algorithms. Along with the developed methodologies for graph learning in financial markets, we release an R package, called fingraph, accommodating the code and data to obtain all the experimental results.
    Date: 2020–12
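The central object here, a graph Laplacian serving as a precision-matrix model, can be made concrete with a toy construction. The sketch below (not the paper's ADMM algorithm, and not the fingraph API) builds a correlation-weighted graph for a few assets driven by a single market factor and verifies the defining Laplacian properties:

```python
import numpy as np

# Toy "returns" for 4 assets driven by one market factor plus noise, mimicking
# the market-index structure the abstract ties to Laplacian constraints.
rng = np.random.default_rng(4)
market = rng.normal(size=1000)
betas = np.array([0.9, 1.0, 1.1, 0.8])
returns = np.outer(betas, market) + 0.5 * rng.normal(size=(4, 1000))

# Weighted graph from pairwise correlations; Laplacian L = D - W.
corr = np.corrcoef(returns)
W = np.abs(corr) - np.eye(4)          # off-diagonal weights, zero self-loops
L = np.diag(W.sum(axis=1)) - W

# Defining Laplacian properties: symmetric, zero row sums, positive semidefinite.
assert np.allclose(L, L.T)
assert np.allclose(L @ np.ones(4), 0.0)
assert np.linalg.eigvalsh(L).min() > -1e-10
```

The zero-row-sum and positive-semidefiniteness constraints are exactly what make a Laplacian a restricted precision matrix, which is why estimating one from returns amounts to learning a graph among the assets.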

This nep-ecm issue is ©2021 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.