
on Econometrics 
By:  Gabriel Montes Rojas (Instituto Interdisciplinario de Economía Política de Buenos Aires - UBA - CONICET); Andrés Sebastián Mena (Instituto Superior de Estudios Sociales - CONICET) 
Abstract:  We propose two novel bootstrap density estimators based on the quantile variance and the quantile-mean covariance. We review previous developments on quantile-density estimation and asymptotic results in the literature that can be applied to this case. We conduct Monte Carlo simulations for different data-generating processes, sample sizes, and parameters. The estimators perform well in comparison to the benchmark nonparametric kernel density estimator. Some of the explored smoothing techniques present lower bias and mean integrated squared errors, which indicates that the proposed estimators are a promising strategy. 
Keywords:  Density Estimation, Quantile Variance, Quantile-Mean Covariance, Bootstrap 
JEL:  C13 C14 C15 C46 
URL:  http://d.repec.org/n?u=RePEc:ake:iiepdt:202050&r=all 
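The quantile-variance idea can be sketched numerically: for an i.i.d. sample, the asymptotic approximation Var(Q̂(τ)) ≈ τ(1−τ)/(n f(Q(τ))²) lets a bootstrap estimate of the quantile variance be inverted into a density estimate. The following is a minimal Python sketch, not the authors' implementation — the paper's smoothing refinements are omitted, and the function name is ours:

```python
import numpy as np

def quantile_variance_density(x, taus, n_boot=500, seed=0):
    """Bootstrap quantile-variance density sketch: inverts the asymptotic
    identity Var(Q_hat(tau)) ~ tau*(1-tau) / (n * f(Q(tau))**2)."""
    taus = np.asarray(taus, dtype=float)
    rng = np.random.default_rng(seed)
    n = len(x)
    boot_q = np.empty((n_boot, len(taus)))
    for b in range(n_boot):
        # quantiles of a bootstrap resample
        boot_q[b] = np.quantile(rng.choice(x, size=n, replace=True), taus)
    var_q = boot_q.var(axis=0)                 # bootstrap quantile variance
    points = np.quantile(x, taus)              # evaluation points Q_hat(tau)
    density = np.sqrt(taus * (1 - taus) / (n * var_q))
    return points, density
```

On a standard normal sample, the estimate at τ = 0.5 should be close to the true density at the median, φ(0) ≈ 0.399.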
By:  Wang, Xiaohu (Fudan University); Xiao, Weilin (Zhejiang University); Yu, Jun (School of Economics, Singapore Management University) 
Abstract:  This paper derives asymptotic properties of the least squares estimator of the autoregressive parameter in local-to-unity processes with errors being fractional Gaussian noises with Hurst parameter H. It is shown that the estimator is consistent when H ∈ (0, 1). Moreover, the rate of convergence is n when H ∈ [0.5, 1), and n^{2H} when H ∈ (0, 0.5). Furthermore, the limit distribution of the centered least squares estimator depends on H. When H = 0.5, the limit distribution is the same as that obtained in Phillips (1987a) for the local-to-unity model with errors for which the standard functional central limit theorem is applicable. When H > 0.5 or when H 
Keywords:  Least squares; Local to unity; Fractional Brownian motion; Fractional Ornstein-Uhlenbeck process 
JEL:  C22 
Date:  2020–12–23 
URL:  http://d.repec.org/n?u=RePEc:ris:smuesw:2020_027&r=all 
By:  Jayeeta Bhattacharya 
Abstract:  We study linear quantile regression models when regressors and/or the dependent variable are not directly observed but estimated in an initial first step and used in the second-step quantile regression for estimating the quantile parameters. This general class of generated quantile regression (GQR) covers various statistical applications, for instance, estimation of endogenous quantile regression models and triangular structural equation models, and some new relevant applications are discussed. We study the asymptotic distribution of the two-step estimator, which is challenging because of the presence of generated covariates and/or dependent variable in the nonsmooth quantile regression estimator. We employ techniques from empirical process theory to derive a uniform Bahadur expansion for the two-step estimator, which is used to establish the asymptotic results. We illustrate the performance of the GQR estimator through simulations and an empirical application based on auctions. 
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2012.13614&r=all 
By:  Sam Ouliaris; Adrian Pagan 
Abstract:  When sign restrictions are used in SVARs, impulse responses are only set-identified. If sign restrictions are given for just a single shock, the shocks may not be separated, and so the resulting structural equations can be unacceptable. Thus, in a supply-demand model, if signs are given only for the impulse responses to a demand shock, this may result in two supply curves being in the SVAR. One needs to find the identified set so that this effect is excluded. Granziera et al.'s (2018) frequentist approach to inference potentially suffers from this issue. One also has to recognize that the identified set should be adjusted so that it produces responses to the same size shock. Finally, because researchers are often unwilling to set out sign restrictions to separate all shocks, we describe how this can be done with a SVAR/VAR system rather than a straight SVAR. 
Keywords:  SVAR, Sign Restrictions, Identified Set 
JEL:  E37 C51 C52 
Date:  2020–11 
URL:  http://d.repec.org/n?u=RePEc:een:camaaa:2020101&r=all 
By:  Mochen Yang; Edward McFowland III; Gordon Burtch; Gediminas Adomavicius 
Abstract:  Combining machine learning with econometric analysis is becoming increasingly prevalent in both research and practice. A common empirical strategy involves the application of predictive modeling techniques to 'mine' variables of interest from available data, followed by the inclusion of those variables into an econometric framework, with the objective of estimating causal effects. Recent work highlights that, because the predictions from machine learning models are inevitably imperfect, econometric analyses based on the predicted variables are likely to suffer from bias due to measurement error. We propose a novel approach to mitigate these biases, leveraging the ensemble learning technique known as the random forest. We propose employing random forest not just for prediction, but also for generating instrumental variables to address the measurement error embedded in the prediction. The random forest algorithm performs best when composed of a set of trees that are individually accurate in their predictions, yet which also make 'different' mistakes, i.e., have weakly correlated prediction errors. A key observation is that these properties are closely related to the relevance and exclusion requirements of valid instrumental variables. We design a data-driven procedure to select tuples of individual trees from a random forest, in which one tree serves as the endogenous covariate and the other trees serve as its instruments. Simulation experiments demonstrate the efficacy of the proposed approach in mitigating estimation biases and its superior performance over three alternative methods for bias correction. 
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2012.10790&r=all 
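The tree-tuple idea can be illustrated in a toy simulation: one tree's prediction serves as the endogenous (error-ridden) covariate and the average of the remaining trees as its instrument. This is a simplified sketch and not the authors' data-driven tuple-selection procedure — the data-generating process (true effect 2.0) and the split-sample device are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 4000
features = rng.normal(size=(n, 5))
latent = features[:, 0] + features[:, 1]      # variable to be "mined" from the data
target = latent + rng.normal(size=n)          # noisy label seen by the learner
y = 2.0 * latent + rng.normal(size=n)         # outcome; true causal effect = 2

# Fit on one half, estimate on the other, so tree errors behave like measurement error.
rf = RandomForestRegressor(n_estimators=100, max_features=2, random_state=0)
rf.fit(features[: n // 2], target[: n // 2])
preds = np.stack([tree.predict(features[n // 2:]) for tree in rf.estimators_], axis=1)
y2 = y[n // 2:]

x_hat = preds[:, 0]                 # one tree: the endogenous covariate
z = preds[:, 1:].mean(axis=1)       # remaining trees: its instrument

beta_ols = np.cov(x_hat, y2)[0, 1] / np.var(x_hat, ddof=1)   # attenuated by error
beta_iv = np.cov(z, y2)[0, 1] / np.cov(z, x_hat)[0, 1]       # simple IV correction
```

In this setup the OLS slope on the noisy prediction is attenuated toward zero, and the IV estimate recovers most of that attenuation because the trees' prediction errors are only weakly correlated.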
By:  Sayar Karmakar; Marek Chudy; Wei Biao Wu 
Abstract:  Accurate forecasting is a fundamental focus in the econometric time-series literature. Often, practitioners and policy makers want to predict outcomes over an entire future time horizon instead of just a single $k$-step-ahead prediction. These series, apart from their own possible nonlinear dependence, are often also influenced by many external predictors. In this paper, we construct prediction intervals of time-aggregated forecasts in a high-dimensional regression setting. Our approach is based on quantiles of residuals obtained by the popular LASSO routine. We allow for general heavy-tailed, long-memory, and nonlinear stationary error processes and stochastic predictors. Through a series of systematically arranged consistency results we provide theoretical guarantees of our proposed quantile-based method in all of these scenarios. After validating our approach using simulations, we also propose a novel bootstrap-based method that can boost the coverage of the theoretical intervals. Finally, analyzing the EPEX Spot data, we construct prediction intervals for hourly electricity prices over horizons spanning 1–7 weeks and contrast them to selected Bayesian and bootstrap interval forecasts. 
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2012.08223&r=all 
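A stripped-down version of the residual-quantile construction might look as follows. The √h scaling used here to aggregate marginal residual quantiles over the horizon is a crude illustrative assumption, not the paper's actual interval construction:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p, h = 400, 20, 24                   # training size, predictors, aggregation horizon
X = rng.normal(size=(n + h, p))
beta = np.zeros(p)
beta[:3] = [1.0, -0.5, 0.25]            # sparse truth
y = X @ beta + rng.standard_t(df=5, size=n + h)   # heavy-tailed errors

lasso = Lasso(alpha=0.1).fit(X[:n], y[:n])
resid = y[:n] - lasso.predict(X[:n])    # in-sample LASSO residuals

# Point forecast and interval for the h-step time-aggregated (summed) outcome.
point = lasso.predict(X[n:]).sum()
lo = point + np.sqrt(h) * np.quantile(resid, 0.05)
hi = point + np.sqrt(h) * np.quantile(resid, 0.95)
```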
By:  Gregory Cox 
Abstract:  When parameters are weakly identified, bounds on the parameters may provide a valuable source of information. Existing weak identification estimation and inference results are unable to combine weak identification with bounds. Within a class of minimum distance models, this paper proposes identification-robust inference that incorporates information from bounds when parameters are weakly identified. The inference is based on limit theory that combines weak identification theory (Andrews and Cheng (2012)) with parameter-on-the-boundary theory (Andrews (1999)) via a new argmax theorem. This paper characterizes weak identification in low-dimensional factor models (due to weak factors) and demonstrates the role of the bounds and identification-robust inference in two example factor models. This paper also demonstrates the identification-robust inference in an empirical application: estimating the effects of a randomized intervention on parental investments in children, where parental investments are modeled by a factor model. 
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2012.11222&r=all 
By:  Timothy B. Armstrong; Michal Kolesár; Soonwoo Kwon 
Abstract:  We consider inference on a regression coefficient under a constraint on the magnitude of the control coefficients. We show that a class of estimators based on an auxiliary regularized regression of the regressor of interest on control variables exactly solves a tradeoff between worst-case bias and variance. We derive "bias-aware" confidence intervals (CIs) based on these estimators, which take into account possible bias when forming the critical value. We show that these estimators and CIs are near-optimal in finite samples for mean squared error and CI length. Our finite-sample results are based on an idealized setting with normal regression errors with known homoskedastic variance, and we provide conditions for asymptotic validity with unknown and possibly heteroskedastic error distribution. Focusing on the case where the constraint on the magnitude of control coefficients is based on an $\ell_p$ norm ($p\ge 1$), we derive rates of convergence for optimal estimators and CIs under high-dimensional asymptotics that allow the number of regressors to increase more quickly than the number of observations. 
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2012.14823&r=all 
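The "bias-aware" critical value admits a compact numerical sketch: for a worst-case bias of t standard deviations, the half-length (in sd units) solves P(|Z + t| > c) = α for Z ~ N(0,1). The function name and root-finding bracket below are our choices, not the authors' code:

```python
from scipy.optimize import brentq
from scipy.stats import norm

def cv_bias_aware(t, alpha=0.05):
    """Critical value c with P(|Z + t| > c) = alpha, Z ~ N(0,1): a CI of
    half-length c*sd stays valid under bias of magnitude up to t*sd."""
    coverage_gap = lambda c: norm.cdf(c - t) - norm.cdf(-c - t) - (1 - alpha)
    return brentq(coverage_gap, 1e-8, t + 10.0)
```

With no bias (t = 0) this reduces to the usual 1.96; for large t it approaches the one-sided adjustment t + 1.645.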
By:  Gabriel Montes Rojas (Instituto Interdisciplinario de Economía Política de Buenos Aires - UBA - CONICET) 
Abstract:  This paper develops a subgraph network random effects error components model for network data regression models. In particular, it allows for edge- and triangle-specific components, which serve as a basal model for modeling network effects. It then evaluates the potential effects of ignoring network effects in the estimation of the variance-covariance matrix. It also proposes consistent estimators of the variance components and Lagrange Multiplier tests for evaluating the appropriate model of random components in networks. Monte Carlo simulations show that the tests have good performance in finite samples. The proposed tests are applied to the Call interbank market in Argentina. 
Keywords:  Networks, Clusters, Moulton Factor 
JEL:  C2 C12 
URL:  http://d.repec.org/n?u=RePEc:ake:iiepdt:201944&r=all 
By:  Marc Hallin 
Abstract:  Unlike the real line, the real space, in dimension $d\geq 2$, is not canonically ordered. As a consequence, extending to a multivariate context fundamental univariate statistical tools such as quantiles, signs, and ranks is anything but obvious. Tentative definitions have been proposed in the literature but do not enjoy the basic properties (e.g., distribution-freeness of ranks, their independence with respect to the order statistic, their independence with respect to signs, etc.) they are expected to satisfy. Based on measure transportation ideas, new concepts of distribution and quantile functions, ranks, and signs have been proposed recently that, unlike previous attempts, do satisfy these properties. These ranks, signs, and quantiles have been used, quite successfully, in several inference problems and have triggered, in a short span of time, a number of applications: fully distribution-free testing for multiple-output regression, MANOVA, and VAR models, R-estimation of VARMA parameters, distribution-free testing for vector independence, multiple-output quantile regression, nonlinear independent component analysis, etc. 
Keywords:  Measure transportation; statistical decision theory 
JEL:  C44 
Date:  2021–01 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2013/318373&r=all 
By:  Pacifico, Antonio 
Abstract:  The paper suggests and develops a computational approach to improve hierarchical fuzzy clustering time-series analysis when accounting for high-dimensionality and noise problems in dynamic data. A Robust Weighted Distance measure between pairs of sets of AutoRegressive Integrated Moving Average models is used. It is robust because Bayesian Model Selection methodology is performed with a set of conjugate informative priors in order to discover the most probable set of clusters capturing different dynamics and interconnections among time-varying data, and weighted because each time-series is 'adjusted' by its own Posterior Model Size distribution in order to group dynamic data objects into 'ad hoc' homogeneous clusters. Monte Carlo methods are used to compute exact posterior probabilities for each chosen cluster and thus avoid the problem of increasing the overall probability of errors that plagues classical statistical methods based on significance tests. Empirical and simulated examples describe the functioning and the performance of the procedure. Comparisons with related works and possible extensions of the methodology to jointly deal with endogeneity issues and misspecified dynamics in high-dimensional multi-country setups are also discussed. 
Keywords:  Distance Measures; Fuzzy Clustering; ARIMA Time-Series; Bayesian Model Selection; MCMC Integrations. 
JEL:  C1 C52 C61 
Date:  2020 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:104379&r=all 
By:  Billy Ferguson; Brad Ross 
Abstract:  We propose a sensitivity analysis for Synthetic Control (SC) treatment effect estimates to interrogate the assumption that the SC method is well-specified, namely that choosing weights to minimize pre-treatment prediction error yields accurate predictions of counterfactual post-treatment outcomes. Our data-driven procedure recovers the set of treatment effects consistent with the assumption that the misspecification error incurred by the SC method is at most the observable misspecification error incurred when using the SC estimator to predict the outcomes of some control unit. We show that under one definition of misspecification error, our procedure provides a simple, geometric motivation for comparing the estimated treatment effect to the distribution of placebo residuals to assess estimate credibility. When applied to several canonical studies that use the SC method, our procedure demonstrates that the signs of most of those results are relatively robust. 
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2012.15367&r=all 
By:  Philip Erickson 
Abstract:  The conditional logit model is a standard workhorse approach to estimating customers' product feature preferences using choice data. Using these models at scale, however, can result in numerical imprecision and optimization failure due to a combination of large-valued covariates and the softmax probability function. Standard machine learning approaches alleviate these concerns by applying a normalization scheme to the matrix of covariates, scaling all values to sit within some interval (such as the unit simplex). While this type of normalization is innocuous when using models for prediction, it has the side effect of perturbing the estimated coefficients, which are necessary for researchers interested in inference. This paper shows that, for two common classes of normalizers, designated scaling and centered scaling, the data-generating non-scaled model parameters can be analytically recovered along with their asymptotic distributions. The paper also shows the numerical performance of the analytical results using an example of a scaling normalizer. 
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2012.08022&r=all 
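The coefficient-recovery logic for a scaling normalizer can be illustrated with plain logistic regression standing in for the conditional logit: if the covariates are scaled as x̃ = x/s, the original-scale slopes are recovered as β̃/s. A sketch under illustrative data (the scale values and coefficients are our assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 2)) * np.array([1.0, 50.0])   # second covariate large-valued
logit = 0.8 * X[:, 0] - 0.03 * X[:, 1]
y = (rng.random(2000) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# "Designated scaling": fit on x/s for numerical stability, then undo the
# scaling analytically: beta = beta_tilde / s.
s = X.std(axis=0)
fit_scaled = LogisticRegression(C=1e6, max_iter=5000).fit(X / s, y)
beta_recovered = fit_scaled.coef_.ravel() / s

fit_raw = LogisticRegression(C=1e6, max_iter=5000).fit(X, y)  # same model, no scaling
```

The recovered slopes coincide with those from fitting the unscaled model directly, up to optimizer tolerance.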
By:  Dongcheng Zhang; Kunpeng Zhang 
Abstract:  Existing weighting methods for treatment effect estimation are often built upon the idea of propensity scores or covariate balance. They usually impose strong assumptions on the treatment assignment or outcome model to obtain unbiased estimation, such as linearity or specific functional forms, which easily leads to the major drawback of model misspecification. In this paper, we aim to alleviate these issues by developing a distribution learning-based weighting method. We first learn the true underlying distribution of covariates conditioned on treatment assignment, then leverage the ratio of covariates' density in the treatment group to that of the control group as the weight for estimating treatment effects. Specifically, we propose to approximate the distribution of covariates in both treatment and control groups through invertible transformations via change of variables. To demonstrate the superiority, robustness, and generalizability of our method, we conduct extensive experiments using synthetic and real data. From the experiment results, we find that our method for estimating the average treatment effect on the treated (ATT) with observational data outperforms several cutting-edge weighting-only benchmarking methods, and it maintains its advantage under a doubly-robust estimation framework that combines weighting with some advanced outcome modeling methods. 
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2012.13805&r=all 
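The density-ratio weighting step can be sketched with a kernel density estimator standing in for the paper's invertible-transformation (change-of-variables) approach — an intentional simplification, with a simulated confounded design where the true ATT is 2:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(size=n)                                # confounder
t = rng.random(n) < 1.0 / (1.0 + np.exp(-x))          # treatment depends on x
y = x + 2.0 * t + rng.normal(size=n)                  # true ATT = 2

# Weight each control unit by the covariate density ratio f(x|T=1)/f(x|T=0),
# which reweights controls to match the treated covariate distribution.
f1, f0 = gaussian_kde(x[t]), gaussian_kde(x[~t])
w = f1(x[~t]) / f0(x[~t])

att = y[t].mean() - np.average(y[~t], weights=w)      # weighted ATT estimate
naive = y[t].mean() - y[~t].mean()                    # confounded comparison
```

The weighted contrast is close to the true ATT, while the naive difference in means is biased upward by the confounder.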
By:  Eiji Kurozumi; Anton Skrobotov; Alexey Tsarev 
Abstract:  This paper is devoted to testing for an explosive bubble under time-varying nonstationary volatility. Because the limiting distribution of the seminal Phillips et al. (2011) test depends on the variance function and usually requires a bootstrap implementation under heteroskedasticity, we construct the test based on a deformation of the time domain. The proposed test is asymptotically pivotal under the null hypothesis and its limiting distribution coincides with that of the standard test under homoskedasticity, so that the test does not require computationally intensive methods for inference. Appealing finite sample properties are demonstrated through Monte Carlo simulations. An empirical application demonstrates that the upsurge behavior of cryptocurrency time series in the middle of the sample is partially explained by the volatility change. 
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2012.13937&r=all 
By:  Victor Chernozhukov; Whitney Newey; Rahul Singh; Vasilis Syrgkanis 
Abstract:  We provide an adversarial approach to estimating Riesz representers of linear functionals within arbitrary function spaces. We prove oracle inequalities based on the localized Rademacher complexity of the function space used to approximate the Riesz representer and the approximation error. These inequalities imply fast finite sample mean-squared-error rates for many function spaces of interest, such as high-dimensional sparse linear functions, neural networks and reproducing kernel Hilbert spaces. Our approach offers a new way of estimating Riesz representers with a plethora of recently introduced machine learning techniques. We show how our estimator can be used in the context of debiasing structural/causal parameters in semiparametric models, for automated orthogonalization of moment equations and for estimating the stochastic discount factor in the context of asset pricing. 
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2101.00009&r=all 
By:  Bryan S. Graham; Fengshi Niu; James L. Powell 
Abstract:  Let $i=1,\ldots,N$ index a simple random sample of units drawn from some large population. For each unit we observe the vector of regressors $X_{i}$ and, for each of the $N\left(N-1\right)$ ordered pairs of units, an outcome $Y_{ij}$. The outcomes $Y_{ij}$ and $Y_{kl}$ are independent if their indices are disjoint, but dependent otherwise (i.e., "dyadically dependent"). Let $W_{ij}=\left(X_{i}',X_{j}'\right)'$; using the sampled data we seek to construct a nonparametric estimate of the mean regression function $g\left(W_{ij}\right)\overset{def}{\equiv}\mathbb{E}\left[\left.Y_{ij}\right|X_{i},X_{j}\right]$. We present two sets of results. First, we calculate lower bounds on the minimax risk for estimating the regression function at (i) a point and (ii) under the infinity norm. Second, we calculate (i) pointwise and (ii) uniform convergence rates for the dyadic analog of the familiar Nadaraya-Watson (NW) kernel regression estimator. We show that the NW kernel regression estimator achieves the optimal rates suggested by our risk bounds when an appropriate bandwidth sequence is chosen. This optimal rate differs from the one available under iid data: the effective sample size is smaller and $d_W=\mathrm{dim}(W_{ij})$ influences the rate differently. 
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2012.08444&r=all 
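The dyadic analog of the Nadaraya-Watson estimator is simply a kernel-weighted average of the pair outcomes Y_ij over ordered pairs i ≠ j. A minimal sketch with one scalar regressor per unit and unit effects inducing the dyadic dependence (the data-generating process is our illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 200
x = rng.uniform(-1.0, 1.0, size=N)              # one regressor per unit
v = 0.2 * rng.normal(size=N)                    # unit effects -> dyadic dependence
Y = np.outer(x, x) + v[:, None] + v[None, :] + 0.2 * rng.normal(size=(N, N))
# here g(w1, w2) = E[Y_ij | x_i = w1, x_j = w2] = w1 * w2

def dyadic_nw(x, Y, w1, w2, h):
    """Dyadic Nadaraya-Watson: Gaussian-kernel-weighted mean of Y_ij, i != j."""
    i, j = np.where(~np.eye(len(x), dtype=bool))        # all ordered pairs i != j
    k = np.exp(-0.5 * ((x[i] - w1) ** 2 + (x[j] - w2) ** 2) / h ** 2)
    return np.sum(k * Y[i, j]) / np.sum(k)
```

At (0.5, 0.5) the estimate should be close to the true value g(0.5, 0.5) = 0.25.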
By:  Wuthrich, Kaspar 
Keywords:  Instrumental variables, Conditional and unconditional quantile treatment effects, Distribution regression, Exchangeable bootstrap, Econometrics, Statistics, Applied Economics 
Date:  2019–06–01 
URL:  http://d.repec.org/n?u=RePEc:cdl:ucsdec:qt99n9197q&r=all 
By:  D\'esir\'e K\'edagni; Lixiong Li; Isma\"el Mourifi\'e 
Abstract:  In many set-identified models, it is difficult to obtain a tractable characterization of the identified set; therefore, empirical works often construct confidence regions based on an outer set of the identified set. Because an outer set is always a superset of the identified set, this practice is often viewed as conservative yet valid. However, this paper shows that, when the model is refuted by the data, a nonempty outer set can deliver results that conflict with those of another outer set derived from the same underlying model structure, so that results based on outer sets can be misleading in the presence of misspecification. We provide a sufficient condition for the existence of discordant outer sets which covers models characterized by intersection bounds and the Artstein (1983) inequalities. Furthermore, we develop a method to salvage misspecified models. We consider all minimum relaxations of a refuted model which restore data-consistency. We find that the union of the identified sets of these minimum relaxations is misspecification-robust and has a new and intuitive empirical interpretation. Although this paper primarily focuses on discrete relaxations, our new interpretation also applies to continuous relaxations. 
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2012.11679&r=all 
By:  Kajal Lahiri; Huaming Peng; Xuguang Sheng 
Abstract:  We have argued that from the standpoint of a policy maker who has access to a number of expert forecasts, the uncertainty of a combined forecast should be interpreted as that of a typical forecaster randomly drawn from the pool. With a standard factor decomposition of a panel of forecasts, we show that the uncertainty of a typical forecaster can be expressed as the disagreement among the forecasters plus the volatility of the common shock. Using new statistics to test for the homogeneity of idiosyncratic errors under the joint limits with both T and n approaching infinity simultaneously, we find that some previously used measures significantly underestimate the conceptually correct benchmark forecast uncertainty. 
Keywords:  disagreement, forecast combination, panel data, uncertainty 
JEL:  C12 C33 E37 
Date:  2020 
URL:  http://d.repec.org/n?u=RePEc:ces:ceswps:_8810&r=all 
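The decomposition described above — the typical forecaster's uncertainty equals the disagreement among forecasters plus the variance of the common (consensus) error — is an exact algebraic identity, which a short simulation makes concrete (the error magnitudes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
T, n = 40, 12                                  # time periods, forecasters
truth = rng.normal(size=T)
common = 0.5 * rng.normal(size=(T, 1))         # shock hitting every forecaster alike
idio = rng.normal(size=(T, n))                 # idiosyncratic errors
forecasts = truth[:, None] + common + idio

errors = forecasts - truth[:, None]
uncertainty = (errors ** 2).mean()             # typical forecaster's squared error
disagreement = ((forecasts - forecasts.mean(1, keepdims=True)) ** 2).mean()
consensus_err_var = ((forecasts.mean(1) - truth) ** 2).mean()   # common-shock part
# identity: uncertainty = disagreement + consensus error variance
```

The identity holds period by period because the cross term between the consensus error and the deviations from consensus averages to zero.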
By:  Lopes Moreira Da Veiga, María Helena; Rue, Havard; Marín Díazaraque, Juan Miguel; Zea Bermudez, P. De 
Abstract:  The aim of the paper is to implement the integrated nested Laplace approximation (INLA), known to be very fast and efficient, for a threshold stochastic volatility (TSV) model. INLA replaces MCMC simulations with accurate deterministic approximations. We use proper although not very informative priors and Penalizing Complexity (PC) priors. The simulation results favor the use of PC priors, especially when the sample size varies from small to moderate. For these sample sizes, they provide more accurate estimates of the model's parameters, but as sample size increases both types of priors lead to reliable estimates of the parameters. We also validate the estimation method in-sample and out-of-sample by applying it to six series of returns, including stock market, commodity, and cryptocurrency returns, and by forecasting their one-day-ahead volatilities. Our empirical results support that the TSV model does a good job in forecasting the one-day-ahead volatility of stock market and gold returns but faces difficulties when the volatility of returns is extreme, which occurs in the case of cryptocurrencies. 
Keywords:  Threshold Stochastic Volatility Model; PC Priors; INLA 
JEL:  C58 C52 C32 C13 
Date:  2021–01–27 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:31804&r=all 
By:  Justyna Wróblewska 
Abstract:  The paper aims at developing the Bayesian seasonally cointegrated model for quarterly data. We propose the prior structure, derive the set of full conditional posterior distributions, and propose the sampling scheme. The identification of cointegrating spaces is obtained via orthonormality restrictions imposed on vectors spanning them. In the case of annual frequency, the cointegrating vectors are complex, which should be taken into account when identifying them. The point estimation of the cointegrating spaces is also discussed. The presented methods are illustrated by a simulation experiment and are employed in the analysis of money and prices in the Polish economy. 
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2012.14820&r=all 
By:  Mehmet Caner; Kfir Eliaz 
Abstract:  We consider situations where a user feeds her attributes to a machine learning method that tries to predict her best option based on a random sample of other users. The predictor is incentivecompatible if the user has no incentive to misreport her covariates. Focusing on the popular Lasso estimation technique, we borrow tools from highdimensional statistics to characterize sufficient conditions that ensure that Lasso is incentive compatible in large samples. In particular, we show that incentive compatibility is achieved if the tuning parameter is kept above some threshold. We present simulations that illustrate how this can be done in practice. 
Date:  2021–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2101.01144&r=all 
By:  Lukas Boeckelmann; Arthur Stalla-Bourdillon 
Abstract:  We propose a novel approach to quantify spillovers on financial markets based on a structural version of the Diebold-Yilmaz framework. Key to our approach is a SVAR-GARCH model that is statistically identified by heteroskedasticity, economically identified by maximum shock contribution, and that allows for time-varying forecast error variance decompositions. We analyze credit risk spillovers between EZ sovereign and bank CDS. Methodologically, we find the model to better match economic narratives compared with common spillover approaches and to be more reactive than models relying on rolling-window estimations. We find, on average, spillovers to explain 37% of the variation in our sample, amid strong variation of the latter over time. 
Keywords:  CDS, spillover, sovereign debt, systemic risk, SVAR, identification by heteroskedasticity 
JEL:  C58 G01 G18 G21 
Date:  2021 
URL:  http://d.repec.org/n?u=RePEc:bfr:banfra:798&r=all 
By:  Tzougas, George; Jeong, Himchan 
Abstract:  This article presents the Exponential–Generalized Inverse Gaussian (EGIG) regression model with varying dispersion and shape. The EGIG is a general distribution family which, under the adopted modelling framework, can provide the appropriate level of flexibility to fit moderate costs with high frequencies and heavy-tailed claim sizes, as they both represent significant proportions of the total loss in non-life insurance. The model's implementation is illustrated by a real data application which involves fitting claim size data from a European motor insurer. The maximum likelihood estimation of the model parameters is achieved through a novel Expectation-Maximization (EM) type algorithm that is computationally tractable and is demonstrated to perform satisfactorily. 
Keywords:  Exponential–Generalized Inverse Gaussian Distribution; EM Algorithm; regression models for the mean; dispersion and shape parameters; non-life insurance; heavy-tailed losses 
JEL:  C1 
Date:  2021–01–08 
URL:  http://d.repec.org/n?u=RePEc:ehl:lserod:108210&r=all 
By:  Xavier Gabaix; Ralph S. J. Koijen 
Abstract:  We propose a new way to construct instruments in a broad class of economic environments: “granular instrumental variables” (GIVs). In the economies we study, a few large firms, industries, or countries account for an important share of economic activity. As the idiosyncratic shocks from these large players affect aggregate outcomes, they are valid and often powerful instruments. We provide a methodology to extract idiosyncratic shocks from the data in order to create GIVs, which are size-weighted sums of idiosyncratic shocks. These GIVs allow us to then estimate parameters of interest, including causal elasticities and multipliers. We first illustrate the idea in a basic supply and demand framework: we achieve a novel identification of both supply and demand elasticities based on idiosyncratic shocks to either supply or demand. We then show how the procedure can be enriched to work in many situations. We provide illustrations of the procedure with two applications. First, we measure how “sovereign yield shocks” transmit across countries in the Eurozone. Second, we estimate short-term supply and demand multipliers and elasticities in the oil market. Our estimates match existing ones that use more complex and labor-intensive (e.g., narrative) methods. We sketch how GIVs could be useful to estimate a host of other causal parameters in economics. 
JEL:  C01 E0 F0 G0 
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:28204&r=all 
By:  Bora Kim 
Abstract:  Randomized experiments have become a standard tool in economics. In analyzing randomized experiments, the traditional approach has been based on the Stable Unit Treatment Value Assumption (SUTVA; Rubin), which dictates that there is no interference between individuals. However, the SUTVA assumption fails to hold in many applications due to social interaction, general equilibrium, and/or externality effects. While much progress has been made in relaxing the SUTVA assumption, most of this literature has only considered a setting with perfect compliance with treatment assignment. In practice, however, noncompliance occurs frequently, where the actual treatment receipt differs from the assignment to treatment. In this paper, we study causal effects in randomized experiments with network interference and noncompliance. Spillovers are allowed to occur at both the treatment choice stage and the outcome realization stage. In particular, we explicitly model treatment choices of agents as a binary game of incomplete information, where the resulting equilibrium treatment choice probabilities affect outcomes of interest. Outcomes are further characterized by a random coefficient model to allow for general unobserved heterogeneity in the causal effects. After defining our causal parameters of interest, we propose a simple control function estimator and derive its asymptotic properties under large-network asymptotics. We apply our methods to the randomized subsidy program of Dupas, where we find evidence of spillover effects on both short-run and long-run adoption of insecticide-treated bed nets. Finally, we illustrate the usefulness of our methods by analyzing the impact of counterfactual subsidy policies. 
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2012.13710&r=all 
By:  Matthew A. Masten; Alexandre Poirier; Linqi Zhang 
Abstract:  This paper provides a set of methods for quantifying the robustness of treatment effects estimated using the unconfoundedness assumption (also known as selection on observables or conditional independence). Specifically, we estimate and do inference on bounds on various treatment effect parameters, like the average treatment effect (ATE) and the average effect of treatment on the treated (ATT), under nonparametric relaxations of the unconfoundedness assumption indexed by a scalar sensitivity parameter c. These relaxations allow for limited selection on unobservables, depending on the value of c. For large enough c, these bounds equal the no-assumptions bounds. Using a nonstandard bootstrap method, we show how to construct confidence bands for these bound functions which are uniform over all values of c. We illustrate these methods with an empirical application to effects of the National Supported Work Demonstration program. We implement these methods in a companion Stata module for easy use in practice. 
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2012.15716&r=all 
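A stylized picture of bounds indexed by a sensitivity parameter c: the interval collapses to the unconfoundedness point estimate at c = 0 and reaches the no-assumptions (Manski-type) bounds for large c. The data-generating process and the linear interpolation below are illustrative assumptions only, not the paper's nonparametric bound functions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.binomial(1, 0.5, n)                   # one binary covariate
d = rng.binomial(1, 0.3 + 0.4 * x)            # treatment, selected on x only
y = np.clip(0.2 + 0.3 * d + 0.2 * x + 0.1 * rng.standard_normal(n), 0, 1)

# ATE point estimate under unconfoundedness: regression adjustment on x
# (equal weights across x because P(x = 1) = 0.5 in this toy design)
ate = np.mean([y[(d == 1) & (x == v)].mean() - y[(d == 0) & (x == v)].mean()
               for v in (0, 1)])

# No-assumptions bounds for an outcome known to lie in [0, 1]
p1 = d.mean()
lo = y[d == 1].mean() * p1 - (y[d == 0].mean() * (1 - p1) + p1)
hi = y[d == 1].mean() * p1 + (1 - p1) - y[d == 0].mean() * (1 - p1)

def bounds(c, c_max=1.0):
    """Stylized bound function: point estimate at c = 0, no-assumptions
    bounds at c >= c_max. Linear in c purely for illustration."""
    w = min(c / c_max, 1.0)
    return ((1 - w) * ate + w * lo, (1 - w) * ate + w * hi)

print(bounds(0.0), bounds(0.5), bounds(1.0))
```

The paper's uniform confidence bands would wrap such a bound function over all values of c at once, which is what the nonstandard bootstrap delivers.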
By:  Manganelli, Simone 
Abstract:  A decision maker tests whether the gradient of the loss function evaluated at a judgmental decision is zero. If the test does not reject, the action is the judgmental decision. If the test rejects, the action sets the gradient equal to the boundary of the rejection region. This statistical decision rule is admissible and conditions on the sample realization. The confidence level reflects the decision maker’s aversion to statistical uncertainty. The decision rule is applied to a problem of asset allocation. 
Keywords:  conditional inference, confidence intervals, hypothesis testing, statistical decision theory 
JEL:  C1 C11 C12 C13 D81 
Date:  2021–01 
URL:  http://d.repec.org/n?u=RePEc:ecb:ecbwps:20212512&r=all 
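For a quadratic loss the decision rule described above reduces to a one-line test on the sample mean. The sketch below is an illustrative special case under that assumed loss, not the paper's general framework:

```python
import numpy as np

def gradient_decision(a0, y, z_crit=1.96):
    """Gradient-test decision rule for quadratic loss L(a) = (a - mu)^2 / 2.

    The sample gradient at the judgmental decision a0 is a0 - ybar, with
    standard error s / sqrt(n). Keep a0 if the zero-gradient hypothesis
    is not rejected; otherwise move the action to the nearest boundary
    of the acceptance region, so the gradient sits on the boundary of
    the rejection region. (Sketch under an assumed loss, not the paper.)"""
    n = len(y)
    ybar, se = y.mean(), y.std(ddof=1) / np.sqrt(n)
    z = (a0 - ybar) / se
    if abs(z) <= z_crit:
        return a0                              # judgment survives the test
    return ybar + np.sign(z) * z_crit * se     # gradient set to the boundary

rng = np.random.default_rng(2)
y = rng.normal(1.0, 1.0, 100)                  # e.g. a sample of excess returns
print(gradient_decision(5.0, y))
```

Raising z_crit (a higher confidence level) keeps the judgmental decision over a wider range of data, which is how aversion to statistical uncertainty enters the rule.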
By:  Chirok Han 
Abstract:  For the panel model considered by Abadie et al. (2010), the counterfactual outcomes constructed by Abadie et al., Hsiao et al. (2012), and Doudchenko and Imbens (2017) may all be confounded by uncontrolled heterogeneous trends. Based on exact matching on the trend predictors, I propose new methods of estimating the model-specific treatment effects, which are free from heterogeneous trends. When applied to Abadie et al.'s (2010) model and data, the new estimators suggest considerably smaller effects of California's tobacco control program. 
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2012.08988&r=all 
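The exact-matching idea can be illustrated with a toy panel in which control units differ in a discrete trend predictor; the data-generating process below is invented for illustration and is not the paper's estimator or data:

```python
import numpy as np

rng = np.random.default_rng(3)
m = 200                                       # control units
slope = rng.choice([0.0, 0.5, 1.0], m)        # discrete trend predictor
post_c = 2.0 + 3 * slope + 0.1 * rng.standard_normal(m)   # post-period outcome

slope_t = 1.0                                 # treated unit's trend predictor
post_t = 2.0 + 3 * slope_t + 1.0              # true treatment effect = 1.0

# Exact matching: compare only against controls whose trend predictor
# equals the treated unit's, removing the heterogeneous-trend confound
effect = post_t - post_c[slope == slope_t].mean()
naive = post_t - post_c.mean()                # confounded by trend heterogeneity
print(effect, naive)
```

The naive comparison mixes units on different trends and overstates the effect, while the exactly matched comparison recovers it.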
By:  Kees Jan van Garderen; Noud van Giersbergen 
Abstract:  Testing for mediation effects is empirically important and theoretically interesting. It is important in psychology, medicine, economics, accountancy, and marketing, for instance, generating over 90,000 citations to a single key paper in the field. It also leads to a statistically interesting and long-standing problem that this paper solves. The no-mediation hypothesis, expressed as $H_{0}:\theta_{1}\theta_{2}=0$, defines a manifold that is non-regular at the origin, where rejection probabilities of standard tests are extremely low. We propose a general method for obtaining near-similar tests using a flexible $g$-function to bound the critical region. We prove that no similar test exists for mediation, but using our new varying-$g$ method we obtain a test that is all but similar and easy to use in practice. We derive tight upper bounds on the similar and non-similar power envelopes and obtain an optimal test. We extend the test to higher dimensions and illustrate the results in a trade union sentiment application. 
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2012.11342&r=all 
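The non-regularity at the origin is easy to see by simulation: the classical Sobel test for $H_{0}:\theta_{1}\theta_{2}=0$ rejects far less often than its nominal 5% level when both coefficients are zero. The Monte Carlo below uses the standard Sobel statistic in a toy mediation model, not the paper's new test:

```python
import numpy as np

def sobel_reject(theta1, theta2, n=200, reps=2000, seed=4):
    """Monte Carlo rejection rate of the Sobel test for H0: theta1*theta2 = 0.

    Toy mediation model: M = theta1*X + u,  Y = theta2*M + v."""
    rng = np.random.default_rng(seed)
    rej = 0
    for _ in range(reps):
        x = rng.standard_normal(n)
        m = theta1 * x + rng.standard_normal(n)
        y = theta2 * m + rng.standard_normal(n)
        # OLS slopes (no intercept; all variables are mean zero) and SEs
        b1 = (x @ m) / (x @ x)
        s1 = np.sqrt(((m - b1 * x) ** 2).sum() / (n - 1) / (x @ x))
        b2 = (m @ y) / (m @ m)
        s2 = np.sqrt(((y - b2 * m) ** 2).sum() / (n - 1) / (m @ m))
        z = b1 * b2 / np.sqrt(b1**2 * s2**2 + b2**2 * s1**2)  # Sobel statistic
        rej += abs(z) > 1.96
    return rej / reps

# At the origin theta1 = theta2 = 0 the test is severely undersized
print(sobel_reject(0.0, 0.0))
```

The severe undersizing at the origin is exactly the rejection-probability collapse the abstract refers to, and the gap a near-similar test is designed to close.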
By:  José Vinícius de Miranda Cardoso; Jiaxi Ying; Daniel Perez Palomar 
Abstract:  In the past two decades, the field of applied finance has benefited tremendously from graph theory. As a result, novel methods ranging from asset network estimation to hierarchical asset selection and portfolio allocation are now part of practitioners' toolboxes. In this paper, we investigate the fundamental problem of learning undirected graphical models under Laplacian structural constraints from the point of view of financial market time series data. In particular, we present natural justifications, supported by empirical evidence, for the usage of the Laplacian matrix as a model for the precision matrix of financial assets, while also establishing a direct link that reveals how Laplacian constraints are coupled to meaningful physical interpretations related to the market index factor and to conditional correlations between stocks. Those interpretations lead to a set of guidelines that practitioners should be aware of when estimating graphs in financial markets. In addition, we design numerical algorithms based on the alternating direction method of multipliers to learn undirected, weighted graphs that take into account stylized facts that are intrinsic to financial data, such as heavy tails and modularity. We illustrate how to leverage the learned graphs in practical scenarios such as stock time series clustering and foreign exchange network estimation. The proposed graph learning algorithms outperform state-of-the-art methods in an extensive set of practical experiments. Furthermore, we obtain theoretical and empirical convergence results for the proposed algorithms. Along with the developed methodologies for graph learning in financial markets, we release an R package, called fingraph, containing the code and data needed to reproduce all the experimental results. 
Date:  2020–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2012.15410&r=all 
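The coupling between Laplacian constraints and conditional correlations noted in the abstract can be checked directly: when the precision matrix is a (regularized) graph Laplacian, every implied partial correlation is nonnegative. The toy four-asset graph and the ridge term below are illustrative assumptions, not output of the fingraph package:

```python
import numpy as np

# Weighted adjacency of a hypothetical four-asset market graph
W = np.array([[0, 2, 1, 0],
              [2, 0, 0, 1],
              [1, 0, 0, 3],
              [0, 1, 3, 0]], float)
L = np.diag(W.sum(1)) - W                     # combinatorial graph Laplacian

# A Laplacian is singular (rows sum to zero), so add a small ridge term
# as a crude stand-in for the market-index factor adjustment
Theta = L + 0.1 * np.eye(4)                   # precision matrix

# Partial (conditional) correlations implied by a precision matrix:
# rho_ij = -Theta_ij / sqrt(Theta_ii * Theta_jj)
dinv = 1.0 / np.sqrt(np.diag(Theta))
partial = -Theta * np.outer(dinv, dinv)
np.fill_diagonal(partial, 1.0)
print(partial.round(3))
```

Because the off-diagonal Laplacian entries are minus the (nonnegative) edge weights, the sign constraint on conditional correlations between stocks follows immediately, which is one of the physical interpretations the paper formalizes.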