
New Economics Papers on Econometrics 
By:  Christopher R. Dobronyi; Fu Ouyang; Thomas Tao Yang 
Abstract:  This paper revisits the identification and estimation of a class of semiparametric (distribution-free) panel data binary choice models with lagged dependent variables, exogenous covariates, and entity fixed effects. Using an "identification at infinity" argument, we show that the model is point identified in the presence of a free-varying continuous covariate. In contrast to the celebrated Honoré and Kyriazidou (2000), our method permits time trends of any form and does not suffer from the "curse of dimensionality". We propose an easily implementable conditional maximum score estimator. The asymptotic properties of the proposed estimator are fully characterized. A small-scale Monte Carlo study demonstrates that our approach performs satisfactorily in finite samples. We illustrate the usefulness of our method by presenting an empirical application to enrollment in private hospital insurance using the HILDA survey data. 
Date:  2023–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2301.09379&r=ecm 
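The maximum score objective underlying the estimator above can be sketched in a few lines. This is a hedged, numpy-only illustration of a plain cross-sectional (Manski-style) maximum score fit on a hypothetical DGP; the paper's panel conditioning, lagged dependent variables, and asymptotic theory are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical cross-sectional DGP; the conditional (panel) version in the
# paper shares this objective shape but conditions on lagged outcomes.
n = 2000
X = rng.normal(size=(n, 2))
beta = np.array([1.0, -0.5])
beta /= np.linalg.norm(beta)          # scale normalization: ||beta|| = 1
y = (X @ beta + rng.logistic(size=n) > 0).astype(float)

def score(b):
    """Maximum score objective: agreement between sign(x'b) and 2y - 1."""
    return np.mean((2 * y - 1) * np.sign(X @ b))

# The objective is a step function of b, so we use a grid search over the
# unit circle instead of gradient-based optimization.
angles = np.linspace(0.0, 2 * np.pi, 721)
candidates = np.column_stack([np.cos(angles), np.sin(angles)])
beta_hat = candidates[np.argmax([score(b) for b in candidates])]
```

In higher dimensions the grid search would be replaced by a smoothed objective or a specialized optimizer; the step-function objective is the reason maximum score estimators converge slower than root-n.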
By:  Adam Baybutt; Manu Navjeevan 
Abstract:  Plausible identification of conditional average treatment effects (CATEs) may rely on controlling for a large number of variables to account for confounding factors. In these high-dimensional settings, estimation of the CATE requires estimating first-stage models whose consistency relies on correctly specifying their parametric forms. While doubly robust estimators of the CATE exist, inference procedures based on the second-stage CATE estimator are not doubly robust. Using the popular augmented inverse propensity weighting signal, we propose an estimator for the CATE whose resulting Wald-type confidence intervals are doubly robust. We assume a logistic model for the propensity score and a linear model for the outcome regression, and estimate the parameters of these models using an $\ell_1$ (Lasso) penalty to address the high-dimensional covariates. Our proposed estimator remains consistent at the nonparametric rate and our proposed pointwise and uniform confidence intervals remain asymptotically valid even if one of the logistic propensity score or linear outcome regression models is misspecified. These results are obtained under similar conditions to existing analyses in the high-dimensional and nonparametric literatures. 
Date:  2023–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2301.06283&r=ecm 
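The augmented inverse propensity weighting (AIPW) signal mentioned in the abstract can be sketched directly. This is a minimal numpy-only illustration on a hypothetical DGP: it uses unpenalized low-dimensional first-stage fits rather than the authors' Lasso-penalized models, and it averages the signal into an ATE rather than reproducing their doubly robust CATE inference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical DGP: logistic propensity, linear outcome, true effect = 1.
n, p = 2000, 3
X = rng.normal(size=(n, p))
e_true = 1.0 / (1.0 + np.exp(-0.5 * X[:, 0]))
D = rng.binomial(1, e_true)
Y = 1.0 * D + X @ np.array([1.0, -0.5, 0.25]) + rng.normal(size=n)

def fit_logit(X, y, iters=25):
    """Unpenalized logistic regression via Newton-Raphson (with intercept)."""
    Z = np.column_stack([np.ones(len(X)), X])
    b = np.zeros(Z.shape[1])
    for _ in range(iters):
        p_hat = 1.0 / (1.0 + np.exp(-Z @ b))
        H = Z.T @ (Z * (p_hat * (1 - p_hat))[:, None])
        b = b + np.linalg.solve(H + 1e-8 * np.eye(len(b)), Z.T @ (y - p_hat))
    return b

def fit_ols(X, y):
    Z = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(Z, y, rcond=None)[0]

# First stage: propensity score and per-arm outcome regressions.
Z = np.column_stack([np.ones(n), X])
e_hat = 1.0 / (1.0 + np.exp(-Z @ fit_logit(X, D)))
m1 = Z @ fit_ols(X[D == 1], Y[D == 1])
m0 = Z @ fit_ols(X[D == 0], Y[D == 0])

# AIPW signal: its conditional mean given X is the CATE; its mean, the ATE.
psi = m1 - m0 + D * (Y - m1) / e_hat - (1 - D) * (Y - m0) / (1 - e_hat)
ate = psi.mean()
```

Regressing `psi` on covariates (the paper's second stage) would recover CATEs; the double robustness comes from the fact that `psi` has the right mean if either `e_hat` or the outcome regressions are correct.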
By:  Javier Alejo; Antonio F. Galvao; Julian MartinezIriarte; Gabriel MontesRojas 
Abstract:  This paper develops a semiparametric procedure for estimation of unconditional quantile partial effects using quantile regression coefficients. The main result is based on the fact that, for continuous covariates, unconditional quantile effects are a weighted average of conditional ones at particular quantile levels that depend on the covariates. We propose a two-step estimator for the unconditional effects: in the first step one estimates a structural quantile regression model, and in the second step a nonparametric regression is applied to the first-step coefficients. We establish the asymptotic properties of the estimator, namely consistency and asymptotic normality. Monte Carlo simulations show numerical evidence that the estimator has very good finite sample performance and is robust to the selection of bandwidth and kernel. To illustrate the proposed method, we study the canonical application of Engel's curve, i.e. food expenditures as a share of income. 
Date:  2023–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2301.07241&r=ecm 
By:  Andreas Hagemann 
Abstract:  I introduce a generic method for inference on entire quantile and regression quantile processes in the presence of a finite number of large and arbitrarily heterogeneous clusters. The method asymptotically controls size by generating statistics that exhibit enough distributional symmetry such that randomization tests can be applied. The randomization test does not require ex-ante matching of clusters, is free of user-chosen parameters, and performs well at conventional significance levels with as few as five clusters. The method tests standard (non-sharp) hypotheses and can even be asymptotically similar in empirically relevant situations. The main focus of the paper is inference on quantile treatment effects but the method applies more broadly. Numerical and empirical examples are provided. 
Date:  2023–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2301.04687&r=ecm 
By:  Francesco Cordoni; Nicolas Doremus; Alessio Moneta 
Abstract:  We propose a statistical identification procedure for structural vector autoregressive (VAR) models that present a nonlinear dependence (at least) at the contemporaneous level. By applying and adapting results from the literature on causal discovery with continuous additive noise models to structural VAR analysis, we show that a large class of structural VAR models is identifiable. We spell out these specific conditions and propose a scheme for the estimation of structural impulse response functions in a nonlinear setting. We assess the performance of this scheme in a simulation experiment. Finally, we apply it in a study on the effects of monetary policy on the economy. 
Keywords:  Structural VAR models; Causal Discovery; Nonlinearity; Additive Noise Models; Impulse response functions. 
Date:  2023–01–27 
URL:  http://d.repec.org/n?u=RePEc:ssa:lemwps:2023/07&r=ecm 
By:  Bing Su; Fukang Zhu; Ke Zhu 
Abstract:  The spatial dependence in mean has been well studied by plenty of models in a large strand of literature; however, the investigation of spatial dependence in variance lags significantly behind. The existing models for spatial dependence in variance are scarce, with neither probabilistic structure nor statistical inference procedures being explored. To circumvent this deficiency, this paper proposes a new generalized logarithmic spatial heteroscedasticity model with exogenous variables (denoted the log-SHE model) to study the spatial dependence in variance. For the log-SHE model, its spatial near-epoch dependence (NED) property is investigated, and a systematic statistical inference procedure is provided, including the maximum likelihood and generalized method of moments estimators, the Wald, Lagrange multiplier, and likelihood-ratio-type tests for model parameter constraints, and the overidentification test for model diagnostic checking. Using the tool of spatial NED, the asymptotics of all proposed estimators and tests are established under regular conditions. The usefulness of the proposed methodology is illustrated by simulation results and a real data example on house selling prices. 
Date:  2023–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2301.06658&r=ecm 
By:  Jonathan Proctor; Tamma Carleton; Sandy Sum 
Abstract:  Remotely sensed measurements and other machine learning predictions are increasingly used in place of direct observations in empirical analyses. Errors in such measures may bias parameter estimation, but it remains unclear how large such biases are or how to correct for them. We leverage a new benchmark dataset providing co-located ground truth observations and remotely sensed measurements for multiple variables across the contiguous U.S. to show that the common practice of using remotely sensed measurements without correction leads to biased parameter point estimates and standard errors across a diversity of empirical settings. More than three-quarters of the 95% confidence intervals we estimate using remotely sensed measurements do not contain the true coefficient of interest. These biases result from both classical measurement error and more structured measurement error, which we find is common in machine-learning-based remotely sensed measurements. We show that multiple imputation, a standard statistical imputation technique so far untested in this setting, effectively reduces bias and improves statistical coverage with only minor reductions in power in both simple linear regression and panel fixed effects frameworks. Our results demonstrate that multiple imputation is a generalizable and easily implementable method for correcting parameter estimates relying on remotely sensed variables. 
JEL:  C18 C45 C80 Q0 
Date:  2023–01 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:30861&r=ecm 
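The multiple imputation idea can be sketched in a stylized version of the setting: a noisy "remotely sensed" proxy everywhere, ground truth in a small subsample. This is a hedged numpy-only illustration with a hypothetical DGP; it uses a simplified (not fully proper) imputation step and is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: the true regressor x is observed only in a small
# "ground truth" subsample; elsewhere we see a noisy proxy w = x + u
# (classical measurement error), which attenuates a naive regression.
n, n_lab = 5000, 500
x = rng.normal(size=n)
w = x + rng.normal(scale=0.7, size=n)
y = 2.0 * x + rng.normal(size=n)
labeled = np.arange(n) < n_lab

# Imputation model fit on the labeled subsample. The outcome y must enter
# the imputation model; imputing x from w alone would leave the
# attenuation bias intact.
A = np.column_stack([np.ones(n_lab), w[labeled], y[labeled]])
coef, res, *_ = np.linalg.lstsq(A, x[labeled], rcond=None)
s2 = res[0] / (n_lab - 3)

# Multiple imputation: M completed datasets, estimate, pool (Rubin's rules).
# (Fully proper MI would also draw the imputation coefficients from their
# posterior; that refinement is skipped for brevity.)
M, betas, variances = 20, [], []
for _ in range(M):
    x_imp = x.copy()
    mu = coef[0] + coef[1] * w[~labeled] + coef[2] * y[~labeled]
    x_imp[~labeled] = mu + rng.normal(scale=np.sqrt(s2), size=n - n_lab)
    Z = np.column_stack([np.ones(n), x_imp])
    b, r, *_ = np.linalg.lstsq(Z, y, rcond=None)
    betas.append(b[1])
    variances.append((r[0] / (n - 2)) * np.linalg.inv(Z.T @ Z)[1, 1])

beta_mi = float(np.mean(betas))
# Rubin's rules: total variance = within + (1 + 1/M) * between.
var_mi = np.mean(variances) + (1 + 1 / M) * np.var(betas, ddof=1)
beta_naive = np.linalg.lstsq(np.column_stack([np.ones(n), w]), y,
                             rcond=None)[0][1]
```

Here `beta_naive` is attenuated toward zero while `beta_mi` recovers the true coefficient of 2, and `var_mi` propagates the imputation uncertainty into the standard error.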
By:  Hentschel, Manuel (University of Geneva); Engelke, Sebastian (University of Geneva); Segers, Johan (Université catholique de Louvain, LIDAM/ISBA, Belgium) 
Abstract:  The severity of multivariate extreme events is driven by the dependence between the largest marginal observations. The Hüsler–Reiss distribution is a versatile model for this extremal dependence, and it is usually parameterized by a variogram matrix. In order to represent conditional independence relations and obtain sparse parameterizations, we introduce the novel Hüsler–Reiss precision matrix. Similarly to the Gaussian case, this matrix appears naturally in density representations of the Hüsler–Reiss Pareto distribution and encodes the extremal graphical structure through its zero pattern. For a given, arbitrary graph we prove the existence and uniqueness of the completion of a partially specified Hüsler–Reiss variogram matrix so that its precision matrix has zeros on nonedges in the graph. Using suitable estimators for the parameters on the edges, our theory provides the first consistent estimator of graph structured Hüsler–Reiss distributions. If the graph is unknown, our method can be combined with recent structure learning algorithms to jointly infer the graph and the corresponding parameter matrix. Based on our methodology, we propose new tools for statistical inference of sparse Hüsler–Reiss models and illustrate them on large flight delay data in the U.S. 
Keywords:  Extreme value analysis ; multivariate generalized Pareto distribution ; sparsity ; variogram 
Date:  2022–10–27 
URL:  http://d.repec.org/n?u=RePEc:aiz:louvad:2022032&r=ecm 
By:  Jizhou Liu 
Abstract:  This paper studies inference in two-stage randomized experiments with covariate-adaptive randomization. Here, by a two-stage randomized experiment, we mean one in which clusters (e.g., households, schools, or graph partitions) are first randomly assigned to different levels of treated fraction and then units within each treated cluster are randomly assigned to treatment or control according to its selected treated fraction; by covariate-adaptive randomization, we mean randomization schemes that first stratify according to baseline covariates and then assign treatment status so as to achieve "balance" within each stratum. We study estimation and inference under this design in two different asymptotic regimes, "small strata" and "large strata", which enable us to study a wide range of commonly used designs from the empirical literature. We establish conditions under which our estimators are consistent and asymptotically normal and construct consistent estimators of their corresponding asymptotic variances. Combining these results establishes the asymptotic validity of tests based on these estimators. We argue that ignoring covariate information at the design stage can lead to efficiency loss, and commonly used inference methods that ignore or improperly use covariate information can lead to either conservative or invalid inference. We then apply our results to study the optimal use of covariate information in two-stage designs, and show that a certain generalized matched-pair design achieves minimum asymptotic variance for each proposed estimator. A simulation study and empirical application confirm the practical relevance of our theoretical results. 
Date:  2023–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2301.09016&r=ecm 
By:  Miren AzkarateAskasua; Miguel Zerecero 
Abstract:  Estimations of quadratic forms in the parameters of linear models exhibit small-sample bias. The direct computation of a bias correction is not feasible when the number of covariates is large. We propose a bootstrap method for correcting this bias that accommodates different assumptions on the structure of the error term, including general heteroscedasticity and serial correlation. Our approach is suited to correct variance decompositions and the bias of multiple quadratic forms of the same linear model without increasing the computational cost. We show with Monte Carlo simulations that our bootstrap procedure is effective in correcting the bias and find that it is faster than other methods in the literature. Using administrative data for France, we apply our method by carrying out a variance decomposition of a linear model of log wages with person and firm fixed effects. We find that the person and firm effects are less important in explaining the variance of log wages after correcting for the bias, and, depending on the specification, their correlation becomes positive after the correction. 
Keywords:  Variance components, Many regressors, Fixed effects, Bias correction 
JEL:  C13 C23 C55 J30 J31 
Date:  2022–11 
URL:  http://d.repec.org/n?u=RePEc:bon:boncrc:crctr224_2022_376&r=ecm 
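The bias and its bootstrap correction are easy to see in a toy many-dummy model. This is a hedged numpy-only sketch with a hypothetical DGP: the quadratic form is the variance of estimated group fixed effects, and the correction is the generic residual-bootstrap rule theta_bc = 2*theta_hat - mean(theta*), not the authors' procedure for heteroscedasticity or serial correlation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical many-dummy design: the plug-in variance of estimated group
# fixed effects is a quadratic form in OLS coefficients and is biased
# upward by estimation noise in each group mean.
n_groups, per = 200, 5
n = n_groups * per
g = np.repeat(np.arange(n_groups), per)
alpha = rng.normal(size=n_groups)            # true group effects
y = alpha[g] + rng.normal(size=n)

def group_means(y):
    return np.bincount(g, weights=y) / per

theta_hat = group_means(y).var()             # biased plug-in quadratic form

# Residual bootstrap bias correction. Residuals are rescaled to undo the
# within-group degrees-of-freedom loss before resampling.
a_hat = group_means(y)
resid = (y - a_hat[g]) * np.sqrt(per / (per - 1))
B, boot = 200, []
for _ in range(B):
    y_star = a_hat[g] + rng.choice(resid, size=n, replace=True)
    boot.append(group_means(y_star).var())
theta_bc = 2 * theta_hat - np.mean(boot)     # bias-corrected estimate
```

The plug-in `theta_hat` overstates the dispersion of the fixed effects by roughly the sampling variance of a group mean; the bootstrap replicates that inflation and subtracts it off.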
By:  Beatrice Foroni; Luca Merlo; Lea Petrella 
Abstract:  In this paper we develop a linear expectile hidden Markov model for the analysis of cryptocurrency time series in a risk management framework. The proposed methodology allows us to focus on extreme returns and describe their temporal evolution by introducing time-dependent coefficients that evolve according to a latent discrete homogeneous Markov chain. As is common in the expectile literature, estimation of the model parameters is based on the asymmetric normal distribution. Maximum likelihood estimates are obtained via an Expectation-Maximization algorithm using efficient M-step update formulas for all parameters. We evaluate the introduced method on both artificial data under several experimental settings and real data, investigating the relationship between daily Bitcoin returns and major world market indices. 
Date:  2023–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2301.09722&r=ecm 
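The building block of an expectile model is the asymmetrically weighted squared loss. Below is a hedged numpy-only sketch of computing a sample tau-expectile as a weighted-mean fixed point, on hypothetical heavy-tailed "returns"; the latent Markov chain, time-dependent coefficients, and EM machinery of the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(7)

def expectile(x, tau, iters=100):
    """tau-expectile of a sample: minimizer of the asymmetric squared loss
    sum_i |tau - 1{x_i <= m}| * (x_i - m)^2, found as a fixed point of a
    weighted mean."""
    m = x.mean()
    for _ in range(iters):
        w = np.where(x > m, tau, 1.0 - tau)   # asymmetric weights
        m = np.sum(w * x) / np.sum(w)         # weighted-mean update
    return m

# Hypothetical heavy-tailed returns (Student-t), mimicking crypto data.
returns = rng.standard_t(df=4, size=50000) * 0.02

e_low = expectile(returns, 0.05)    # lower-tail (stressed) level
e_mid = expectile(returns, 0.50)    # the 0.5-expectile is the mean
e_high = expectile(returns, 0.95)   # upper-tail level
```

Expectiles are ordered in tau, and unlike quantiles they are determined by the whole tail, which is why they are popular for coherent risk measurement.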
By:  Chaohua Dong; Jiti Gao; Yundong Tu; Bin Peng 
Abstract:  Robust M-estimation uses loss functions such as least absolute deviation (LAD), quantile loss, and Huber's loss to construct its objective function, in order, for example, to eschew the impact of outliers; the difficulty in analysing the resultant estimators rests on the non-smoothness of these losses. Generalized functions have advantages over ordinary functions in several respects; in particular, they possess derivatives of any order. Generalized functions include locally integrable functions, the so-called regular generalized functions, while the so-called singular generalized functions (e.g. the Dirac delta function) can be obtained as limits of a sequence of sufficiently smooth functions, a so-called regular sequence in the generalized-function context. This makes it possible to use singular generalized functions through approximation. A significant contribution of this paper is to establish the convergence rate of the regular sequence to the non-smooth loss, which answers a call from the relevant literature. For parameter estimation where the objective function may be non-smooth, the paper first shows, as a general paradigm, how the generalized-function approach can be used to tackle non-smooth loss functions in a very simple model in Section 2. This approach is of general interest and applicability. We further use the approach in robust M-estimation for additive single-index cointegrating time series models, and the asymptotic theory is established for the proposed estimators. We evaluate the finite-sample performance of the proposed estimation method and theory with both simulated data and an empirical analysis of predictive regression of stock returns. 
Date:  2023–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2301.06631&r=ecm 
By:  Cristina Amado (NIPE/Center for Research in Economics and Management, University of Minho, Portugal; and CREATES and Aarhus University) 
Abstract:  Nonstationarity and outlying observations are commonly encountered in financial time series. It is thus expected that models are able to accommodate these stylized facts and the techniques used are suitable to specify such models. In this paper we relax the assumption of stationarity and consider the problem of detecting smooth changes in the unconditional variance in the presence of outliers. It is found by simulation that the misspecification test for constancy of the unconditional variance in GARCH models can be severely adversely affected in the presence of additive outliers. An outlier-robust specification procedure is also proposed to mitigate the effects of outliers for building multiplicative time-varying volatility models. The outlier-robust variant of the test is shown to perform better than the conventional test in terms of size and power. An application to commodity returns illustrates the usefulness of the robust specification procedure. 
Keywords:  Conditional heteroskedasticity; Testing parameter constancy; Model specification; Timevarying unconditional variance; Outliers. 
JEL:  C12 C32 C51 C52 
Date:  2022 
URL:  http://d.repec.org/n?u=RePEc:nip:nipewp:11/2022&r=ecm 
By:  Sylvia Frühwirth-Schnatter; Darjus Hosszejni; Hedibert Freitas Lopes 
Abstract:  Despite the popularity of factor models with sparse loading matrices, little attention has been given to formally address identifiability of these models beyond standard rotation-based identification such as the positive lower triangular (PLT) constraint. To fill this gap, we review the advantages of variance identification in sparse factor analysis and introduce the generalized lower triangular (GLT) structures. We show that the GLT assumption is an improvement over PLT without compromise: GLT is also unique but, unlike PLT, a non-restrictive assumption. Furthermore, we provide a simple counting rule for variance identification under GLT structures, and we demonstrate that within this model class the unknown number of common factors can be recovered in an exploratory factor analysis. Our methodology is illustrated for simulated data in the context of post-processing posterior draws in Bayesian sparse factor analysis. 
Date:  2023–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2301.06354&r=ecm 
By:  Marcus Roller; Daniel Steinberg 
Abstract:  Numerous quasi-experimental identification strategies making use of the difference-in-differences setup suffer from multiple treatments, which can be separated into sequential and simultaneous treatments. While a staggered difference-in-differences approach can be applied for causal inference under sequential treatments, the standard difference-in-differences approach is normally not applicable under simultaneous treatments. Accordingly, we present an adjusted difference-in-differences identification strategy that can neutralize the effects of additional treatments implemented simultaneously through the definition and specific composition of the control group and an amended common trend assumption. Although the adjusted difference-in-differences strategy identifies the average treatment effect on the treated, we also show that it is capable of identifying the average treatment effect under stronger common trend assumptions and the absence of interaction effects between the treatments. 
Keywords:  Econometrics, Semiparametric and Nonparametric Methods, Treatment Effect Models 
JEL:  C01 C14 C21 
Date:  2023–01 
URL:  http://d.repec.org/n?u=RePEc:rdv:wpaper:credresearchpaper41&r=ecm 
By:  Mikihito Nishi 
Abstract:  In this study, we propose a test for the coefficient randomness in autoregressive models where the autoregressive coefficient is local to unity, which is empirically relevant given the results of earlier studies. Under this specification, we theoretically analyze the effect of the correlation between the random coefficient and disturbance on tests' properties, which remains largely unexplored in the literature. Our analysis reveals that the correlation crucially affects the power of tests for coefficient randomness and that tests proposed by earlier studies can perform poorly when the degree of the correlation is moderate to large. The test we propose in this paper is designed to have a power function robust to the correlation. Because the asymptotic null distribution of our test statistic depends on the correlation $\psi$ between the disturbance and its square as earlier tests do, we also propose a modified version of the test statistic such that its asymptotic null distribution is free from the nuisance parameter $\psi$. The modified test is shown to have better power properties than existing ones in large and finite samples. 
Date:  2023–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2301.04853&r=ecm 
By:  Zhang, Xinyu; Tong, Howell 
Abstract:  Principal component analysis (PCA) is one of the most frequently used statistical tools in almost all branches of data science. However, like many other statistical tools, there is sometimes the risk of misuse or even abuse. In this paper, we highlight possible pitfalls in using the theoretical results of PCA based on the assumption of independent data when the data are time series. For the latter, we state with proof a central limit theorem of the eigenvalues and eigenvectors (loadings), give direct and bootstrap estimation of their asymptotic covariances, and assess their efficacy via simulation. Specifically, we pay attention to the proportion of variation, which decides the number of principal components (PCs), and the loadings, which help interpret the meaning of PCs. Our findings are that while the proportion of variation is quite robust to different dependence assumptions, the inference of PC loadings requires careful attention. We initiate and conclude our investigation with an empirical example on portfolio management, in which the PC loadings play a prominent role. It is given as a paradigm of correct usage of PCA for time series data. 
Keywords:  bootstrap; inference; limiting distribution; PCA; portfolio management; time series 
JEL:  C1 
Date:  2022–04–01 
URL:  http://d.repec.org/n?u=RePEc:ehl:lserod:113566&r=ecm 
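A dependence-aware bootstrap for PCA statistics can be sketched with a moving block bootstrap, which resamples blocks of consecutive rows to preserve serial dependence. This is a hedged numpy-only illustration on a hypothetical AR(1)-factor DGP, not the authors' estimator or proofs.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical DGP: an AR(1) factor drives four serially dependent series.
T, phi = 500, 0.7
f = np.zeros(T)
for t in range(1, T):
    f[t] = phi * f[t - 1] + rng.normal()
loadings = np.array([1.0, 0.8, 0.6, 0.4])
X = np.outer(f, loadings) + rng.normal(size=(T, 4))

def pve(X):
    """Proportion of variation explained by the first principal component."""
    vals = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    return vals[-1] / vals.sum()             # eigvalsh sorts ascending

def block_bootstrap(X, block=25, B=200):
    """Moving block bootstrap: stitch random blocks of rows together so the
    resampled series keeps its short-run serial dependence."""
    T = len(X)
    stats = []
    for _ in range(B):
        starts = rng.integers(0, T - block, size=T // block)
        idx = np.concatenate([np.arange(s, s + block) for s in starts])
        stats.append(pve(X[idx]))
    return np.array(stats)

boot = block_bootstrap(X)
ci = np.percentile(boot, [2.5, 97.5])        # bootstrap CI for the PVE
```

An i.i.d. bootstrap (block length 1) would be valid for the point estimate but can misstate the sampling variability under serial dependence, which is the pitfall the paper warns about.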
By:  St\'ephane Bonhomme; Kevin Dano; Bryan S. Graham 
Abstract:  We study identification in a binary choice panel data model with a single \emph{predetermined} binary covariate (i.e., a covariate sequentially exogenous conditional on lagged outcomes and covariates). The choice model is indexed by a scalar parameter $\theta$, whereas the distribution of unit-specific heterogeneity, as well as the feedback process that maps lagged outcomes into future covariate realizations, are left unrestricted. We provide a simple condition under which $\theta$ is never point-identified, no matter the number of time periods available. This condition is satisfied in most models, including the logit one. We also characterize the identified set of $\theta$ and show how to compute it using linear programming techniques. While $\theta$ is not generally point-identified, its identified set is informative in the examples we analyze numerically, suggesting that meaningful learning about $\theta$ is possible even in short panels with feedback. 
Date:  2023–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2301.05733&r=ecm 
By:  Tom Boot; Art\=uras Juodis 
Abstract:  It is customary to estimate error-in-variables models using higher-order moments of observables. This moments-based estimator is consistent only when the coefficient of the latent regressor is assumed to be nonzero. We develop a new estimator based on the divide-and-conquer principle that is consistent for any value of the coefficient of the latent regressor. In an application on the relation between investment, (mismeasured) Tobin's $q$ and cash flow, we find time periods in which the effect of Tobin's $q$ is not statistically different from zero. The implausibly large higher-order moment estimates in these periods disappear when using the proposed estimator. 
Date:  2023–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2301.04439&r=ecm 
By:  Arthur Charpentier; Emmanuel Flachaire; Ewen Gallic 
Abstract:  Many problems pose a question that can be formulated as a causal question: "what would have happened if...?" For example, "would the person have had surgery if he or she had been Black?" To address this kind of question, calculating an average treatment effect (ATE) is often uninformative, because one would like to know how much impact a variable (such as skin color) has on a specific individual, characterized by certain covariates; calculating a conditional ATE (CATE) seems more appropriate. In causal inference, the propensity score approach assumes that the treatment is influenced by x, a collection of covariates. Here, we take the dual view: doing an intervention, or changing the treatment (even just hypothetically, in a thought experiment, for example by asking what would have happened if a person had been Black), can have an impact on the values of x. We show that optimal transport allows us to change certain characteristics that are influenced by the variable whose effect we are trying to quantify. We propose a mutatis mutandis version of the CATE: in dimension one, the CATE is computed relative to a probability level, associated with the proportion of x (a single covariate) in the control population, by looking for the equivalent quantile in the test population. In higher dimensions, it is necessary to go through transport. An application is proposed on the impact of some variables on the probability of having an unnatural birth (the fact that the mother smokes, or that the mother is Black). 
Date:  2023–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2301.07755&r=ecm 
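In dimension one, the optimal transport map described above is exactly monotone quantile matching. Below is a hedged numpy-only sketch with hypothetical normal control and treated populations; the multivariate transport step and the CATE estimation from the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical univariate populations: a control covariate distribution
# and a treated one with a different location and scale.
x_control = rng.normal(loc=0.0, scale=1.0, size=10000)
x_treated = rng.normal(loc=1.0, scale=2.0, size=10000)

def transport(x0, source, target):
    """Univariate optimal transport: send x0 to the value at the same
    probability level (quantile) in the target population."""
    u = (source < x0).mean()        # empirical CDF level of x0 in source
    return np.quantile(target, u)   # matching quantile in target

# The counterfactual ("mutatis mutandis") covariate value for a control
# unit with x0 = 0.5. For these two normals the exact map is
# T(x) = 1 + 2 * x, so T(0.5) = 2.0.
x0 = 0.5
x0_star = transport(x0, x_control, x_treated)
```

The CATE "mutatis mutandis" idea is then to compare outcomes at x0 in the control group with outcomes at `x0_star`, rather than at the same raw covariate value, in the treated group.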
By:  Mastromarco, Camilla; Simar, Léopold (Université catholique de Louvain, LIDAM/ISBA, Belgium); Van Keilegom, Ingrid 
Abstract:  In production theory, conditional frontiers and conditional efficiency measures are a flexible and appealing approach to consider the role of environmental variables on the production process. Direct approaches estimate conditional distribution functions nonparametrically, requiring smoothing techniques and the use of selected bandwidths. The statistical literature provides ways to derive bandwidths of optimal order, e.g. by using least-squares cross-validation techniques. However, it has been shown that the resulting order may not be optimal when estimating the boundary of the distribution function. As a consequence, the direct approaches may suffer from some statistical instability. In this paper we suggest a fully nonparametric approach which avoids the problem of estimating these bandwidths, by eliminating in a first step the influence of the environmental factors on the inputs and the outputs. By doing this we produce "pure" inputs and outputs which allow us to estimate a "pure" measure of efficiency, more reliable for ranking the firms, since the influence of the external factors has been eliminated. This can be viewed as an extension of the use of location-scale models (implying some semiparametric structure) to fully nonparametric, non-separable models. We are also able to recover the frontier and efficiencies in original units. We describe the method and its statistical properties, and we show in some Monte Carlo simulations how our new method dominates the traditional direct approach. 
Keywords:  Nonparametric frontier models ; Environmental factors ; Conditional efficiency ; Robust estimation of frontiers 
JEL:  C13 C14 C49 
Date:  2022–11–01 
URL:  http://d.repec.org/n?u=RePEc:aiz:louvad:2022035&r=ecm 
By:  Michael Dueker; Laura E. Jackson; Michael T. Owyang; Martin Sola 
Abstract:  Smooth-transition autoregressive (STAR) models, competitors of Markov-switching models, are limited by an assumed time-invariant threshold level. We augment the STAR model with a time-varying threshold that can be interpreted as a "tipping level" where the mean and dynamics of the VAR shift. Thus, the time-varying latent threshold level serves as a demarcation between regimes. We show how to estimate the model in a Bayesian framework using a Metropolis step and an unscented Kalman filter proposal. To show how allowing time variation in the threshold can affect the results, we present two applications: a model of the natural rate of unemployment and a model of regime-dependent government spending. 
Keywords:  Regime switching, smooth-transition autoregressive model, unemployment, nonlinear models. 
JEL:  C22 E31 G12 
Date:  2022–12 
URL:  http://d.repec.org/n?u=RePEc:udt:wpecon:2022_04&r=ecm 
By:  Kenichiro Shiraya; Tomohisa Yamakami 
Abstract:  Copulas are used to construct joint distributions in many areas. In some problems, it is necessary to deal with correlation structures that are more complicated than the commonly known copulas. A finite order multivariate Hermite polynomial expansion, as an approximation of a joint density function, can handle complex correlation structures. However, it does not construct copulas because the density function can take negative values. In this study, we propose a method to construct a copula based on the finite sum of multivariate Hermite polynomial expansions by applying corrections to the joint density function. Furthermore, we apply this copula to estimate the volatility smile of cross currency pairs in the foreign exchange option market. This method can easily reproduce the volatility smile of cross currency pairs by appropriately adjusting the parameters and following the daily volatility fluctuations even if the higher-order parameters are fixed. In the numerical experiments, we compare the estimation results of the volatility smile of EURJPY with those of USDJPY and EURUSD for the proposed and other copulas, and show the validity of the proposed copula. 
Date:  2023–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2301.10044&r=ecm 
By:  Marco Duarte; Lorenzo Magnolfi; Mikkel S{\o}lvsten; Christopher Sullivan 
Abstract:  Evaluating policy in imperfectly competitive markets requires understanding firm behavior. While researchers test conduct via model selection and assessment, we present advantages of Rivers and Vuong (2002) (RV) model selection under misspecification. However, degeneracy of RV invalidates inference. With a novel definition of weak instruments for testing, we connect degeneracy to instrument strength, derive weak instrument properties of RV, and provide a diagnostic for weak instruments by extending the framework of Stock and Yogo (2005) to model selection. We test vertical conduct (Villas-Boas, 2007) using common instrument sets. Some are weak, providing no power. Strong instruments support manufacturers setting retail prices. 
Date:  2023–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2301.06720&r=ecm 
By:  Matias D. Cattaneo; Nicolas Idrobo; Rocio Titiunik 
Abstract:  This monograph, together with its accompanying first part Cattaneo, Idrobo and Titiunik (2020), collects and expands the instructional materials we prepared for more than 40 short courses and workshops on Regression Discontinuity (RD) methodology that we taught between 2014 and 2022. In this second monograph, we discuss several topics in RD methodology that build on and extend the analysis of RD designs introduced in Cattaneo, Idrobo and Titiunik (2020). Our first goal is to present an alternative RD conceptual framework based on local randomization ideas. This methodological approach can be useful in RD designs with discretely valued scores, and can also be used more broadly as a complement to the continuity-based approach in other settings. Then, employing both continuity-based and local randomization approaches, we extend the canonical Sharp RD design in multiple directions: fuzzy RD designs, RD designs with discrete scores, and multidimensional RD designs. The goal of our two-part monograph is purposely practical and hence we focus on the empirical analysis of RD designs. 
Date:  2023–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2301.08958&r=ecm 
By:  Karlson, Kristian Bernt; Ben Jann 
Abstract:  As sociologists are increasingly turning away from using odds ratios, reporting average marginal effects is becoming more popular. We aim to restore the use of odds ratios in sociological research by introducing marginal odds ratios. Unlike conventional odds ratios, marginal odds ratios are not affected by omitted covariates in arbitrary ways. Marginal odds ratios thus behave like average marginal effects but retain the relative effect interpretation of the odds ratio. We argue that marginal odds ratios are well suited for much sociological inquiry and should be reported as a complement to the reporting of average marginal effects. We define marginal odds ratios in terms of potential outcomes, show their close relationship to average marginal effects, and discuss their potential advantages over conventional odds ratios. We also briefly discuss how to estimate marginal odds ratios and present examples comparing marginal odds ratios to conventional odds ratios and average marginal effects. 
Keywords:  odds ratio, logit, logistic model, regression, marginal effects, average marginal effects, confounding, mediation 
JEL:  C01 C25 
Date:  2023–01–31 
URL:  http://d.repec.org/n?u=RePEc:bss:wpaper:45&r=ecm 
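The distinction between conditional and marginal odds ratios (non-collapsibility) can be shown with known probabilities, no estimation needed. This is a hedged numpy-only sketch with hypothetical logit coefficients and a randomized binary d; the estimation methods discussed in the paper are skipped.

```python
import numpy as np

rng = np.random.default_rng(8)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical logit model P(y=1 | d, x) = sigmoid(0.5 + 1.0*d + x) with a
# covariate x independent of the treatment d (so there is no confounding).
x = rng.normal(size=1_000_000)

# Potential-outcome probabilities averaged over the covariate distribution
# (g-computation under randomization of d).
p1 = sigmoid(0.5 + 1.0 + x).mean()   # average P(y=1 | do(d=1))
p0 = sigmoid(0.5 + x).mean()         # average P(y=1 | do(d=0))

# Marginal odds ratio: odds ratio of the averaged probabilities.
marginal_or = (p1 / (1 - p1)) / (p0 / (1 - p0))

# Conditional (covariate-specific) odds ratio: exp(coefficient on d).
conditional_or = np.exp(1.0)
```

Even without confounding, `marginal_or` is attenuated toward 1 relative to `conditional_or`, which is why the marginal odds ratio, like an average marginal effect, is stable under adding or omitting independent covariates while the conventional (conditional) odds ratio is not.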
By:  Hafner, Christian (Université catholique de Louvain, LIDAM/ISBA, Belgium); Herwartz, Helmut (University of Göttingen) 
Abstract:  Volatility impulse response functions (VIRFs) have been introduced to unravel the effects of shocks on (co)variances for the case of classical multivariate GARCH specifications. This paper proposes generalized VIRFs for the case of asymmetric specifications which capture stylized features such as the leverage effect. In a bivariate application comprising a global equity index and gold prices, we show that generalized VIRFs can be used to reassess the role of gold as a safehaven asset. 
Keywords:  Multivariate GARCH ; leverage effect ; volatility impulse response analysis ; safe haven 
JEL:  C32 G15 
Date:  2022–11–18 
URL:  http://d.repec.org/n?u=RePEc:aiz:louvad:2022037&r=ecm 