New Economics Papers on Econometrics
By: | Patrick Ding; Guido Imbens; Zhaonan Qu; Yinyu Ye |
Abstract: | Probit models are useful for modeling correlated discrete responses in many disciplines, including discrete choice data in economics. However, the Gaussian latent variable feature of probit models, coupled with identification constraints, poses significant computational challenges for their estimation and inference, especially when the dimension of the discrete response variable is large. In this paper, we propose a computationally efficient Expectation-Maximization (EM) algorithm for estimating large probit models. Our work is distinct from existing methods in two important aspects. First, instead of simulation or sampling methods, we apply and customize expectation propagation (EP), a deterministic method originally proposed for approximate Bayesian inference, to estimate moments of the truncated multivariate normal (TMVN) in the E (expectation) step. Second, we take advantage of a symmetric identification condition to transform the constrained optimization problem in the M (maximization) step into a one-dimensional problem, which is solved efficiently using Newton's method instead of off-the-shelf solvers. Our method enables the analysis of correlated choice data in the presence of more than 100 alternatives, which is a reasonable size in modern applications, such as online shopping and booking platforms, but has been difficult in practice with probit models. We apply our probit estimation method to study ordering effects in hotel search results on Expedia.com. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.09371 |
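The M-step above reduces a constrained covariance update to one-dimensional root finding. As a sketch of that final ingredient only, here is a generic one-dimensional Newton iteration; the toy objective and all names are illustrative stand-ins, not the paper's actual M-step objective.

```python
def newton_1d(grad, hess, x0, tol=1e-10, max_iter=50):
    """Generic 1-D Newton iteration: the kind of cheap solver the
    paper's symmetric identification condition makes sufficient for
    the M-step. `grad`/`hess` are first and second derivatives of a
    (hypothetical) profiled objective."""
    x = x0
    for _ in range(max_iter):
        step = grad(x) / hess(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Toy usage: maximise f(x) = -(x - 2)^2 by zeroing its derivative.
print(newton_1d(grad=lambda x: -2 * (x - 2), hess=lambda x: -2.0, x0=0.0))
```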
By: | Undral Byambadalai; Tatsushi Oka; Shota Yasui |
Abstract: | We propose a novel regression adjustment method designed for estimating distributional treatment effect parameters in randomized experiments. Randomized experiments have been extensively used to estimate treatment effects in various scientific fields. However, to gain deeper insights, it is essential to estimate distributional treatment effects rather than relying solely on average effects. Our approach incorporates pre-treatment covariates into a distributional regression framework, utilizing machine learning techniques to improve the precision of distributional treatment effect estimators. The proposed approach can be readily implemented with off-the-shelf machine learning methods and remains valid as long as the nuisance components are reasonably well estimated. We also establish the asymptotic properties of the proposed estimator and present a uniformly valid inference method. Through simulation results and real data analysis, we demonstrate the effectiveness of integrating machine learning techniques in reducing the variance of distributional treatment effect estimators in finite samples. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.16037 |
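One way to picture this kind of estimator: within each arm, regress the indicator 1{Y <= t} on covariates with an off-the-shelf learner, then combine the fits with inverse-probability residual terms. The sketch below is a simplified single-split version (no cross-fitting) assuming a known assignment probability; the learner choice and all names are illustrative, not the paper's exact construction.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def adjusted_cdf_effect(y, d, X, thresholds, p_treat):
    """Regression-adjusted estimate of F_1(t) - F_0(t) at each t in
    `thresholds`, for a randomized experiment with known treatment
    probability `p_treat`. Within each arm, 1{Y <= t} is regressed on
    covariates and fitted values are combined with inverse-probability
    residual terms (doubly-robust-style moments)."""
    effects = []
    for t in thresholds:
        z = (y <= t).astype(float)
        mu = {}
        for arm in (0, 1):
            model = GradientBoostingRegressor().fit(X[d == arm], z[d == arm])
            mu[arm] = model.predict(X)
        f1 = np.mean(mu[1] + d * (z - mu[1]) / p_treat)
        f0 = np.mean(mu[0] + (1 - d) * (z - mu[0]) / (1 - p_treat))
        effects.append(f1 - f0)
    return np.array(effects)
```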
By: | Matias D. Cattaneo; Richard K. Crump; Max H. Farrell; Yingjie Feng |
Abstract: | Binned scatter plots are a powerful statistical tool for empirical work in the social, behavioral, and biomedical sciences. Available methods rely on a quantile-based partitioning estimator of the conditional mean regression function, primarily to construct flexible yet interpretable visualizations, though they can also be used to estimate treatment effects, assess uncertainty, and test substantive domain-specific hypotheses. This paper introduces novel binscatter methods based on nonlinear, possibly nonsmooth M-estimation methods, covering generalized linear, robust, and quantile regression models. We provide a host of theoretical results and practical tools for local constant estimation along with piecewise polynomial and spline approximations, including (i) optimal tuning parameter (number of bins) selection, (ii) confidence bands, and (iii) formal statistical tests regarding functional form or shape restrictions. Our main results rely on novel strong approximations for general partitioning-based estimators covering random, data-driven partitions, which may be of independent interest. We demonstrate our methods with an empirical application studying the relation between the percentage of individuals without health insurance and per capita income at the zip-code level. We provide general-purpose software packages implementing our methods in Python, R, and Stata. |
Keywords: | partition-based semi-linear estimators; linear models; quantile regression; robust bias correction; uniform inference; binning selection; treatment effect estimation
JEL: | C14 C18 C21 |
Date: | 2024–08–01 |
URL: | https://d.repec.org/n?u=RePEc:fip:fednsr:98622 |
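For readers unfamiliar with the quantile-based partitioning step that binscatter methods build on, here is a bare-bones sketch (assuming a continuously distributed regressor). It shows only the binning and within-bin averaging; the paper's actual contributions (nonlinear and quantile binscatter, bin-number selection, confidence bands, tests) are not reproduced, and the authors provide their own packages in Python, R, and Stata.

```python
import numpy as np

def binscatter(x, y, n_bins=20):
    """Quantile-based binned scatter: partition x into n_bins bins
    holding roughly equal numbers of observations and return the
    within-bin means of x and y."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    x_means = np.array([x[idx == b].mean() for b in range(n_bins)])
    y_means = np.array([y[idx == b].mean() for b in range(n_bins)])
    return x_means, y_means
```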
By: | Katerina Chrysikou; George Kapetanios |
Abstract: | In this paper we examine the existence of heterogeneity within a group, in panels with latent grouping structure. The assumption of within-group homogeneity is prevalent in this literature, implying that the formation of groups alleviates cross-sectional heterogeneity, regardless of the prior knowledge of groups. While the latter hypothesis makes inference powerful, it can often be restrictive. We allow for models with richer heterogeneity that can be found both in the cross-section and within a group, without imposing the simple assumption that all groups must be heterogeneous. We further contribute to the method proposed by Su, Shi, and Phillips (2016), by showing that the model parameters can be consistently estimated and the groups, while unknown, can be identified in the presence of different types of heterogeneity. Within the same framework we consider the validity of assuming both cross-sectional and within-group homogeneity, using testing procedures. Simulations demonstrate good finite-sample performance of the approach in both classification and estimation, while empirical applications across several datasets provide evidence of multiple clusters, as well as reject the hypothesis of within-group homogeneity. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.19509 |
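As a crude illustration of latent-group classification in panels, the sketch below clusters unit-wise OLS slopes with k-means. This is a stand-in heuristic for intuition only, not the C-Lasso of Su, Shi, and Phillips (2016) or the authors' procedure; the interface and names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def group_units(Y, X, n_groups):
    """Cluster panel units into latent groups via k-means on unit-wise
    OLS slopes. Y and X are (N, T) arrays of outcomes and a single
    regressor."""
    N, _ = Y.shape
    slopes = np.array([np.polyfit(X[i], Y[i], 1)[0] for i in range(N)])
    labels = KMeans(n_clusters=n_groups, n_init=10).fit_predict(
        slopes.reshape(-1, 1))
    return slopes, labels
```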
By: | Malte Londschien; Peter Bühlmann
Abstract: | We propose a weak-instrument-robust subvector Lagrange multiplier test for instrumental variables regression. We show that it is asymptotically size-correct under a technical condition. This is the first weak-instrument-robust subvector test for instrumental variables regression to recover the degrees of freedom of the commonly used Wald test, which is not robust to weak instruments. Additionally, we provide a closed-form solution for subvector confidence sets obtained by inverting the subvector Anderson-Rubin test. We show that they are centered around a k-class estimator. We also show that the subvector confidence sets for single coefficients of the causal parameter are jointly bounded if and only if Anderson's likelihood-ratio test rejects the hypothesis that the first-stage regression parameter is of reduced rank, that is, that the causal parameter is not identified. Finally, we show that if a confidence set obtained by inverting the Anderson-Rubin test is bounded and nonempty, it is equal to a Wald-based confidence set with a data-dependent confidence level. We explicitly compute this Wald-based confidence set. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.15256 |
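For context, the Anderson-Rubin statistic whose inversion the paper characterizes can be written down in a few lines. Below is the textbook homoskedastic version for the full parameter vector, as a sketch only; the paper's subvector LM test and closed-form confidence sets are not implemented here.

```python
import numpy as np
from scipy import stats

def anderson_rubin_test(y, X, Z, beta0):
    """Anderson-Rubin test of H0: beta = beta0 in y = X beta + u with
    instruments Z (n x k). Robust to weak instruments; inverting it
    over beta0 yields the confidence sets the paper characterizes.
    Homoskedastic version for illustration."""
    n, k = Z.shape
    u = y - X @ beta0
    Pz_u = Z @ np.linalg.solve(Z.T @ Z, Z.T @ u)   # projection of u on Z
    ar = (n - k) / k * (Pz_u @ Pz_u) / (u @ u - Pz_u @ Pz_u)
    pval = 1 - stats.f.cdf(ar, k, n - k)
    return ar, pval
```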
By: | Zhengyu Zhang; Zequn Jin; Lihua Lin |
Abstract: | This paper proposes a new class of distributional causal quantities, referred to as the outcome conditioned partial policy effects (OCPPEs), to measure the average effect of a general counterfactual intervention of a target covariate on the individuals in different quantile ranges of the outcome distribution. The OCPPE approach is valuable in several aspects: (i) Unlike the unconditional quantile partial effect (UQPE), which is not $\sqrt{n}$-estimable, an OCPPE is $\sqrt{n}$-estimable. Analysts can use it to capture heterogeneity across the unconditional distribution of $Y$ as well as obtain accurate estimation of the aggregated effect at the upper and lower tails of $Y$. (ii) The semiparametric efficiency bound for an OCPPE is explicitly derived. (iii) We propose an efficient debiased estimator for the OCPPE, and provide feasible uniform inference procedures for the OCPPE process. (iv) The efficient doubly robust score for an OCPPE can be used to optimize infinitesimal nudges to a continuous treatment by maximizing a quantile-specific empirical welfare function. We illustrate the method by analyzing how anti-smoking policies impact low percentiles of live infants' birthweights. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.16950 |
By: | Zhang, Siliang; Chen, Yunxiao |
Abstract: | The Ising model has become a popular psychometric model for analyzing item response data. The statistical inference of the Ising model is typically carried out via a pseudo-likelihood, as the standard likelihood approach suffers from a high computational cost when there are many variables (i.e., items). Unfortunately, the presence of missing values can hinder the use of pseudo-likelihood, and a listwise deletion approach for missing data treatment may introduce a substantial bias into the estimation and sometimes yield misleading interpretations. This paper proposes a conditional Bayesian framework for Ising network analysis with missing data, which integrates a pseudo-likelihood approach with iterative data imputation. An asymptotic theory is established for the method. Furthermore, a computationally efficient Pólya–Gamma data augmentation procedure is proposed to streamline the sampling of model parameters. The method’s performance is shown through simulations and a real-world application to data on major depressive and generalized anxiety disorders from the National Epidemiological Survey on Alcohol and Related Conditions (NESARC). |
Keywords: | Ising model; iterative imputation; full conditional specification; network psychometrics; mental health disorders; major depressive disorder; generalized anxiety disorder
JEL: | C1 |
Date: | 2024–07–06 |
URL: | https://d.repec.org/n?u=RePEc:ehl:lserod:123984 |
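To make the pseudo-likelihood idea concrete: in the Ising model each item's conditional distribution given the others is logistic, so the intractable joint normalizing constant never has to be evaluated. Below is a minimal sketch for complete binary data; the parameterization and names are illustrative, and the paper's actual contribution (iterative imputation for missing entries and Pólya-Gamma-based Bayesian sampling) is not shown.

```python
import numpy as np

def ising_pseudo_loglik(theta, X):
    """Log pseudo-likelihood for an Ising model on binary data X
    (n x p, entries in {0, 1}). `theta` packs a p x p matrix whose
    diagonal holds main effects and off-diagonal holds couplings.
    Maximize with, e.g., scipy.optimize.minimize on the negative of
    this function."""
    n, p = X.shape
    S = theta.reshape(p, p)
    S = (S + S.T) / 2                       # enforce symmetric couplings
    ll = 0.0
    for j in range(p):
        others = np.delete(np.arange(p), j)
        eta = S[j, j] + X[:, others] @ S[j, others]
        ll += np.sum(X[:, j] * eta - np.log1p(np.exp(eta)))
    return ll
```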
By: | Shi, Chengchun; Zhou, Yunzhe; Li, Lexin |
Abstract: | In this article, we propose a new hypothesis testing method for directed acyclic graphs (DAGs). While there is a rich class of DAG estimation methods, there is a relative paucity of DAG inference solutions. Moreover, the existing methods often impose some specific model structures such as linear models or additive models, and assume independent data observations. Our proposed test instead allows the associations among the random variables to be nonlinear and the data to be time-dependent. We build the test based on highly flexible neural network learners. We establish the asymptotic guarantees of the test, while allowing either the number of subjects or the number of time points for each subject to diverge to infinity. We demonstrate the efficacy of the test through simulations and a brain connectivity network analysis. Supplementary materials for this article are available online. |
Keywords: | brain connectivity networks; directed acyclic graph; hypothesis testing; generative adversarial networks; multilayer perceptron neural networks
JEL: | C1 |
Date: | 2023–07–12 |
URL: | https://d.repec.org/n?u=RePEc:ehl:lserod:119446 |
By: | Florian Huber; Gary Koop; Massimiliano Marcellino; Tobias Scheckel |
Abstract: | Commonly used priors for Vector Autoregressions (VARs) induce shrinkage on the autoregressive coefficients. Introducing shrinkage on the error covariance matrix is sometimes done but, in the vast majority of cases, without considering the network structure of the shocks and by placing the prior on the lower Cholesky factor of the precision matrix. In this paper, we propose a prior on the VAR error precision matrix directly. Our prior, which resembles a standard spike and slab prior, models variable inclusion probabilities through a stochastic block model that clusters shocks into groups. Within groups, the probability of relations among members is higher (inducing less sparsity), whereas across groups the probability that members are conditionally related is lower. We show in simulations that our approach recovers the true network structure well. Using a US macroeconomic data set, we illustrate how our approach can be used to cluster shocks together and that this feature leads to improved density forecasts. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.16349 |
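The block-structured prior can be pictured as a matrix of inclusion probabilities driven by cluster labels: higher within a cluster, lower across clusters. The sketch below generates such a matrix under assumed labels; the probability values, names, and interface are illustrative, not the paper's hierarchical prior.

```python
import numpy as np

def sbm_inclusion_probs(labels, p_within=0.8, p_between=0.1):
    """Prior inclusion probabilities for off-diagonal precision-matrix
    elements under a stochastic block model: shocks in the same
    cluster are a priori more likely to be conditionally related."""
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    probs = np.where(same, p_within, p_between)
    np.fill_diagonal(probs, 0.0)        # diagonal handled separately
    return probs

# e.g. three shocks in cluster 0 and two in cluster 1:
print(sbm_inclusion_probs([0, 0, 0, 1, 1]))
```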
By: | Susana Campos-Martins (NIPE/Center for Research in Economics and Management, University of Minho; and Católica Lisbon School of Business & Economics); Cristina Amado (NIPE/Center for Research in Economics and Management, University of Minho, Portugal) |
Abstract: | In this paper we propose a multivariate generalisation of the multiplicative decomposition of the volatility within the class of conditional correlation GARCH models. The GARCH variance equations are multiplicatively decomposed into a deterministic nonstationary component describing the long-run movements in volatility and a short-run dynamic component allowing for volatility spillover effects across markets or assets. The conditional correlations are assumed to be time-invariant in their simplest form or generalised into a flexible dynamic parameterisation. Parameters of the model are estimated equation-by-equation by maximum likelihood applying the maximisation by parts algorithm to the variance equations, and thereafter to the structure of conditional correlations. An empirical application using carbon markets data illustrates the usefulness of the model. After modelling the variance equations accordingly, we find evidence that the transmission mechanism of shocks persists, supported by the presence of variance interactions that are robust to nonstationarity. |
Keywords: | Variance interactions; Nonstationarity; Short- and long-term volatility; Lagrange multiplier test. |
JEL: | C12 C13 C32 C51 |
Date: | 2023 |
URL: | https://d.repec.org/n?u=RePEc:nip:nipewp:13/2023 |
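A univariate caricature of the multiplicative decomposition may help: the conditional variance is a deterministic long-run path scaling a GARCH-type short-run component driven by rescaled returns. The sketch below omits the paper's multivariate spillover terms and conditional correlations entirely; parameter values and names are illustrative.

```python
import numpy as np

def multiplicative_variance(returns, g, omega=0.05, alpha=0.08, beta=0.90):
    """Univariate sketch of sigma2_t = g_t * h_t: a supplied
    deterministic long-run path g (e.g. a smooth function of time)
    scales a GARCH(1, 1)-type short-run component h_t driven by
    returns rescaled by the long-run component."""
    phi = returns / np.sqrt(g)          # rescaled innovations
    h = np.ones_like(returns, dtype=float)
    for t in range(1, len(returns)):
        h[t] = omega + alpha * phi[t - 1] ** 2 + beta * h[t - 1]
    return g * h
```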
By: | Yong Li (Renmin University of China); Sushanta K. Mallick; Nianling Wang (Capital University of Economics and Business); Jun Yu (University of Macau); Tao Zeng (Zhejiang University) |
Abstract: | This paper gives a rigorous justification to the deviance information criterion (DIC), which has been extensively used for model selection based on MCMC output. It is shown that, when a plug-in predictive distribution is used and under a set of regularity conditions, DIC is an asymptotically unbiased estimator of the expected Kullback-Leibler divergence between the data generating process and the plug-in predictive distribution. High-order expansions to DIC and the effective number of parameters are developed, facilitating investigation of the effect of the prior. DIC is used to compare alternative discrete-choice models, stochastic frontier models, and copula models in three empirical applications. |
Keywords: | AIC; DIC; Expected loss function; Kullback-Leibler divergence; Model comparison; Plug-in predictive distribution |
JEL: | C11 C52 C25 C22 C32 |
Date: | 2024–08 |
URL: | https://d.repec.org/n?u=RePEc:boa:wpaper:202415 |
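As a reminder of the quantity being justified, DIC is computable directly from MCMC output via the standard identities DIC = Dbar + pD = D(theta_bar) + 2 pD, with pD = Dbar - D(theta_bar). A minimal sketch, assuming the user supplies per-draw log-likelihoods and the log-likelihood at the posterior mean:

```python
import numpy as np

def dic(log_lik_draws, log_lik_at_mean):
    """Deviance information criterion from MCMC output.

    log_lik_draws: log p(y | theta_s) at each posterior draw.
    log_lik_at_mean: log p(y | theta_bar) at the posterior mean, i.e.
    the plug-in predictive distribution the paper's result targets."""
    d_bar = np.mean(-2.0 * np.asarray(log_lik_draws))
    d_hat = -2.0 * log_lik_at_mean
    p_d = d_bar - d_hat               # effective number of parameters
    return d_hat + 2.0 * p_d          # equivalently d_bar + p_d
```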
By: | Li, Jie; Fearnhead, Paul; Fryzlewicz, Piotr; Wang, Tengyao |
Abstract: | Detecting change points in data is challenging because of the range of possible types of change and types of behaviour of data when there is no change. Statistically efficient methods for detecting a change will depend on both of these features, and it can be difficult for a practitioner to develop an appropriate detection method for their application of interest. We show how to automatically generate new offline detection methods based on training a neural network. Our approach is motivated by the fact that many existing tests for the presence of a change point can be represented by a simple neural network, so a neural network trained with sufficient data should have performance at least as good as these methods. We present theory that quantifies the error rate for such an approach, and how it depends on the amount of training data. Empirical results show that, even with limited training data, its performance is competitive with the standard cumulative sum (CUSUM) based classifier for detecting a change in mean when the noise is independent and Gaussian, and can substantially outperform it in the presence of auto-correlated or heavy-tailed noise. Our method also shows strong results in detecting and localizing changes in activity based on accelerometer data. |
Keywords: | automatic statistician; classification; likelihood-free inference; neural networks; structural breaks; supervised learning
JEL: | C1 |
Date: | 2024–04–01 |
URL: | https://d.repec.org/n?u=RePEc:ehl:lserod:120083 |
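The CUSUM benchmark the learned detector is compared against is simple enough to state in full. A minimal sketch of the classical statistic for a single change in mean (the trained-network detector itself is not reproduced here):

```python
import numpy as np

def cusum_stat(x):
    """Standard CUSUM statistic for a single change in mean: the
    maximum over split points t of the scaled gap between the partial
    sum up to t and its no-change expectation, studentised by the
    sample standard deviation."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = np.cumsum(x)
    t = np.arange(1, n)
    stat = np.abs(s[t - 1] - t * s[-1] / n) / np.sqrt(t * (n - t) / n)
    return stat.max() / x.std(ddof=1)
```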
By: | Martin Bruns; Helmut Lütkepohl; James McNeil |
Abstract: | The shocks in structural vector autoregressive (VAR) analysis are typically assumed to be instantaneously uncorrelated. This condition may easily be violated in proxy VAR models if more than one shock is identified by a proxy variable. Correlated shocks may be obtained even if the proxies are uncorrelated and satisfy the usual relevance and exogeneity conditions individually. Examples from the recent proxy VAR literature are presented. It is shown that assuming uncorrelated proxies that satisfy the usual relevance and exogeneity conditions individually actually over-identifies the shocks of interest, and a Generalized Method of Moments (GMM) algorithm is proposed that ensures orthogonal shocks and provides efficient estimators of the structural parameters. It generalizes an earlier GMM proposal that works only if at least K − 1 shocks are identified by proxies in a VAR with K variables. |
Keywords: | Structural vector autoregression, proxy VAR, external instruments, correlated shocks, Generalized Method of Moments |
JEL: | C32 C36 |
Date: | 2024 |
URL: | https://d.repec.org/n?u=RePEc:diw:diwwpp:dp2095 |
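To fix ideas, the over-identification comes from stacking proxy-exogeneity moments with shock-orthogonality moments. The sketch below evaluates one such stacked moment vector at a candidate structural rotation; the exact moment set and weighting in the paper differ, and all names are illustrative.

```python
import numpy as np

def proxy_gmm_moments(B_inv, U, M):
    """Stacked moments of the kind the over-identification argument
    rests on. U: (T, K) reduced-form VAR residuals; M: (T, r) proxies
    for the first r structural shocks; shocks E = U @ B_inv.T.
    Exogeneity: proxies uncorrelated with the K - r non-target shocks.
    Orthogonality: shocks mutually uncorrelated with unit variance."""
    T, _ = U.shape
    r = M.shape[1]
    E = U @ B_inv.T
    exog = (M.T @ E[:, r:]) / T                    # should be ~ 0
    orth = E.T @ E / T - np.eye(E.shape[1])        # should be ~ 0
    return np.concatenate([exog.ravel(), orth[np.triu_indices_from(orth)]])
```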
By: | Yiqing Xu; Anqi Zhao; Peng Ding |
Abstract: | In many social science applications, researchers use the difference-in-differences (DID) estimator to establish causal relationships, exploiting cross-sectional variation in a baseline factor and temporal variation in exposure to an event that may affect all units. This approach, often referred to as generalized DID (GDID), differs from canonical DID in that it lacks a "clean control group" unexposed to the event after the event occurs. In this paper, we clarify GDID as a research design in terms of its data structure, feasible estimands, and identifying assumptions that allow the DID estimator to recover these estimands. We frame GDID as a factorial design with two factors: the baseline factor, denoted by $G$, and the exposure level to the event, denoted by $Z$, and define effect modification and causal interaction as the associative and causal effects of $G$ on the effect of $Z$, respectively. We show that under the canonical no anticipation and parallel trends assumptions, the DID estimator identifies only the effect modification of $G$ in GDID, and propose an additional generalized parallel trends assumption to identify causal interaction. Moreover, we show that the canonical DID research design can be framed as a special case of the GDID research design with an additional exclusion restriction assumption, thereby reconciling the two approaches. We illustrate these findings with empirical examples from economics and political science, and provide recommendations for improving practice and interpretation under GDID. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.11937 |
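The DID estimator at issue is the interaction coefficient in a two-way regression; in the GDID setting it identifies only the effect modification of $G$ unless generalized parallel trends also holds. A toy sketch on simulated data (all numbers illustrative):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Canonical DID as the interaction coefficient in a two-way regression.
rng = np.random.default_rng(0)
n = 1_000
df = pd.DataFrame({"G": rng.integers(0, 2, n), "post": rng.integers(0, 2, n)})
df["y"] = df.G + 0.5 * df.post + 2.0 * df.G * df.post + rng.normal(size=n)
print(smf.ols("y ~ G * post", data=df).fit().params["G:post"])  # ~ 2.0
```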
By: | Astill, Sam; Harvey, David I; Leybourne, Stephen J; Taylor, AM Robert |
Abstract: | The Bonferroni Q test of Campbell and Yogo (2006) is widely used in empirical studies investigating predictability in asset returns by strongly persistent and endogenous predictors. Its formulation, however, only allows for a constant mean in the predictor, seemingly at odds with many of the predictors used in practice. We establish the asymptotic size and local power properties of the Q test, and the corresponding Bonferroni t-test of Cavanagh, Elliott and Stock (1995), as operationalised for the constant mean case by Campbell and Yogo (2006), under a local-to-zero specification for a linear trend in the predictor, revealing that, for both tests, size and power depend on the magnitude of the trend. To rectify this we develop with-trend variants of the operational Bonferroni Q and t tests. However, where a trend is not present in the predictor we show that these tests lose (both finite sample and asymptotic local) power relative to the extant constant-only versions of the tests. In practice uncertainty will necessarily exist over whether a linear trend is genuinely present in the predictor or not. To deal with this, we also develop hybrid tests based on union-of-rejections and switching mechanisms to capitalise on the relative power advantages of the constant-only tests when a trend is absent (or very weak) and the with-trend tests otherwise. A further extension allows use of a conventional t-test where the predictor appears to be weakly persistent. We show that, overall, our recommended hybrid test can offer excellent size and power properties regardless of whether or not a linear trend is present in the predictor, or the predictor's degrees of persistence and endogeneity. An empirical application to an updated Welch and Goyal (2008) dataset illustrates the practical relevance of our new approach. |
Keywords: | predictive regression; linear trend; unknown regressor persistence; Bonferroni tests; hybrid tests; union of rejections |
Date: | 2024–08–12 |
URL: | https://d.repec.org/n?u=RePEc:esy:uefcwp:38947 |
By: | Agnes Norris Keiller; Áureo de Paula; John Van Reenen |
Abstract: | Standard methods for estimating production functions in the Olley and Pakes (1996) tradition require assumptions on input choices. We introduce a new method that exploits (increasingly available) data on a firm’s expectations of its future output and inputs that allows us to obtain consistent production function parameter estimates while relaxing these input demand assumptions. In contrast to dynamic panel methods, our proposed estimator can be implemented on very short panels (including a single cross-section), and Monte Carlo simulations show it outperforms alternative estimators when firms’ material input choices are subject to optimization error. Implementing a range of production function estimators on UK data, we find our proposed estimator yields results that are either similar to or more credible than commonly-used alternatives. These differences are larger in industries where material inputs appear harder to optimize. We show that TFP implied by our proposed estimator is more strongly associated with future jobs growth than existing methods, suggesting that failing to adequately account for input endogeneity may underestimate the degree of dynamic reallocation in the economy. |
JEL: | C21 C23 L11 L23 O31 |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:32725 |
By: | Shuze Chen; David Simchi-Levi; Chonghuan Wang |
Abstract: | As service systems grow increasingly complex and dynamic, many interventions are localized: they are available and take effect only in specific states. This paper investigates experiments with local treatments on a widely-used class of dynamic models, Markov Decision Processes (MDPs). Particularly, we focus on utilizing the local structure to improve the inference efficiency of the average treatment effect. We begin by demonstrating the efficiency of classical inference methods, including model-based estimation and temporal difference learning under a fixed policy, as well as classical A/B testing with general treatments. We then introduce a variance reduction technique that exploits the local treatment structure by sharing information for states unaffected by the treatment policy. Our new estimator effectively overcomes the variance lower bound for general treatments while matching the more stringent lower bound incorporating the local treatment structure. Furthermore, our estimator can optimally achieve a linear reduction with the number of test arms for a major part of the variance. Finally, we explore scenarios with perfect knowledge of the control arm and design estimators that further improve inference efficiency. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.19618 |
By: | Gabriel Montes-Rojas; Zacharias Psaradakis; Martín Sola |
Abstract: | We consider models for conditional quantiles in which parameters are subject to discrete changes governed by an exogenous, unobservable Markov chain. We argue that all quantiles of the conditional distribution of the response variable should share the Markov regimes. This gives an unambiguous classification of regimes and allows the capture of quantile-specific characteristics conditionally on the hidden regimes. The potential of our approach is illustrated using a quantile autoregression for U.S. inflation. |
Keywords: | Markov Switching; Quantile Regressions. |
JEL: | C32 C52 C58 |
Date: | 2024–08 |
URL: | https://d.repec.org/n?u=RePEc:udt:wpecon:2024_05 |
By: | Jason R. Blevins |
Abstract: | Continuous-time formulations of dynamic discrete choice games offer notable computational advantages, particularly in modeling strategic interactions in oligopolistic markets. This paper extends these benefits by addressing computational challenges in order to improve model solution and estimation. We first establish new results on the rates of convergence of the value iteration, policy evaluation, and relative value iteration operators in the model, holding fixed player beliefs. Next, we introduce a new representation of the value function in the model based on uniformization -- a technique used in the analysis of continuous-time Markov chains -- which allows us to draw a direct analogy to discrete-time models. Furthermore, we show that uniformization also leads to a stable method to compute the matrix exponential, an operator appearing in the model's log likelihood function when only discrete-time "snapshot" data are available. We also develop a new algorithm that concurrently computes the matrix exponential and its derivatives with respect to model parameters, enhancing computational efficiency. By leveraging the inherent sparsity of the model's intensity matrix, combined with sparse matrix techniques and precomputed addresses, we show how to significantly speed up computations. These strategies allow researchers to estimate more sophisticated and realistic models of strategic interactions and policy impacts in empirical industrial organization. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.14914 |
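The uniformization identity the paper builds on is standard for continuous-time Markov chains: with $\Lambda \ge \max_i |Q_{ii}|$ and $P = I + Q/\Lambda$, one has $\exp(Qt) = \sum_k e^{-\Lambda t}(\Lambda t)^k/k! \, P^k$. A minimal dense-matrix sketch, without the sparsity tricks or parameter derivatives where the paper's contributions lie:

```python
import numpy as np
from scipy import linalg, stats

def expm_uniformized(Q, t, tol=1e-12):
    """exp(Q t) for a CTMC generator Q via uniformization:
    Lam >= max_i |Q_ii|, P = I + Q / Lam, and
    exp(Q t) = sum_k Poisson(k; Lam * t) * P^k."""
    Lam = np.max(np.abs(np.diag(Q)))
    P = np.eye(len(Q)) + Q / Lam
    out = np.zeros_like(Q, dtype=float)
    Pk = np.eye(len(Q))
    k, mass = 0, 0.0
    while mass < 1.0 - tol:            # stop once Poisson mass is spent
        w = stats.poisson.pmf(k, Lam * t)
        out += w * Pk
        mass += w
        Pk = Pk @ P
        k += 1
    return out

Q = np.array([[-1.0, 1.0], [0.5, -0.5]])
print(np.allclose(expm_uniformized(Q, 2.0), linalg.expm(2.0 * Q)))  # True
```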
By: | Devang Sinha; Siddhartha P. Chakrabarty |
Abstract: | In this paper, we examine the Sample Average Approximation (SAA) procedure within a framework where the Monte Carlo estimator of the expectation is biased. We also introduce Multilevel Monte Carlo (MLMC) in the SAA setup to enhance the computational efficiency of solving optimization problems. In this context, we conduct a thorough analysis, exploiting Cramér's large deviation theory, to establish uniform convergence, quantify the convergence rate, and determine the sample complexity for both standard Monte Carlo and MLMC paradigms. Additionally, we perform a root-mean-squared error analysis utilizing tools from empirical process theory to derive sample complexity without relying on the finite moment condition typically required for uniform convergence results. Finally, we validate our findings and demonstrate the advantages of the MLMC estimator through numerical examples, estimating Conditional Value-at-Risk (CVaR) in the Geometric Brownian Motion and nested expectation framework. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.18504 |
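The MLMC idea is a telescoping sum, $E[Y_L] = E[Y_0] + \sum_{l=1}^{L} E[Y_l - Y_{l-1}]$, with the level pairs coupled so the differences have small variance. A generic sketch with a toy coupled sampler; the interface and names are illustrative, not the paper's SAA setup.

```python
import numpy as np

def mlmc_estimate(sampler, levels, n_per_level):
    """Generic MLMC estimator via the telescoping sum. `sampler(l, n)`
    must return n coupled draws (fine, coarse), with coarse = 0 at
    level 0, so the level-difference means add up to E[Y_L]."""
    return sum(np.mean(np.subtract(*sampler(l, n)))
               for l, n in zip(levels, n_per_level))

# Toy coupled sampler: level-l approximation of X ~ N(3, 1) is
# X + eps / 2**l, sharing (X, eps) between fine and coarse draws.
rng = np.random.default_rng(1)
def sampler(l, n):
    x, eps = rng.normal(3.0, 1.0, n), rng.normal(size=n)
    fine = x + eps / 2**l
    coarse = np.zeros(n) if l == 0 else x + eps / 2 ** (l - 1)
    return fine, coarse

print(mlmc_estimate(sampler, levels=[0, 1, 2],
                    n_per_level=[4000, 2000, 1000]))  # ~ 3.0
```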
By: | Jaeho Kim (Sogang University); Scott C. Linn (University of Oklahoma); Sora Chon (Inha University) |
Abstract: | We demonstrate the superior performance of the price discovery measure recently developed by Kim and Linn (2022), termed the Long-run Forecast Share (LFS). Our examination involves a comparison of LFS with existing measures and highlights its wide applicability across various data generating processes. Recent studies, such as Shen et al. (2024) and Lautier et al. (2024), have not reported the uncertainty arising from finite-sample estimation of price discovery measures. Our empirical investigation reveals that estimation uncertainty is significant in many cases, highlighting the importance of accurately quantifying this uncertainty. We introduce a novel approach for implementing the calculation of LFS based on its structural interpretation and demonstrate how our method allows quantification of the uncertainty associated with the measure. Our primary conclusions are based upon extensive simulation experiments across numerous data generating processes. We also present an in-depth investigation of price discovery in the spot and futures markets for key metal and energy commodities and find that LFS provides consistent conclusions across a variety of assumptions. |
Keywords: | Price discovery, Futures and spot prices, Cointegration, Beveridge-Nelson decomposition |
JEL: | C11 C32 C58 G14 |
Date: | 2024–08 |
URL: | https://d.repec.org/n?u=RePEc:inh:wpaper:2024-2 |
By: | Anders Bredahl Kock; David Preinerstorfer |
Abstract: | Tests based on the $2$- and $\infty$-norm have received considerable attention in high-dimensional testing problems, as they are powerful against dense and sparse alternatives, respectively. The power enhancement principle of Fan et al. (2015) combines these two norms to construct tests that are powerful against both types of alternatives. Nevertheless, the $2$- and $\infty$-norm are just two out of the whole spectrum of $p$-norms that one can base a test on. In the context of testing whether a candidate parameter satisfies a large number of moment equalities, we construct a test that harnesses the strength of all $p$-norms with $p\in[2, \infty]$. As a result, this test is consistent against strictly more alternatives than any test based on a single $p$-norm. In particular, our test is consistent against more alternatives than tests based on the $2$- and $\infty$-norm, which is what most implementations of the power enhancement principle target. We illustrate the scope of our general results by using them to construct a test that simultaneously dominates the Anderson-Rubin test (based on $p=2$) and tests based on the $\infty$-norm in terms of consistency in the linear instrumental variable model with many (weak) instruments. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.17888 |
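The raw ingredients of such a test are simply the $p$-norms of the scaled sample moment vector over a grid of $p \in [2, \infty]$. A minimal sketch of the statistics only; the critical values and combination rule that make this a valid test come from the paper's theory (or a bootstrap) and are not implemented here.

```python
import numpy as np

def pnorm_stats(m_bar, ps=(2, 3, 4, 6, np.inf)):
    """p-norms of a vector of scaled sample moment averages for a grid
    of p in [2, infinity]: the building blocks of a test consistent
    against more alternatives than any single-norm test."""
    m_bar = np.asarray(m_bar)
    return {p: np.linalg.norm(m_bar, ord=p) for p in ps}

print(pnorm_stats(np.array([0.1, -0.2, 3.0, 0.05])))
```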