New Economics Papers on Econometrics
By: | Wayne Yuan Gao; Rui Wang |
Abstract: | We show that endogenous linear regression models can be identified without excluded instrumental variables, based on the standard mean independence condition and a no-multicollinearity condition on the conditional expectations of endogenous covariates given the included exogenous covariates. Based on the identification results, we propose two semiparametric estimators as well as a discretization-based estimator that does not require any nonparametric regressions. We establish their asymptotic normality, provide corresponding variance estimators, and demonstrate via simulations the good finite-sample performance of our proposed estimation and inference procedures. In particular, we find that the discretization-based estimator performs remarkably well in finite samples, while being very simple and fast to compute. |
Date: | 2023–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2304.00626&r=ecm |
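As a reading aid, a minimal sketch of the identification argument in the abstract above, in generic notation that is mine rather than the authors': with outcome $y$, endogenous covariates $x$, and included exogenous covariates $w$,
\[
y = x'\beta + w'\gamma + \varepsilon, \qquad \mathbb{E}[\varepsilon \mid w] = 0
\quad\Longrightarrow\quad
\mathbb{E}[y \mid w] = \mathbb{E}[x \mid w]'\beta + w'\gamma,
\]
so, loosely, $(\beta, \gamma)$ is identified from this conditional moment restriction provided the regressors $(\mathbb{E}[x \mid w], w)$ are not perfectly multicollinear, i.e. $\mathbb{E}[x \mid w]$ is not an exact linear function of $w$; no excluded instrument is required.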
By: | Taoufik Bouezmarni (Universite de Sherbrooke); Mohamed Doukali (School of Economics, University of East Anglia); Abderrahim Taamouti (University of Liverpool) |
Abstract: | This paper derives a consistent test of Granger causality at a given expectile. We also propose a sup-Wald test for jointly testing Granger causality at all expectiles that has the correct asymptotic size and power properties. Expectiles capture similar information to quantiles, but they have the merit of being much more straightforward to work with, since they are defined as the least-squares analogue of quantiles. Studying Granger causality in expectiles is therefore practically simpler and allows us to examine causality at all levels of the conditional distribution. Moreover, testing Granger causality at all expectiles provides a sufficient condition for testing Granger causality in distribution. A Monte Carlo simulation study reveals that our tests have good finite-sample size and power properties for a variety of data-generating processes and different sample sizes. Finally, we provide two empirical applications to illustrate the usefulness of the proposed tests. |
Keywords: | Granger causality in expectiles, Granger causality in distribution, expectile regression function, asymmetric loss function, sup-Wald test. |
JEL: | C12 C22 |
Date: | 2023–04 |
URL: | http://d.repec.org/n?u=RePEc:uea:ueaeco:2023-02&r=ecm |
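For reference, the standard definition behind the abstract above (textbook material, not taken from the paper): the $\tau$-expectile of $Y$ is the least-squares analogue of the $\tau$-quantile, obtained from an asymmetric squared loss,
\[
m_\tau(Y) \;=\; \arg\min_{m}\; \mathbb{E}\big[\,\big|\tau - \mathbf{1}\{Y \le m\}\big|\,(Y - m)^2\,\big],
\]
and Granger non-causality from $X$ to $Y$ at level $\tau$ means that the conditional $\tau$-expectile of $Y_t$ given its own past is unchanged when past values of $X$ are added to the conditioning set.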
By: | Yukun Man; Pedro H. C. Sant'Anna; Yuya Sasaki; Takuya Ura |
Abstract: | In this paper, we derive a new class of doubly robust estimators for treatment effect estimands that are also robust against weak covariate overlap. Our proposed estimator relies on trimming observations with extreme propensity scores and uses a bias-correction device to address the bias induced by trimming. Our framework accommodates many research designs, such as unconfoundedness, local treatment effects, and difference-in-differences. Simulation exercises illustrate that our proposed tools indeed have attractive finite-sample properties, aligned with our theoretical asymptotic results. |
Date: | 2023–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2304.08974&r=ecm |
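A minimal sketch of the trimming step the abstract refers to, assuming a binary treatment under unconfoundedness; the function below and its names are illustrative only, and it deliberately omits the paper's bias-correction device for the trimming bias:

```python
# Hedged sketch (not the authors' estimator): an AIPW/doubly-robust ATE estimate
# in which observations with extreme estimated propensity scores are trimmed.
# The paper's bias correction for the trimming bias is NOT reproduced here.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

def trimmed_aipw_ate(y, d, x, eps=0.05):
    """ATE via AIPW, dropping units with propensity score outside [eps, 1-eps]."""
    ps = LogisticRegression(max_iter=1000).fit(x, d).predict_proba(x)[:, 1]
    mu1 = LinearRegression().fit(x[d == 1], y[d == 1]).predict(x)
    mu0 = LinearRegression().fit(x[d == 0], y[d == 0]).predict(x)
    keep = (ps > eps) & (ps < 1 - eps)             # trimming step
    psi = (mu1 - mu0
           + d * (y - mu1) / ps
           - (1 - d) * (y - mu0) / (1 - ps))       # AIPW influence-function terms
    return psi[keep].mean()                        # trimmed, uncorrected estimate
```

The naive trimmed average above targets a trimmed population; the paper's contribution is precisely to correct the bias this introduces while retaining double robustness under weak overlap.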
By: | Jackson Bunting; Takuya Ura |
Abstract: | Many estimators of dynamic discrete choice models with permanent unobserved heterogeneity have desirable statistical properties but may be computationally intensive. In this paper we propose a method that speeds up estimation for a broad class of dynamic discrete choice problems by exploiting index sufficiency. Index sufficiency implies a set of equality constraints which restrict the structural parameter of interest to lie in a subspace of the parameter space. We propose an estimator that uses these equality constraints, and show it is asymptotically equivalent to the unconstrained, computationally heavy estimator. Since the computational gains of our proposed estimator are due to the restriction of the parameter space to the subspace satisfying the equality constraints, we provide a series of results on the dimension of this subspace. Finally, we demonstrate the advantages of our approach by estimating a dynamic model of the U.K. fast food market. |
Date: | 2023–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2304.02171&r=ecm |
By: | Benoit Oriol; Alexandre Miot |
Abstract: | This work addresses large-dimensional covariance matrix estimation with unknown mean. The empirical covariance estimator fails when the dimension and the number of samples are proportional and tend to infinity, a setting known as Kolmogorov asymptotics. When the mean is known, Ledoit and Wolf (2004) proposed a linear shrinkage estimator and proved its convergence under these asymptotics. To the best of our knowledge, no formal proof has been proposed when the mean is unknown. To address this issue, we propose a new estimator and prove its quadratic convergence under the Ledoit and Wolf assumptions. Finally, we show empirically that it outperforms other standard estimators. |
Date: | 2023–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2304.07045&r=ecm |
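For context, a hedged sketch of the plug-in estimator whose theoretical justification the abstract says is missing: the Ledoit-Wolf (2004) linear shrinkage toward a scaled identity, computed after centering by the sample mean. The paper's own corrected estimator is not reproduced here, and the function name is mine.

```python
# Hedged sketch (not the paper's new estimator): naive Ledoit-Wolf (2004) linear
# shrinkage toward a scaled identity, with the unknown mean replaced by the
# sample mean.
import numpy as np

def ledoit_wolf_plugin(X):
    """X: (n, p) data matrix. Returns the shrunk covariance estimate."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)                      # plug-in sample mean
    S = Xc.T @ Xc / n                            # sample covariance
    mu = np.trace(S) / p                         # scale of the identity target
    d2 = np.linalg.norm(S - mu * np.eye(p), 'fro') ** 2 / p
    b2_bar = sum(np.linalg.norm(np.outer(x, x) - S, 'fro') ** 2
                 for x in Xc) / (n ** 2 * p)
    b2 = min(b2_bar, d2)                         # estimated shrinkage intensity
    return (b2 / d2) * mu * np.eye(p) + (1 - b2 / d2) * S
```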
By: | Degui Li; Runze Li; Han Lin Shang |
Abstract: | In this paper, we consider detecting and estimating breaks in heterogeneous mean functions of high-dimensional functional time series which are allowed to be cross-sectionally correlated and temporally dependent. A new test statistic combining the functional CUSUM statistic and a power enhancement component is proposed with asymptotic null distribution theory comparable to the conventional CUSUM theory derived for a single functional time series. In particular, the extra power enhancement component enlarges the region where the proposed test has power, and results in stable power performance when breaks are sparse in the alternative hypothesis. Furthermore, we impose a latent group structure on the subjects with heterogeneous break points and introduce an easy-to-implement clustering algorithm with an information criterion to consistently estimate the unknown group number and membership. The estimated group structure can subsequently improve the convergence property of the post-clustering break point estimate. Monte Carlo simulation studies and empirical applications show that the proposed estimation and testing techniques have satisfactory performance in finite samples. |
Date: | 2023–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2304.07003&r=ecm |
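The power-enhancement construction mentioned above, stated generically (following the general idea rather than the paper's exact statistic): the test uses $T = T_{\mathrm{CUSUM}} + T_0$ with $T_0 \ge 0$, where $T_0 \to 0$ under the null, so the asymptotic null distribution of $T$ is that of the functional CUSUM component alone, while $T_0$ diverges under sparse alternatives in which only a few series have breaks, which is exactly where the CUSUM component by itself has little power.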
By: | Liang Jiang; Liyao Li; Ke Miao; Yichong Zhang |
Abstract: | Our paper identifies a trade-off when using regression adjustments (RAs) in causal inference under covariate-adaptive randomizations (CARs). On the one hand, RAs can improve the efficiency of causal estimators by incorporating information from covariates that are not used in the randomization. On the other hand, RAs can degrade estimation efficiency due to their estimation errors, which are not asymptotically negligible when the number of regressors is of the same order as the sample size. Failure to account for the cost of RAs can result in tests that over-reject under the null hypothesis. To address this issue, we develop a unified inference theory for the regression-adjusted average treatment effect (ATE) estimator under CARs. Our theory has two key features: (1) it ensures the exact asymptotic size under the null hypothesis, regardless of whether the number of covariates is fixed or diverges at most at the rate of the sample size, and (2) it guarantees weak efficiency improvement over the ATE estimator with no adjustments. |
Date: | 2023–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2304.08184&r=ecm |
By: | Eric Qian |
Abstract: | Granular instrumental variables (GIV) have experienced sharp growth in empirical macro-finance. Their attraction lies in their applicability to a wide set of economic environments, such as demand systems and the estimation of spillovers. I propose a new estimator, called robust granular instrumental variables (RGIV), that, unlike GIV, allows for heterogeneous responses across units to the aggregate variable and for unknown shock variances, and does not rely on skewness of the size distribution of units. This generality allows researchers to account for and study unit-level heterogeneity. I also develop an overidentification test that evaluates RGIV's compatibility with the data and a parameter-restriction test that evaluates the appropriateness of the homogeneous-coefficient assumption. In simulations, I show that RGIV produces reliable and informative confidence intervals. |
Date: | 2023–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2304.01273&r=ecm |
By: | Joann Jasiak; Purevdorj Tuvaandorj |
Abstract: | This paper extends three Lasso inferential methods, the Debiased Lasso, $C(\alpha)$, and Selective Inference, to a survey environment. We establish the asymptotic validity of the inference procedures in generalized linear models with survey weights and/or heteroskedasticity. Moreover, we generalize the methods to inference on nonlinear functions of the parameters, e.g. the average marginal effect in survey logit models. We illustrate the effectiveness of the approach using simulated data and data from the 2020 Canadian Internet Use Survey. |
Date: | 2023–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2304.07855&r=ecm |
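For orientation, the generic debiased (desparsified) Lasso correction in a linear model, written without survey weights; the paper's weighted, generalized-linear-model version is not reproduced here:
\[
\hat b \;=\; \hat\beta_{\text{lasso}} \;+\; \hat\Theta\,\frac{X'(y - X\hat\beta_{\text{lasso}})}{n},
\]
where $\hat\Theta$ is an approximate inverse of $\hat\Sigma = X'X/n$ (e.g. from nodewise Lasso regressions). Componentwise, $\sqrt{n}(\hat b_j - \beta_j)$ is asymptotically normal, which is what makes Wald-type inference possible after selection; in the survey setting, the score $X'(y - X\hat\beta)/n$ would presumably be replaced by its weighted analogue.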
By: | Bulat Gafarov |
Abstract: | It is well known that in the presence of heteroscedasticity the ordinary least squares (OLS) estimator is not efficient. I propose a generalized automatic least squares estimator (GALS) that makes a partial correction for heteroscedasticity based on a (potentially) misspecified model, without a pretest. Such an estimator is guaranteed to be at least as efficient as either OLS or weighted least squares (WLS), and can provide some asymptotic efficiency gains over OLS if the misspecified model is approximately correct. If the heteroscedasticity model is correct, the proposed estimator achieves full asymptotic efficiency. The idea is to frame the moment conditions corresponding to OLS and to WLS based on the misspecified heteroscedasticity model as a joint generalized method of moments (GMM) estimation problem. The resulting optimal GMM estimator is equivalent to feasible GLS with an estimated weight matrix. I also propose an optimal GMM variance-covariance estimator for GALS to account for any remaining heteroscedasticity in the residuals. |
Date: | 2023–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2304.07331&r=ecm |
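In generic notation (mine, not necessarily the paper's), the construction described above stacks the OLS and WLS moment conditions,
\[
\mathbb{E}\big[x_i\,(y_i - x_i'\beta)\big] = 0
\qquad\text{and}\qquad
\mathbb{E}\!\left[\frac{x_i\,(y_i - x_i'\beta)}{\widehat\sigma^2(x_i)}\right] = 0,
\]
where $\widehat\sigma^2(\cdot)$ is the fitted, possibly misspecified skedastic function, and estimates $\beta$ by optimal two-step GMM on the stacked system. The optimal weighting is what guarantees an estimator at least as efficient as either OLS or WLS alone, with full efficiency when the skedastic model is correct.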
By: | F. Blasques (VU University Medical Center [Amsterdam]); Christian Francq (CREST - Centre de Recherche en Économie et Statistique - ENSAI - Ecole Nationale de la Statistique et de l'Analyse de l'Information [Bruz] - X - École polytechnique - ENSAE Paris - École Nationale de la Statistique et de l'Administration Économique - CNRS - Centre National de la Recherche Scientifique); Sébastien Laurent (AMSE - Aix-Marseille Sciences Economiques - EHESS - École des hautes études en sciences sociales - AMU - Aix Marseille Université - ECM - École Centrale de Marseille - CNRS - Centre National de la Recherche Scientifique) |
Abstract: | This paper introduces the class of quasi score-driven (QSD) models. This new class inherits and extends the basic ideas behind the development of score-driven (SD) models and addresses a number of unsolved issues in the score literature. In particular, the new class of models (i) generalizes many existing models, including SD models, (ii) disconnects the updating equation from the log-likelihood implied by the conditional density of the observations, (iii) allows testing of the assumptions behind SD models that link the updating equation of the conditional moment to the conditional density, (iv) allows QML estimation of SD models, and (v) allows explanatory variables to enter the updating equation. We establish the asymptotic properties of the QLE, QMLE and MLE of the proposed QSD model as well as the likelihood ratio and Lagrange multiplier test statistics. The finite sample properties are studied by means of an extensive Monte Carlo study. Finally, we show the empirical relevance of QSD models to estimate the conditional variance of 400 US stocks. |
Keywords: | Score-driven models, GARCH, Fat-tails, Asymmetry, QLE, QMLE |
Date: | 2023–05 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-04069143&r=ecm |
By: | Florian Huber; Massimiliano Marcellino |
Abstract: | Model mis-specification in multivariate econometric models can strongly influence quantities of interest such as structural parameters, forecast distributions or responses to structural shocks, even more so if higher-order forecasts or responses are considered, due to parameter convolution. We propose a simple method for addressing these specification issues in the context of Bayesian VARs. Our method, called coarsened Bayesian VARs (cBVARs), replaces the exact likelihood with a coarsened likelihood that takes into account that the model might be mis-specified along important but unknown dimensions. Coupled with a conjugate prior, this results in a computationally simple model. As opposed to more flexible specifications, our approach avoids overfitting, is simple to implement and estimation is fast. The resulting cBVAR performs well in simulations for several types of mis-specification. Applied to US data, cBVARs improve point and density forecasts compared to standard BVARs, and lead to milder but more persistent negative effects of uncertainty shocks on output. |
Date: | 2023–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2304.07856&r=ecm |
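For readers unfamiliar with coarsening, the device the abstract alludes to can be written, in its simplest approximate form (the coarsened-posterior idea of Miller and Dunson; whether the paper uses exactly this power-likelihood approximation is not verified here), as replacing the exact posterior with
\[
p_{\zeta}(\theta \mid y) \;\propto\; p(\theta)\,\big[p(y \mid \theta)\big]^{\zeta}, \qquad 0 < \zeta \le 1,
\]
where the tempering parameter $\zeta$ downweights the likelihood to reflect the suspicion that the model is misspecified along unknown dimensions; coupled with a conjugate prior, the tempered VAR posterior remains in closed form, which is the source of the computational simplicity claimed above.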
By: | Malte Jahn |
Abstract: | Time series of counts are frequently analyzed using generalized integer-valued autoregressive models with conditional heteroskedasticity (INGARCH). These models employ response functions to map a vector of past observations and past conditional expectations to the conditional expectation of the present observation. In this paper, it is shown how INGARCH models can be combined with artificial neural network (ANN) response functions to obtain a class of nonlinear INGARCH models. The ANN framework allows for the interpretation of many existing INGARCH models as degenerate versions of a corresponding neural model. Details on maximum likelihood estimation, marginal effects and confidence intervals are given. The empirical analysis of time series of bounded and unbounded counts reveals that the neural INGARCH models are able to outperform reasonable degenerate competitor models in terms of information loss. |
Date: | 2023–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2304.01025&r=ecm |
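A minimal sketch of the idea, assuming a Poisson INGARCH(1, 1) structure; the network architecture, the softplus link, and all names below are my own illustrative choices, not the paper's specification:

```python
# Hedged sketch, not the paper's exact model: a Poisson INGARCH(1,1) recursion in
# which the linear response function is replaced by a one-hidden-layer network
# mapping (y_{t-1}, lambda_{t-1}) to the conditional mean lambda_t.
import numpy as np
from scipy.special import gammaln

def softplus(z):
    return np.log1p(np.exp(z))

def neural_ingarch_intensities(y, W1, b1, w2, b2, lam0=1.0):
    """Conditional means lambda_t implied by an ANN response function."""
    lam = np.empty(len(y))
    lam[0] = lam0                                                # initial intensity
    for t in range(1, len(y)):
        h = np.tanh(W1 @ np.array([y[t - 1], lam[t - 1]]) + b1)  # hidden layer
        lam[t] = softplus(w2 @ h + b2)                           # keep intensity > 0
    return lam

def poisson_loglik(y, lam):
    """Log-likelihood that would be maximized over the network weights."""
    return np.sum(y * np.log(lam) - lam - gammaln(y + 1))
```

A network whose activations reduce to (near-)linear maps collapses to the usual linear INGARCH response, which is the sense in which the abstract calls existing models degenerate versions of the neural one.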
By: | L. Scaffidi Domianello; E. Otranto |
Abstract: | Markov Switching models have had increasing success in time series analysis due to their ability to capture the existence of unobserved discrete states in the dynamics of the variables under study. This result is generally obtained thanks to the inference on states derived from the so-called Hamilton filter. One of the open problems in this framework is the identification of the number of states, generally fixed a priori; it is in fact impossible to apply classical tests because of nuisance parameters that are present only under the alternative hypothesis. In this work we show, by Monte Carlo simulations, that fuzzy clustering is able to reproduce the parametric state inference derived from the Hamilton filter and that the typical indices used in clustering to determine the number of groups can be used to determine the number of states in this framework. The procedure is very simple to apply, considering that it is performed (in a nonparametric way) independently of the data generation process and that the indicators we use are present in most statistical packages. |
Keywords: | Simulations;Number of states;Nuisance parameters;Markov chains;Groups identification |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:cns:cnscwp:202304&r=ecm |
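A minimal, self-contained sketch of the nonparametric side of the procedure, with my own implementation and my own index choice (the fuzzy partition coefficient); the authors rely on standard indices available in statistical packages, and their exact choices are not reproduced here:

```python
# Hedged sketch: fuzzy c-means over candidate numbers of clusters, picking the
# number that maximizes a clustering validity index (here the fuzzy partition
# coefficient), as a stand-in for the number of Markov-switching states.
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, n_iter=200, seed=0):
    """X: (n, d) data. Returns membership matrix U (n, c) and centers (c, d)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))           # random fuzzy memberships
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted cluster centers
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (dist ** (2 / (m - 1)))                # membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

def pick_number_of_states(X, candidates=(2, 3, 4, 5)):
    """Choose the candidate with the largest fuzzy partition coefficient."""
    fpc = {c: (fuzzy_cmeans(X, c)[0] ** 2).sum() / len(X) for c in candidates}
    return max(fpc, key=fpc.get), fpc
```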
By: | Daoping Yu; Vytaras Brazauskas; Ricardas Zitikis |
Abstract: | To accommodate numerous practical scenarios, in this paper we extend statistical inference for smoothed quantile estimators from finite domains to infinite domains. We accomplish the task with the help of a newly designed truncation methodology for discrete loss distributions with infinite domains. A simulation study illustrates the methodology for several distributions, such as the Poisson, the negative binomial, and their zero-inflated versions, which are commonly used in the insurance industry to model claim frequencies. Additionally, we propose a very flexible bootstrap-based approach for use in practice. Using automobile accident data and their modifications, we compute what we have termed the conditional five-number summary (C5NS) for tail risk, construct confidence intervals for each of the five quantiles making up C5NS, and then calculate the tail probabilities. The results show that the smoothed quantile approach not only classifies the tail riskiness of portfolios more accurately but also produces lower coefficients of variation in the estimation of tail probabilities than the linear interpolation approach. |
Date: | 2023–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2304.02723&r=ecm |
By: | Engsted, Tom; Schneider, Jesper W. (Aarhus University) |
Abstract: | We argue that frequentist hypothesis testing - the dominant statistical evaluation paradigm in empirical research - is fundamentally unsuited to the analysis of the nonexperimental data prevalent in economics and other social sciences. Frequentist tests rest on incompatible repeated-sampling frameworks that do not obey the Likelihood Principle (LP). For probabilistic inference, methods that are guided by the LP, that do not rely on repeated sampling, and that focus on model comparison instead of testing (e.g., subjectivist Bayesian methods) are better suited to passively observed social science data and are better able to accommodate the huge model uncertainty and highly approximative nature of structural models in the social sciences. In addition to formal probabilistic inference, informal model evaluation along relevant substantive and practical dimensions should play a leading role. We sketch the ideas of an alternative paradigm containing these elements. |
Date: | 2023–04–10 |
URL: | http://d.repec.org/n?u=RePEc:osf:socarx:nztk8&r=ecm |
By: | Charles Beach |
Abstract: | This paper uses distribution-free formulas for the asymptotic variances of sample quantile income shares – as typically published by statistical agencies as measures of income inequality – to calculate how large a survey sample must be in order to estimate a more refined quantile breakdown at a given level of confidence. The approach is applied to decile and quintile earnings data to calculate the increases in sample size required to obtain estimates of the tail 5 percent quantile shares and to test changes in income shares. Simple rules of thumb are offered for the required increase. |
Keywords: | Income share standard errors, sample size, statistical inference |
JEL: | C12 C46 D31 D63 |
Date: | 2023–04 |
URL: | http://d.repec.org/n?u=RePEc:qed:wpaper:1505&r=ecm |
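The sample-size arithmetic behind such calculations, in its most familiar form (the asymptotic variance of a single sample quantile, not the paper's distribution-free income-share formulas):
\[
\sqrt{n}\,\big(\hat\xi_p - \xi_p\big) \;\xrightarrow{d}\; N\!\Big(0,\; \frac{p(1-p)}{f(\xi_p)^2}\Big),
\qquad\text{so}\qquad
n \;\ge\; \frac{p(1-p)}{f(\xi_p)^2\,(\mathrm{se}^*)^2}
\]
for a target standard error $\mathrm{se}^*$. Moving from a decile breakdown to tail 5 percent shares at comparable precision therefore scales the required $n$ by the ratio of the relevant asymptotic variances, which is the kind of rule of thumb the paper makes precise for income shares.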
By: | Junlong Feng; Sokbae Lee |
Abstract: | We introduce a novel framework for individual-level welfare analysis. It builds on a parametric model for continuous demand with a quasilinear utility function, allowing for unobserved individual-product-level preference shocks. We obtain bounds on the individual-level consumer welfare loss at any confidence level due to a hypothetical price increase, solving a scalable optimization problem constrained by a new confidence set under an independence restriction. This confidence set is computationally simple, robust to weak instruments and nonlinearity, and may have applications beyond welfare analysis. Monte Carlo simulations and two empirical applications on gasoline and food demand demonstrate the effectiveness of our method. |
Date: | 2023–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2304.01921&r=ecm |
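The role of quasilinearity here is the textbook one (a background fact, not the paper's bound construction): with utility $u(q) + z$ for a numeraire $z$, the exact individual welfare loss from a price increase $p^0 \to p^1$ equals the change in consumer surplus,
\[
\mathrm{WL}_i \;=\; \int_{p^0}^{p^1} q_i(p)\, dp,
\]
so bounds on the individual-level welfare loss follow from bounds on the individual demand function $q_i(\cdot)$ over the price interval, which, as the abstract describes, the paper obtains by optimizing over a confidence set under the independence restriction.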
By: | Julien Hambuckers; Marie Kratz; Antoine Usseglio-Carleve |
Abstract: | We introduce a method to estimate simultaneously the tail and the threshold parameters of an extreme value regression model. This standard model finds its use in finance to assess the effect of market variables on extreme loss distributions of investment vehicles such as hedge funds. However, a major limitation is the need to select ex ante a threshold below which data are discarded, leading to estimation inefficiencies. To solve these issues, we extend the tail regression model to non-tail observations with an auxiliary splicing density, enabling the threshold to be selected automatically. We then apply an artificial censoring mechanism of the likelihood contributions in the bulk of the data to decrease specification issues at the estimation stage. We illustrate the superiority of our approach for inference over classical peaks-over-threshold methods in a simulation study. Empirically, we investigate the determinants of hedge fund tail risks over time, using pooled returns of 1,484 hedge funds. We find a significant link between tail risks and factors such as equity momentum, a financial stability index, and credit spreads. Moreover, sorting funds along exposure to our tail risk measure discriminates between high and low alpha funds, supporting the existence of a fear premium. |
Date: | 2023–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2304.06950&r=ecm |
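A generic bulk-plus-tail splicing density of the kind described above (illustrative notation; the paper's exact parametrization and its artificial censoring of the bulk likelihood contributions are not reproduced):
\[
f(y) \;=\;
\begin{cases}
(1-\pi)\,\dfrac{f_b(y)}{F_b(u)}, & y \le u,\\[1.2ex]
\pi\, g_{\xi,\sigma}(y-u), & y > u,
\end{cases}
\]
where $f_b$ is the auxiliary bulk density, $g_{\xi,\sigma}$ is a generalized Pareto tail whose parameters may depend on covariates such as market variables, $u$ is the threshold, and $\pi$ is the tail probability. Treating $u$ and $\pi$ as parameters of the full likelihood is what allows the threshold to be selected automatically rather than fixed ex ante.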