NEP: New Economics Papers on Econometrics
By: | Mathieu Marcoux; Thomas Russell; Yuanyuan Wan |
Abstract: | This paper proposes a simple specification test for partially identified models with a large or possibly uncountably infinite number of conditional moment (in)equalities. The approach is valid under weak assumptions, allowing for both weak identification and non-differentiable moment conditions. Computational simplifications are obtained by reusing certain expensive-to-compute components of the test statistic when constructing the critical values. Because of the weak assumptions, the procedure faces a new set of interesting theoretical issues which we show can be addressed by an unconventional sample-splitting procedure that runs multiple tests of the same null hypothesis. The resulting specification test controls size uniformly over a large class of data generating processes, has power tending to 1 for fixed alternatives, and has power against certain local alternatives which we characterize. Finally, the testing procedure is demonstrated in three simulation exercises. |
Keywords: | Misspecification, Moment Inequality, Partial identification, Specification Testing |
JEL: | C12 |
Date: | 2023–11–14 |
URL: | http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-764&r=ecm |
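The sample-splitting idea above can be illustrated generically. A minimal sketch, assuming a hypothetical user-supplied `subsample_test` that returns a p-value for the moment-(in)equality null on a subsample (the paper's actual test statistic and combination rule are not reproduced); here the splits are combined with a simple Bonferroni rule:

```python
import numpy as np

def split_and_combine(data, subsample_test, n_splits=5, alpha=0.05, seed=0):
    """Run the same specification test on disjoint subsamples and combine
    the p-values conservatively. `subsample_test` is a hypothetical
    placeholder for a moment-(in)equality specification test."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(data)), n_splits)
    pvals = np.array([subsample_test(data[f]) for f in folds])
    # Reject the null if any split rejects at the Bonferroni-adjusted level.
    return pvals.min() <= alpha / n_splits, pvals
```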
By: | Torres, Santiago (Universidad de los Andes) |
Abstract: | This paper introduces a new estimator for continuity-based Regression Discontinuity (RD) designs, named the estimated Oracle Local Polynomial Estimator (OLPE). The OLPE is a weighted average of a collection of local polynomial estimators, each of which is characterized by a unique bandwidth sequence, polynomial order, and kernel weighting scheme, and whose weights are chosen to minimize the Mean-Squared Error (MSE) of the combination. This procedure yields a new consistent estimator of the target causal effect exhibiting lower bias and/or variance than its components. The precision gains stem from two factors. First, the method allocates more weight to estimators with lower asymptotic mean squared error, allowing it to select the specifications that are best suited to the specific estimation problem. Second, even if the individual estimators are not optimal, averaging mechanically leads to bias reduction and variance shrinkage. Although the OLPE weights are unknown, an “estimated” OLPE can be constructed by replacing the unobserved MSE-optimal weights with those derived from a consistent estimator. Monte Carlo simulations indicate that the estimated OLPE can significantly enhance precision compared to conventional local polynomial methods, even in small samples. The estimated OLPE remains consistent and asymptotically normal without imposing additional assumptions beyond those required for local polynomial estimators. Moreover, this approach applies to sharp, fuzzy, and kink RD designs, with or without covariates. |
Keywords: | Regression Discontinuity Designs; Non-parametric Estimation; Local Polynomial Estimators; Causal Inference; Mean-Squared Error. |
JEL: | C14 C46 C52 |
Date: | 2023–11–10 |
URL: | http://d.repec.org/n?u=RePEc:col:000089:020937&r=ecm |
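The combination step described above is, in the oracle case, a quadratic program with a closed-form solution. A minimal sketch, assuming estimated biases `b` and an estimated covariance matrix `V` of the component local polynomial estimators are available from a first stage (the paper's own bias and variance estimators are not reproduced):

```python
import numpy as np

def combination_weights(b, V):
    """MSE-minimizing weights for a weighted average of k estimators.
    Minimizes w' (b b' + V) w subject to sum(w) = 1, where b holds the
    (estimated) biases and V the (estimated) covariance matrix."""
    M = np.outer(b, b) + V
    ones = np.ones(len(b))
    Minv1 = np.linalg.solve(M, ones)
    return Minv1 / (ones @ Minv1)  # w* = M^{-1} 1 / (1' M^{-1} 1)

# Combined estimate: tau_hat = combination_weights(b, V) @ tau_components
```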
By: | Amrei Stammann |
Abstract: | Naive maximum likelihood estimation of binary logit models with fixed effects leads to unreliable inference due to the incidental parameter problem. We study the case of three-dimensional panel data, where the model includes three sets of additive and overlapping unobserved effects. This encompasses models for network panel data, where senders and receivers maintain bilateral relationships over time, and fixed effects account for unobserved heterogeneity at the sender-time, receiver-time, and sender-receiver levels. In an asymptotic framework, where all three panel dimensions grow large at constant relative rates, we characterize the leading bias of the naive estimator. The inference problem we identify is particularly severe, as it is not possible to balance the order of the bias and the standard deviation. As a consequence, the naive estimator has a degenerating asymptotic distribution, which exacerbates the inference problem relative to other fixed effects estimators studied in the literature. To resolve the inference problem, we derive explicit expressions to debias the fixed effects estimator. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.04073&r=ecm |
By: | Chaohua Dong; Jiti Gao; Bin Peng; Yayi Yan |
Abstract: | In this paper, we consider estimation and inference for both the multi-index parameters and the link function involved in a class of semiparametric multi-index models via deep neural networks (DNNs). We contribute to the design of DNNs by i) providing more transparency for practical implementation, ii) defining different types of sparsity, iii) showing the differentiability, iv) pointing out the set of effective parameters, and v) offering a new variant of the rectified linear unit (ReLU) activation function. Asymptotic properties for the joint estimates of both the index parameters and the link functions are established, and a feasible procedure for the purpose of inference is also proposed. We conduct extensive numerical studies to examine the finite-sample performance of the estimation methods, and we also evaluate the empirical relevance and applicability of the proposed models and estimation methods to real data. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.02789&r=ecm |
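A minimal PyTorch sketch of the model class: a bias-free linear layer extracts a few indices X'theta, and a small network stands in for the unknown link function (the paper's sparsity constraints and ReLU variant are not reproduced):

```python
import torch
import torch.nn as nn

class MultiIndexNet(nn.Module):
    """Semiparametric multi-index model y = g(X theta_1, ..., X theta_J)
    with the link g approximated by a small ReLU network."""
    def __init__(self, d, n_index=2, width=32):
        super().__init__()
        self.indices = nn.Linear(d, n_index, bias=False)  # index parameters
        self.link = nn.Sequential(                        # unknown link g
            nn.Linear(n_index, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, 1),
        )

    def forward(self, x):
        return self.link(self.indices(x)).squeeze(-1)

# Hypothetical training loop on tensors X, y:
# model = MultiIndexNet(d=X.shape[1]); opt = torch.optim.Adam(model.parameters())
# for _ in range(500):
#     opt.zero_grad(); ((model(X) - y) ** 2).mean().backward(); opt.step()
```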
By: | Michal Kolesár; Ulrich K. Müller; Sebastian T. Roelsgaard |
Abstract: | We show, using three empirical applications, that linear regression estimates which rely on the assumption of sparsity are fragile in two ways. First, we document that different choices of the regressor matrix that do not affect ordinary least squares (OLS) estimates, such as the choice of baseline category with categorical controls, can move sparsity-based estimates by two standard errors or more. Second, we develop two tests of the sparsity assumption based on comparing sparsity-based estimators with OLS. The tests tend to reject the sparsity assumption in all three applications. Unless the number of regressors is comparable to or exceeds the sample size, OLS yields more robust results at little efficiency cost. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.02299&r=ecm |
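The first fragility is easy to reproduce in simulation: re-coding the baseline category of a categorical control leaves OLS unchanged but can move a lasso-based estimate. A minimal sketch, with a post-double-selection-style lasso standing in for the sparsity-based estimators studied in the paper and an arbitrary penalty level:

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n, k = 500, 8
cat = rng.integers(0, k, n)                  # categorical control
d = 0.5 * (cat < 4) + rng.normal(size=n)     # treatment, confounded by cat
y = 1.0 * d + 0.3 * cat + rng.normal(size=n)

def dummies(cat, baseline):
    return np.column_stack([(cat == c).astype(float)
                            for c in range(k) if c != baseline])

def double_lasso(y, d, X, alpha=0.05):
    ry = y - Lasso(alpha=alpha).fit(X, y).predict(X)   # partial out controls
    rd = d - Lasso(alpha=alpha).fit(X, d).predict(X)
    return (rd @ ry) / (rd @ rd)

for base in (0, k - 1):                       # two choices of baseline category
    X = dummies(cat, base)
    ols = LinearRegression().fit(np.column_stack([d, X]), y).coef_[0]
    print(base, round(double_lasso(y, d, X), 3), round(ols, 3))
# OLS is identical across baselines; the lasso-based estimate need not be.
```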
By: | Tymon Słoczyński; S. Derya Uysal; Jeffrey M. Wooldridge |
Abstract: | We show that when the propensity score is estimated using a suitable covariate balancing procedure, the commonly used inverse probability weighting (IPW) estimator, augmented inverse probability weighting (AIPW) with linear conditional mean, and inverse probability weighted regression adjustment (IPWRA) with linear conditional mean are all numerically the same for estimating the average treatment effect (ATE) or the average treatment effect on the treated (ATT). Further, suitably chosen covariate balancing weights are automatically normalized, which means that normalized and unnormalized versions of IPW and AIPW are identical. For estimating the ATE, the weights that achieve the algebraic equivalence of IPW, AIPW, and IPWRA are based on propensity scores estimated using the inverse probability tilting (IPT) method of Graham, Pinto and Egel (2012). For the ATT, the weights are obtained using the covariate balancing propensity score (CBPS) method developed in Imai and Ratkovic (2014). These equivalences also make covariate balancing methods attractive when the treatment is confounded and one is interested in the local average treatment effect. |
Date: | 2023–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2310.18563&r=ecm |
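A minimal numerical check of the ATE equivalence, assuming a logistic propensity score fitted by tilting, i.e. by solving exact covariate-balance conditions arm by arm (a generic sketch in the spirit of IPT, not the authors' code):

```python
import numpy as np
from scipy.optimize import root

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
T = rng.binomial(1, 1 / (1 + np.exp(-X @ np.array([0.2, 0.5, -0.5]))))
Y = X @ np.array([1.0, 1.0, 0.5]) + 2.0 * T + rng.normal(size=n)

sig = lambda z: 1 / (1 + np.exp(-z))
# Tilting: reweighted treated (control) covariate means match full-sample means.
g1 = root(lambda g: X.T @ (T / sig(X @ g) - 1), np.zeros(3)).x
g0 = root(lambda g: X.T @ ((1 - T) / (1 - sig(X @ g)) - 1), np.zeros(3)).x
e1, e0 = sig(X @ g1), sig(X @ g0)

ipw = np.mean(T * Y / e1) - np.mean((1 - T) * Y / (1 - e0))

b1 = np.linalg.lstsq(X[T == 1], Y[T == 1], rcond=None)[0]  # linear means
b0 = np.linalg.lstsq(X[T == 0], Y[T == 0], rcond=None)[0]
m1, m0 = X @ b1, X @ b0
aipw = np.mean(m1 - m0 + T * (Y - m1) / e1 - (1 - T) * (Y - m0) / (1 - e0))
print(ipw, aipw)  # numerically identical up to solver tolerance
```

With the intercept included in the balance conditions, the weights sum to one automatically, which is the auto-normalization property noted in the abstract.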
By: | Hui Lan; Vasilis Syrgkanis |
Abstract: | Accurate estimation of conditional average treatment effects (CATE) is at the core of personalized decision making. While there is a plethora of models for CATE estimation, model selection is a nontrivial task, due to the fundamental problem of causal inference. Recent empirical work provides evidence in favor of proxy loss metrics with double robust properties and in favor of model ensembling. However, theoretical understanding is lacking. Direct application of prior theoretical work leads to suboptimal oracle model selection rates due to the non-convexity of the model selection problem. We provide regret rates for the major existing CATE ensembling approaches and propose a new CATE model ensembling approach based on Q-aggregation using the doubly robust loss. Our main result shows that causal Q-aggregation achieves statistically optimal oracle model selection regret rates of $\frac{\log(M)}{n}$ (with $M$ models and $n$ samples), with the addition of higher-order estimation error terms related to products of errors in the nuisance functions. Crucially, our regret rate does not require that any of the candidate CATE models be close to the truth. We validate our new method on many semi-synthetic datasets and also provide extensions of our work to CATE model selection with instrumental variables and unobserved confounding. |
Date: | 2023–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2310.16945&r=ecm |
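A minimal sketch of the ensembling step, assuming the doubly robust pseudo-outcomes and candidate CATE predictions have already been computed from cross-fitted nuisance estimates; the Q-aggregation tuning below is illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def causal_q_aggregation(phi, preds, nu=0.5):
    """phi   : (n,) doubly robust pseudo-outcomes, e.g.
               m1 - m0 + (T - e) / (e * (1 - e)) * (Y - m_T).
       preds : (n, M) predictions of the M candidate CATE models.
       Minimizes (1 - nu) * R(f_w) + nu * sum_j w_j R(f_j) over the
       simplex, with R the mean squared error against phi."""
    M = preds.shape[1]
    R_j = np.mean((phi[:, None] - preds) ** 2, axis=0)  # individual risks

    def objective(w):
        return (1 - nu) * np.mean((phi - preds @ w) ** 2) + nu * w @ R_j

    res = minimize(objective, np.full(M, 1 / M), bounds=[(0, 1)] * M,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
    return res.x  # ensemble CATE predictions: preds @ res.x
```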
By: | Verena Monschang; Mark Trede; Bernd Wilfling |
Abstract: | Quaedvlieg (2021, Journal of Business & Economic Statistics) proposes a uniform Superior Predictive Ability (uSPA) test for comparing forecasts across multiple horizons. The procedure is based on a 'minimum Diebold-Mariano' test statistic, and asymptotic critical values are obtained via bootstrapping. We show, theoretically and via simulations, that Quaedvlieg's test is subject to massive size distortions. In this article, we establish several convergence results for the 'minimum Diebold-Mariano' statistic, revealing that appropriate asymptotic critical values are given by the quantiles of the standard normal distribution. The uSPA test modified this way (i) always keeps the nominal size, (ii) is size-exploiting along the boundary that separates the parameter subsets of the null and the alternative uSPA hypotheses, and (iii) is consistent. Based on the closed skew normal distribution, we present a procedure for approximating the power function and demonstrate the favorable finite-sample properties of our test. In an empirical replication, we find that Quaedvlieg's (2021) results on uSPA comparisons between direct and iterative forecasting methods are statistically inconclusive. |
Keywords: | Forecast evaluation, Joint-hypothesis testing, Stochastic dominance, Closed skew normal distribution |
JEL: | C12 C15 C52 C53 |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:cqe:wpaper:10623&r=ecm |
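A minimal sketch of the modified test: per-horizon Diebold-Mariano statistics, their minimum compared against a standard normal quantile; the long-run variance below uses a simple Newey-West estimator with an arbitrary lag choice:

```python
import numpy as np
from scipy.stats import norm

def newey_west_var(d, lags=4):
    """Long-run variance of a loss-differential series."""
    d = d - d.mean()
    T = len(d)
    v = d @ d / T
    for l in range(1, lags + 1):
        v += 2 * (1 - l / (lags + 1)) * (d[l:] @ d[:-l]) / T
    return v

def min_dm_uspa(loss_diff, alpha=0.05):
    """loss_diff : (T, H) losses of the benchmark minus the candidate at
    H horizons; uniform superiority means positive means at all horizons.
    Rejects when the smallest DM statistic exceeds the N(0, 1) quantile."""
    T, H = loss_diff.shape
    dm = np.array([np.sqrt(T) * loss_diff[:, h].mean()
                   / np.sqrt(newey_west_var(loss_diff[:, h]))
                   for h in range(H)])
    return dm.min() > norm.ppf(1 - alpha), dm
```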
By: | Daniel C. Schneider (Max Planck Institute for Demographic Research, Rostock, Germany) |
Abstract: | This paper lays out several new asymptotic inference results for discrete-time multistate models. First, it derives asymptotic covariance matrices for the outcome statistics of conditional and/or state expectancies, mean age at first entry, and lifetime risk. It then discusses group comparisons of these outcome measures, which require the calculation of a joint covariance matrix of two or more results. Finally, new procedures are presented for the estimation of multistate models over a partial age range, and how these subrange calculations relate to the result that is obtained from the full age range of the model. All newly derived expressions are compared against bootstrap results in order to verify correctness of results and to assess performance. |
Keywords: | multi-state life tables, statistical analysis |
JEL: | J1 Z0 |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:dem:wpaper:wp-2023-041&r=ecm |
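The outcome statistics themselves reduce to products of age-specific transition matrices. A minimal sketch of a conditional state expectancy in a three-state (healthy/disabled/dead) model with illustrative, age-constant transition probabilities; the paper's covariance formulas are not reproduced:

```python
import numpy as np

P = {a: np.array([[0.90, 0.07, 0.03],    # rows: from healthy/disabled/dead
                  [0.10, 0.75, 0.15],
                  [0.00, 0.00, 1.00]]) for a in range(50, 110)}

def state_expectancy(start_age, end_age, start_state=0, target_state=0):
    """Expected years in `target_state` between start_age and end_age,
    conditional on `start_state` at start_age. Choosing end_age below the
    model's maximum age gives the partial-age-range version."""
    occ = np.eye(3)[start_state]         # occupancy distribution
    total = 0.0
    for a in range(start_age, end_age):
        total += occ[target_state]       # one period credited
        occ = occ @ P[a]                 # advance one age step
    return total

print(state_expectancy(50, 110))         # full age range
print(state_expectancy(50, 70))          # partial age range
```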
By: | Didier Nibbering |
Abstract: | The number of parameters in a standard multinomial logit model increases linearly with the number of choice alternatives and number of explanatory variables. Since many modern applications involve large choice sets with categorical explanatory variables, which enter the model as large sets of binary dummies, the number of parameters in a multinomial logit model is often large. This paper proposes a new method for data-driven two-way parameter clustering over outcome categories and explanatory dummy categories in a multinomial logit model. A Bayesian Dirichlet process mixture model encourages parameters to cluster over the categories, which reduces the number of unique model parameters and provides interpretable clusters of categories. In an empirical application, we estimate the holiday preferences of 11 household types over 49 holiday destinations, and identify a small number of household segments with different preferences across clusters of holiday destinations. |
Keywords: | large choice sets, Dirichlet process prior, multinomial logit model, high-dimensional models |
JEL: | C11 C14 C25 C35 C51 |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2023-19&r=ecm |
By: | Yi Zhang; Kosuke Imai |
Abstract: | While there now exists a large literature on policy evaluation and learning, much of prior work assumes that the treatment assignment of one unit does not affect the outcome of another unit. Unfortunately, ignoring interference may lead to biased policy evaluation and yield ineffective learned policies. For example, treating influential individuals who have many friends can generate positive spillover effects, thereby improving the overall performance of an individualized treatment rule (ITR). We consider the problem of evaluating and learning an optimal ITR under clustered network (or partial) interference where clusters of units are sampled from a population and units may influence one another within each cluster. Under this model, we propose an estimator that can be used to evaluate the empirical performance of an ITR. We show that this estimator is substantially more efficient than the standard inverse probability weighting estimator, which does not impose any assumption about spillover effects. We derive the finite-sample regret bound for a learned ITR, showing that the use of our efficient evaluation estimator leads to the improved performance of learned policies. Finally, we conduct simulation and empirical studies to illustrate the advantages of the proposed methodology. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.02467&r=ecm |
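A minimal sketch of the baseline (standard IPW) evaluation estimator under clustered interference, assuming independent Bernoulli(p) assignment within clusters; the paper's more efficient estimator adds structure not reproduced here:

```python
import numpy as np

def itr_value_ipw(clusters, policy, p=0.5):
    """clusters : list of (X_c, A_c, Y_c) arrays per cluster.
    policy : maps a cluster's covariates X_c to a 0/1 treatment vector.
    Horvitz-Thompson at the cluster level: keep clusters whose realized
    assignment matches the policy, reweight by its probability."""
    vals = []
    for X_c, A_c, Y_c in clusters:
        pi_c = policy(X_c)
        match = float(np.all(A_c == pi_c))
        prob = np.prod(np.where(pi_c == 1, p, 1 - p))  # Pr(A_c = pi_c)
        vals.append(match * Y_c.mean() / prob)
    return np.mean(vals)
```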
By: | Xiaoyu Cheng |
Abstract: | A decision-maker (DM) faces uncertainty governed by a data-generating process (DGP), which is only known to belong to a set of sequences of independent but possibly non-identical distributions. A robust decision maximizes the DM's expected payoff against the worst possible DGP in this set. This paper studies how such robust decisions can be improved with data, where improvement is measured by expected payoff under the true DGP. I fully characterize when and how such an improvement can be guaranteed under all possible DGPs and develop inference methods to achieve it. These inference methods are needed because, as this paper shows, common inference methods (e.g., maximum likelihood or Bayesian) often fail to deliver such an improvement. Importantly, the developed inference methods are given by simple augmentations to standard inference procedures, and are thus easy to implement in practice. |
Date: | 2023–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2310.16281&r=ecm |
By: | Daniel C. Schneider (Max Planck Institute for Demographic Research, Rostock, Germany); Mikko Myrskylä (Max Planck Institute for Demographic Research, Rostock, Germany) |
Abstract: | Markov chains with rewards (MCWR) have been shown to be a useful modelling extension to discrete-time multistate models (DTMS). In this paper, we substantially improve and extend the possibilities that MCWR holds for DTMS. We make several contributions. First, we develop a system of creating and naming different rewards schemes, so-called "standard rewards". While some of these schemes are of interest in their own right, several new possibilities emerge when dividing one rewards result by another, the result of which we call "composite rewards". In total, we can define at least ten new useful outcome statistics based on MCWR that have not yet been used in the literature. Second, we derive expressions for asymptotic covariance matrices that are applicable for any standard rewards definition. Third, we show how joint covariance matrices of two or more rewards results can be obtained, which leads to expressions for covariance matrices of composite rewards. Finally, expressions for point estimates and covariance matrices of partial age ranges are derived. We confirm correctness of results by comparisons to simulation-based results (point estimates) and by comparisons to bootstrap-based results (covariance matrices). |
Keywords: | multi-state life tables, statistical analysis |
JEL: | J1 Z0 |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:dem:wpaper:wp-2023-042&r=ecm |
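The rewards machinery can be sketched as a backward recursion over ages, with composite rewards obtained as ratios of two rewards results. A minimal sketch, reusing the kind of three-state transition matrices shown earlier; the paper's naming scheme and covariance derivations are not reproduced:

```python
import numpy as np

P = {a: np.array([[0.90, 0.07, 0.03],
                  [0.10, 0.75, 0.15],
                  [0.00, 0.00, 1.00]]) for a in range(50, 110)}

def expected_reward(rewards, start_age, end_age):
    """Expected accumulated reward by starting state, via the backward
    recursion v_a = rewards + P_a v_{a+1}; `rewards` credits one period's
    payoff per state (zero in the absorbing death state)."""
    v = np.zeros(3)
    for a in reversed(range(start_age, end_age)):
        v = rewards + P[a] @ v
    return v

healthy = expected_reward(np.array([1.0, 0.0, 0.0]), 50, 110)
alive   = expected_reward(np.array([1.0, 1.0, 0.0]), 50, 110)
# A composite reward: share of remaining lifetime spent healthy, by state.
print(healthy[:2] / alive[:2])
```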
By: | TAYANAGI, Toshikazu; 田柳, 俊和; KUROZUMI, Eiji; 黒住, 英司 |
Keywords: | Structural break, structural change, break point, break fraction |
JEL: | C13 C22 |
Date: | 2023–11–14 |
URL: | http://d.repec.org/n?u=RePEc:hit:econdp:2023-04&r=ecm |
By: | Weihuan Huang |
Abstract: | CoVaR (conditional value-at-risk) is a crucial measure for assessing financial systemic risk, which is defined as a conditional quantile of a random variable, conditioned on other random variables reaching specific quantiles. It enables the measurement of risk associated with a particular node in financial networks, taking into account the simultaneous influence of risks from multiple correlated nodes. However, estimating CoVaR presents challenges due to the unobservability of the multivariate-quantiles condition. To address these challenges, we propose a two-step nonparametric estimation approach based on Monte Carlo simulation data. In the first step, we estimate the unobservable multivariate-quantiles using order statistics. In the second step, we employ a kernel method to estimate the conditional quantile conditional on the order statistics. We establish the consistency and asymptotic normality of the two-step estimator, along with a bandwidth selection method. The results demonstrate that, under a mild restriction on the bandwidth, the estimation error arising from the first step can be ignored. Consequently, the asymptotic results depend solely on the estimation error of the second step, as if the multivariate-quantiles in the condition were observable. Numerical experiments demonstrate the favorable performance of the two-step estimator. |
Date: | 2023–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2310.18658&r=ecm |
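A minimal sketch of the two-step estimator in a bivariate case, with a Gaussian kernel and a rule-of-thumb bandwidth in place of the paper's bandwidth selection method:

```python
import numpy as np

def covar_two_step(x, y, alpha=0.95, beta=0.95, h=None):
    """CoVaR of y given x at its alpha-quantile, from simulated draws.
    Step 1: an order statistic estimates the alpha-quantile of x.
    Step 2: a kernel-weighted beta-quantile of y around that value."""
    n = len(x)
    x_q = np.sort(x)[int(np.ceil(alpha * n)) - 1]   # step 1: order statistic
    h = h or 1.06 * x.std() * n ** (-1 / 5)         # rule-of-thumb bandwidth
    w = np.exp(-0.5 * ((x - x_q) / h) ** 2)         # Gaussian kernel weights
    order = np.argsort(y)
    cdf = np.cumsum(w[order]) / w.sum()             # weighted CDF along y
    return y[order][np.searchsorted(cdf, beta)]     # step 2: weighted quantile

rng = np.random.default_rng(2)
z = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=100_000)
print(covar_two_step(z[:, 0], z[:, 1]))
```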
By: | Costola, Michele; Iacopini, Matteo; Wichers, Casper |
Abstract: | A novel spatial autoregressive model for panel data is introduced, which incorporates multilayer networks and accounts for time-varying relationships. Moreover, the proposed approach allows the structural variance to evolve smoothly over time and enables the analysis of shock propagation in terms of time-varying spillover effects. The framework is applied to analyse the dynamics of international relationships among the G7 economies and their impact on stock market returns and volatilities. The findings underscore the substantial impact of cooperative interactions and highlight discernible disparities in network exposure across G7 nations, along with nuanced patterns in direct and indirect spillover effects. |
Keywords: | Bayesian inference, International relationships, Multilayer networks, Spatial autoregressive model, Time-varying networks, Stochastic volatility |
JEL: | C11 C33 C51 C58 |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:zbw:safewp:279783&r=ecm |
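The spillover decomposition in spatial autoregressive models follows from the reduced form y = (I - rho W)^{-1} (X beta + eps). A minimal static, single-layer sketch (the paper's multilayer, time-varying structure is not reproduced), splitting the effects matrix into direct and indirect parts:

```python
import numpy as np

n, rho, beta = 7, 0.4, 1.5                     # e.g., seven economies
W = np.ones((n, n)) - np.eye(n)
W /= W.sum(axis=1, keepdims=True)              # row-normalized network

S = np.linalg.inv(np.eye(n) - rho * W) * beta  # effect of x_j on y_i
direct = np.diag(S).mean()                     # own-shock effects
indirect = (S.sum(axis=1) - np.diag(S)).mean() # spillovers from the network
print(direct, indirect)
```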
By: | Philippe Goulet Coulombe (University of Quebec in Montreal); Mikael Frenette (University of Quebec in Montreal); Karin Klieber (Oesterreichische Nationalbank) |
Abstract: | We reinvigorate maximum likelihood estimation (MLE) for macroeconomic density forecasting through a novel neural network architecture with dedicated mean and variance hemispheres. Our architecture features several key ingredients making MLE work in this context. First, the hemispheres share a common core at the entrance of the network which accommodates various forms of time variation in the error variance. Second, we introduce a volatility emphasis constraint that breaks mean/variance indeterminacy in this class of overparametrized nonlinear models. Third, we conduct a blocked out-of-bag reality check to curb overfitting in both conditional moments. Fourth, the algorithm utilizes standard deep learning software and thus handles large datasets – both computationally and statistically. Ergo, our Hemisphere Neural Network (HNN) provides proactive volatility forecasts based on leading indicators when it can, and reactive volatility based on the magnitude of previous prediction errors when it must. We evaluate point and density forecasts with an extensive out-of-sample experiment and benchmark against a suite of models ranging from classics to more modern machine learning-based offerings. In all cases, HNN fares well by consistently providing accurate mean/variance forecasts for all targets and horizons. Studying the resulting volatility paths reveals its versatility, while probabilistic forecasting evaluation metrics showcase its enviable reliability. Finally, we also demonstrate how this machinery can be merged with other structured deep learning models by revisiting Goulet Coulombe (2022)'s Neural Phillips Curve. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:bbh:wpaper:23-04&r=ecm |
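A minimal PyTorch sketch of the two-hemisphere architecture: a shared core feeding separate mean and variance heads, trained by Gaussian maximum likelihood; the paper's volatility emphasis constraint and blocked out-of-bag check are omitted:

```python
import torch
import torch.nn as nn

class HemisphereNet(nn.Module):
    """Shared core with dedicated mean and variance hemispheres."""
    def __init__(self, d, width=64):
        super().__init__()
        self.core = nn.Sequential(nn.Linear(d, width), nn.ReLU())
        self.mean_head = nn.Sequential(nn.Linear(width, width), nn.ReLU(),
                                       nn.Linear(width, 1))
        self.var_head = nn.Sequential(nn.Linear(width, width), nn.ReLU(),
                                      nn.Linear(width, 1), nn.Softplus())

    def forward(self, x):
        h = self.core(x)
        return self.mean_head(h).squeeze(-1), self.var_head(h).squeeze(-1)

def gaussian_nll(mu, var, y, eps=1e-6):
    # MLE objective: 0.5 * (log sigma^2 + (y - mu)^2 / sigma^2), averaged.
    return 0.5 * (torch.log(var + eps) + (y - mu) ** 2 / (var + eps)).mean()

# Hypothetical training step on tensors X, y:
# model = HemisphereNet(X.shape[1]); opt = torch.optim.Adam(model.parameters())
# opt.zero_grad(); mu, var = model(X)
# gaussian_nll(mu, var, y).backward(); opt.step()
```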
By: | Rubin, Mark (Durham University) |
Abstract: | The inflation of Type I error rates is thought to be one of the causes of the replication crisis. Questionable research practices such as p-hacking are thought to inflate Type I error rates above their nominal level, leading to unexpectedly high levels of false positives in the literature and, consequently, unexpectedly low replication rates. In this article, I offer an alternative view. I argue that questionable and other research practices do not inflate Type I error rates that are relevant to the statistical inferences that researchers usually make. I begin by introducing the concept of Type I error rates. I then illustrate my argument with respect to model misspecification, multiple testing, selective inference, forking paths, exploratory analyses, p-hacking, optional stopping, double dipping, and HARKing. In each case, I demonstrate that relevant Type I error rates are not usually inflated above their nominal level and, in the rare cases that they are, the inflation is easily identified and resolved. I conclude that the replication crisis may be more a crisis of theoretical inference than of statistical inference. |
Date: | 2023–11–06 |
URL: | http://d.repec.org/n?u=RePEc:osf:metaar:3kv2b&r=ecm |
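The article's central distinction can be checked in a short simulation: across many studies each running many tests of true nulls, the per-test Type I error rate stays at the nominal level while the familywise rate is far higher. A minimal sketch with independent normal samples:

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(3)
studies, tests, n, alpha = 2000, 20, 30, 0.05

reject = np.empty((studies, tests), dtype=bool)
for s in range(studies):
    for t in range(tests):
        x = rng.normal(size=n)                 # the null is true
        reject[s, t] = ttest_1samp(x, 0).pvalue < alpha

print(reject.mean())              # per-test rate: close to 0.05
print(reject.any(axis=1).mean())  # familywise rate: near 1 - 0.95**20
```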
By: | Felix Chan; Laurent Pauwels |
Abstract: | The optimal aggregation of forecasts produced either from models or expert judgements presents an interesting challenge for managerial decisions. Mean absolute error (MAE) and mean squared error (MSE) losses are commonly employed as criteria of optimality to obtain the weights that combine multiple forecasts. While much is known about MSE in the context of forecast combination, less attention has been given to MAE. This paper shows that the optimal solutions from minimizing either MAE or MSE loss functions, i.e., the optimal weights, are equivalent provided that the weights sum to one. The equivalence holds under mild assumptions and includes a wide class of symmetric and asymmetric error distributions. The theoretical results are supported by a numerical study that features skewed and fat-tailed distributions. The practical implications of combining forecasts with MAE and MSE optimal weights are investigated empirically with a small sample of data on expert forecasts on inflation, growth, and unemployment rates for the European Union. The results show that MAE weights are less sensitive to outliers, and MSE and MAE weights can be close to equivalent even when the sample is small. |
Keywords: | forecasting, forecast combination, optimization, mean absolute error, optimal weights |
JEL: | C53 C61 |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:een:camaaa:2023-59&r=ecm |
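A minimal sketch of the comparison under the sum-to-one constraint: MSE weights in the familiar closed form for unbiased forecasts, MAE weights computed numerically, on illustrative heavy-tailed simulated errors:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
T, k = 400, 3
errors = rng.standard_t(df=3, size=(T, k)) @ np.diag([1.0, 1.5, 2.0])

Sigma = np.cov(errors, rowvar=False)        # MSE-optimal closed form:
inv1 = np.linalg.solve(Sigma, np.ones(k))   # w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)
w_mse = inv1 / inv1.sum()

res = minimize(lambda w: np.abs(errors @ w).mean(),  # MAE-optimal weights
               np.full(k, 1 / k),
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
print(w_mse, res.x)  # close to one another, in line with the equivalence
```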