NEP: New Economics Papers
on Econometrics
By: | Arboleda Cárcamo, David (Universidad de los Andes) |
Abstract: | Event-study designs rely on the validity of an identification assumption commonly referred to as Generalized Parallel Trends (GPT). This paper focuses on the problem of performing consistent estimation and statistical inference on treatment effects when GPT holds only after recursively differencing the outcome variable. Under this milder assumption, I construct a correction method that yields consistent estimators for the causal effects of interest through linear transformations of an initial vector of estimated treatment effects, and that consequently does not require estimating any new parameters. The correction method also provides a natural statistical test for empirically validating the GPT assumption, which I compare against two alternative tests commonly employed in practice. Simulations under 12 different DGPs calibrated to top-journal papers suggest that the new test outperforms the two alternatives in every setting, with its power function being up to 160% greater, while inducing little to no pretesting bias when the analysis is conditioned on non-rejection of the null.
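A hedged sketch of the general mechanics described above: once the correction is a known linear map applied to the event-study coefficients, corrected estimates and their covariance follow without estimating new parameters. The matrix A below (first differencing) is purely hypothetical and stands in for whatever transformation the paper's GPT correction prescribes.

```python
# Illustrative only: linear transformation of estimated treatment effects.
import numpy as np

rng = np.random.default_rng(0)
K = 5                                   # number of event-study coefficients
beta_hat = rng.normal(size=K)           # initial estimated treatment effects
V = np.eye(K) * 0.04                    # their estimated covariance matrix

# Hypothetical correction matrix: here, first-differencing of the coefficients,
# standing in for the linear transformation the GPT correction would prescribe.
A = np.eye(K) - np.eye(K, k=-1)

beta_corr = A @ beta_hat                # corrected effects
V_corr = A @ V @ A.T                    # their covariance (no new parameters)
se_corr = np.sqrt(np.diag(V_corr))
print(beta_corr, se_corr)
```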
Keywords: | Difference-in-Differences; Parallel Trends; Event-study; Empirical validation; Robust estimation. |
JEL: | C12 C13 C14 |
Date: | 2024–09–27 |
URL: | https://d.repec.org/n?u=RePEc:col:000089:021199 |
By: | Gregory Fletcher Cox |
Abstract: | Inequalities appear in many models; they can be as simple as assuming that a parameter, such as a regression coefficient or a treatment effect, is nonnegative. This paper focuses on the case in which there is only one inequality and proposes a particularly attractive confidence interval, the inequality-imposed confidence interval (IICI). The IICI is simple: it requires no simulations or tuning parameters. The IICI is adaptive: it reduces to the usual confidence interval (the estimate plus and minus the standard error times the $1 - \alpha/2$ standard normal quantile) when the inequality is sufficiently slack, and to an equality-imposed confidence interval (the usual confidence interval for the submodel where the inequality holds with equality) when the inequality is sufficiently violated. The IICI is also uniformly valid and has (weakly) shorter length than the usual confidence interval; it is never longer. The first empirical application considers a linear regression in which a coefficient is known to be nonpositive. A second empirical application considers an instrumental variables regression in which the endogeneity of a regressor is known to be nonnegative.
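As a rough illustration of the two endpoints the IICI moves between, the sketch below computes, in a toy regression where interest is in one coefficient and another is known to be nonpositive, the usual Wald interval and the equality-imposed interval (the submodel dropping the constrained regressor). The paper's actual switching rule and data are not reproduced; everything below is an assumption-laden toy.

```python
# Toy comparison of the unrestricted and equality-imposed confidence intervals.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
x1, x2 = rng.normal(size=(2, n))
y = 1.0 + 0.5 * x1 - 0.2 * x2 + rng.normal(size=n)

# Usual CI for the coefficient on x1 from the unrestricted model.
X_u = sm.add_constant(np.column_stack([x1, x2]))
ci_usual = sm.OLS(y, X_u).fit().conf_int(alpha=0.05)[1]   # row for x1

# Equality-imposed CI: the submodel where the constrained coefficient is zero.
X_e = sm.add_constant(x1)
ci_equal = sm.OLS(y, X_e).fit().conf_int(alpha=0.05)[1]

print("usual CI:", ci_usual, "equality-imposed CI:", ci_equal)
```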
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2409.09962 |
By: | Jean-Marie Dufour; Endong Wang |
Abstract: | This paper introduces a novel two-stage estimation and inference procedure for generalized impulse responses (GIRs). GIRs encompass all coefficients in a multi-horizon linear projection of future outcomes of y on its lagged values (Dufour and Renault, 1998), and include Sims' impulse responses as a special case. The conventional approach, Least Squares (LS) with heteroskedasticity- and autocorrelation-consistent covariance estimation, is less precise and often yields unreliable finite-sample tests, a problem compounded by the need to select bandwidths and kernel functions. Our two-stage method surpasses the LS approach in estimation efficiency and inference robustness. The robustness stems from our proposed covariance matrix estimates, which eliminate the need to correct for serial correlation in the multi-horizon projection residuals. The method accommodates non-stationary data and allows the projection horizon to grow with the sample size. Monte Carlo simulations demonstrate that the two-stage method outperforms the LS method. We apply it to investigate GIRs, implement multi-horizon Granger causality tests, and find that economic uncertainty exerts both short-run (1-3 months) and long-run (30 months) effects on economic activity.
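For orientation, the sketch below implements the conventional baseline the paper improves on: least-squares multi-horizon projections of y_{t+h} on current and lagged y with HAC standard errors. The simulated AR(1) data, lag order, and horizons are illustrative assumptions; the paper's two-stage estimator is not reproduced.

```python
# Baseline LS local projections with HAC covariance (the conventional approach).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
T = 400
y = np.zeros(T)
for t in range(1, T):                     # simulated AR(1) series
    y[t] = 0.6 * y[t - 1] + rng.normal()

horizons, p = range(1, 13), 2             # projection horizons, lag order
gir = []
for h in horizons:
    Y = y[p + h:]                                                 # y_{t+h}
    X = np.column_stack([y[p - j: T - h - j] for j in range(p)])  # y_t, y_{t-1}
    res = sm.OLS(Y, sm.add_constant(X)).fit(cov_type="HAC", cov_kwds={"maxlags": h})
    gir.append(res.params[1])             # coefficient on y_t at horizon h
print(np.round(gir, 3))
```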
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2409.10820 |
By: | Taras Bodnar; Nikolaus Hautsch; Yarema Okhrin; Nestor Parolya |
Abstract: | In this paper, we analyze the asymptotic behavior of the main characteristics of the mean-variance efficient frontier using random matrix theory. Our particular interest is the case in which the dimension $p$ and the sample size $n$ tend to infinity simultaneously with their ratio $p/n$ tending to a positive constant $c\in(0, 1)$. We impose neither distributional nor structural assumptions on the asset returns; the theoretical framework requires only some regularity conditions, such as the existence of fourth moments. It is shown that two of the three quantities of interest are biased and overestimated by their sample counterparts under the high-dimensional asymptotic regime, as becomes evident from the asymptotic deterministic equivalents of the sample plug-in estimators. Using these, we construct consistent estimators of the three characteristics of the efficient frontier. It is shown that the additive and/or multiplicative biases of the sample estimates are solely functions of the concentration ratio $c$. Furthermore, the asymptotic normality of the considered estimators of the parameters of the efficient frontier is proved. Verifying the theoretical results in an extensive simulation study, we show that the proposed estimator of the efficient frontier is a valuable alternative to the sample estimator for high-dimensional data. Finally, we present an empirical application in which we estimate the efficient frontier based on the stocks included in the S&P 500 index.
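A minimal sketch of the sample plug-in estimators whose high-dimensional bias the paper characterizes: the global minimum variance (GMV) portfolio mean and variance and the slope of the efficient frontier, computed from the sample mean and covariance. The simulated returns and the choice p/n = 0.4 are illustrative; the bias corrections derived in the paper are not reproduced.

```python
# Sample plug-in characteristics of the mean-variance efficient frontier.
import numpy as np

rng = np.random.default_rng(3)
n, p = 250, 100                          # sample size and dimension, p/n = 0.4
X = rng.normal(size=(n, p))              # asset returns (illustrative data)

mu = X.mean(axis=0)
S = np.cov(X, rowvar=False)
S_inv = np.linalg.inv(S)
ones = np.ones(p)

V_gmv = 1.0 / (ones @ S_inv @ ones)              # GMV portfolio variance
R_gmv = (ones @ S_inv @ mu) * V_gmv              # GMV portfolio expected return
Q = S_inv - np.outer(S_inv @ ones, ones @ S_inv) * V_gmv
slope = mu @ Q @ mu                              # slope of the efficient frontier
print(R_gmv, V_gmv, slope)
```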
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2409.15103 |
By: | Jinyong Hahn (UCLA); Zhipeng Liao (UCLA); Nan Liu (Xiamen University); Ruoyao Shi (Department of Economics, University of California Riverside) |
Abstract: | We examine econometric inference issues with Hausman instruments. The instrumental variable (IV) estimator based on the Hausman instrument has a built-in correlation across observations, which may render the textbook-style standard error invalid. We develop a standard error that is robust to this problem. The clustered standard error is not always valid, but it can be a good pragmatic compromise for dealing with the interlinkage problem if the Hausman instrument is used in econometric models in the tradition of Berry, Levinsohn, and Pakes (1995).
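The sketch below illustrates the object under discussion: a Hausman-type instrument built as the leave-own-market-out average price of the same product, followed by plain just-identified IV. The toy DGP and variable names are assumptions for illustration; the cross-market dependence built into the instrument is what motivates the paper's robust standard error, which is not reproduced here.

```python
# Constructing a Hausman-type instrument and running simple IV on a toy panel.
import numpy as np

rng = np.random.default_rng(4)
J, M = 20, 30                              # products, markets
cost = rng.normal(size=J)                  # common cost shifter by product
price = cost[None, :] + rng.normal(scale=0.5, size=(M, J))   # markets x products
share = 1.0 - 0.8 * price + rng.normal(scale=0.3, size=(M, J))

# Hausman instrument: leave-own-market-out average price of the product.
z = (price.sum(axis=0, keepdims=True) - price) / (M - 1)

# Just-identified IV of share on price with instrument z (all flattened).
y, x, zi = share.ravel(), price.ravel(), z.ravel()
X = np.column_stack([np.ones_like(x), x])
Z = np.column_stack([np.ones_like(zi), zi])
beta_iv = np.linalg.solve(Z.T @ X, Z.T @ y)
print(beta_iv)                             # [intercept, price coefficient]
```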
Keywords: | BLP, Hausman instrument, Judge instrument, Stable convergence, Uniformly valid inference
JEL: | C14 C33 C36 |
Date: | 2024–10 |
URL: | https://d.repec.org/n?u=RePEc:ucr:wpaper:202405 |
By: | Rios-Avila, Fernando (Levy Economics Institute); Siles, Leonardo (Universidad de Chile); Canavire Bacarreza, Gustavo J. (World Bank) |
Abstract: | This paper proposes a new method to estimate quantile regressions with multiple fixed effects. The method, which expands on the strategy proposed by Machado and Santos Silva (2019), allows for the inclusion of multiple fixed effects and provides several alternatives for estimating standard errors. We use Monte Carlo simulations to document the finite-sample properties of the proposed method in the presence of two sets of fixed effects. Finally, we apply the method to two examples, using macroeconomic and microeconomic data respectively, allowing for multiple fixed effects and obtaining robust results.
Keywords: | fixed effects, linear heteroskedasticity, location-scale model |
JEL: | C21 C22 C23 |
Date: | 2024–08 |
URL: | https://d.repec.org/n?u=RePEc:iza:izadps:dp17262 |
By: | Jannik Kreye; Philipp Sibbertsen |
Abstract: | We propose a test to detect a forecast accuracy breakdown in a long memory time series and provide theoretical and simulation evidence on the memory transfer from the time series to the forecast residuals. The proposed method uses a double sup-Wald test against the alternative of a structural break in the mean of an out-of-sample loss series. To address the problem of estimating the long-run variance under long memory, a robust estimator is applied. The corresponding breakpoint is obtained from a long memory robust CUSUM test. The finite-sample size and power properties of the test are assessed in a Monte Carlo simulation, which yields a monotonic power function for the fixed forecasting scheme. In our practical application, we find that the global energy crisis that began in 2021 led to a forecast break in European electricity prices, while the results for the U.S. are mixed.
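To fix ideas, the sketch below computes a plain sup-Wald statistic for a single break in the mean of an out-of-sample loss series, which is the basic ingredient of the double sup-Wald procedure. It uses an ordinary variance estimate rather than the long-memory robust long-run variance the paper requires, and the simulated loss series is purely illustrative.

```python
# Plain sup-Wald test for a break in the mean of a loss series (illustrative).
import numpy as np

def sup_wald_mean_break(loss, trim=0.15):
    T = len(loss)
    lo, hi = int(trim * T), int((1 - trim) * T)
    stats = []
    for k in range(lo, hi):
        m1, m2 = loss[:k].mean(), loss[k:].mean()
        resid = np.concatenate([loss[:k] - m1, loss[k:] - m2])
        s2 = resid.var(ddof=2)                       # naive variance estimate
        stats.append((m1 - m2) ** 2 / (s2 * (1 / k + 1 / (T - k))))
    k_hat = lo + int(np.argmax(stats))
    return max(stats), k_hat

rng = np.random.default_rng(5)
loss = np.concatenate([rng.normal(0.0, 1, 200), rng.normal(0.8, 1, 100)])
print(sup_wald_mean_break(loss))                     # statistic, estimated break
```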
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2409.07087 |
By: | Yuehao Bai; Shunzhuang Huang; Sarah Moon; Azeem Shaikh; Edward J. Vytlacil |
Abstract: | In the context of a binary outcome, treatment, and instrument, Balke and Pearl (1993, 1997) establish that the monotonicity condition of Imbens and Angrist (1994) has no identifying power beyond instrument exogeneity for average potential outcomes and average treatment effects: adding it to instrument exogeneity does not shrink the identified sets for those parameters whenever the restrictions are consistent with the distribution of the observable data. This paper shows that the same phenomenon holds in a broader setting with a multi-valued outcome, treatment, and instrument, under an extension of the monotonicity condition that we refer to as generalized monotonicity. We further show that it holds for any restriction on treatment response that is stronger than generalized monotonicity, provided these stronger restrictions do not restrict potential outcomes. Importantly, many models of potential treatments previously considered in the literature imply generalized monotonicity, including the types of monotonicity restrictions considered by Kline and Walters (2016), Kirkeboen et al. (2016), and Heckman and Pinto (2018), and the restriction that treatment selection is determined by particular classes of additive random utility models. We show through a series of examples that restrictions on potential treatments can provide identifying power beyond instrument exogeneity for average potential outcomes and average treatment effects when those restrictions imply that the generalized monotonicity condition is violated. In this way, our results shed light on the types of restrictions needed to help identify average potential outcomes and average treatment effects.
JEL: | C31 C35 C36 |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:32983 |
By: | Silvana Tiedemann (Centre for Sustainability, Hertie School); Jorge Sanchez Canales (Centre for Sustainability, Hertie School); Felix Schur (Department of Mathematics, ETH Zurich); Raffaele Sgarlato (Centre for Sustainability, Hertie School); Lion Hirth (Centre for Sustainability, Hertie School); Oliver Ruhnau (Department of Economics and Institute of Energy Economics, University of Cologne); Jonas Peters (Department of Mathematics, ETH Zurich) |
Abstract: | The price elasticity of demand can be estimated from observational data using instrumental variables (IV). However, naive IV estimators may be inconsistent in settings with autocorrelated time series. We argue that causal time graphs can simplify IV identification and help select consistent estimators. To do so, we propose to first model the equilibrium condition by an unobserved confounder, deriving a directed acyclic graph (DAG) while maintaining the assumption of a simultaneous determination of prices and quantities. We then exploit recent advances in graphical inference to derive valid IV estimators, including estimators that achieve consistency by simultaneously estimating nuisance effects. We further argue that observing significant differences between the estimates of presumably valid estimators can help to reject false model assumptions, thereby improving our understanding of underlying economic dynamics. We apply this approach to the German electricity market, estimating the price elasticity of demand on simulated and real-world data. The findings underscore the importance of accounting for structural autocorrelation in IV-based analysis. |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2409.15530 |
By: | Liang Zhong |
Abstract: | In social networks or spatial experiments, one unit's outcome often depends on another's treatment, a phenomenon called interference. Researchers are interested in not only the presence and magnitude of interference but also its pattern based on factors like distance, neighboring units, and connection strength. However, the non-random nature of these factors and complex correlations across units pose challenges for inference. This paper introduces the partial null randomization tests (PNRT) framework to address these issues. The proposed method is finite-sample valid and applicable with minimal network structure assumptions, utilizing randomization testing and pairwise comparisons. Unlike existing conditional randomization tests, PNRT avoids the need for conditioning events, making it more straightforward to implement. Simulations demonstrate the method's desirable power properties and its applicability to general interference scenarios. |
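The sketch below shows the generic randomization-test building block (a Fisher test of the sharp null of no effect using the difference in means). The paper's PNRT framework targets partial nulls about interference through pairwise comparisons; that refinement, and any network structure, is not reproduced, and the data below are an assumption for illustration.

```python
# Generic Fisher randomization test (building block, not the paper's PNRT).
import numpy as np

def randomization_test(y, d, n_draws=2000, seed=0):
    rng = np.random.default_rng(seed)
    obs = y[d == 1].mean() - y[d == 0].mean()
    count = 0
    for _ in range(n_draws):
        d_star = rng.permutation(d)                  # re-randomize treatment
        stat = y[d_star == 1].mean() - y[d_star == 0].mean()
        count += abs(stat) >= abs(obs)
    return count / n_draws                           # randomization p-value

rng = np.random.default_rng(6)
d = rng.permutation(np.repeat([0, 1], 50))
y = 0.4 * d + rng.normal(size=100)
print(randomization_test(y, d))
```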
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2409.09243 |
By: | Vainora, J. |
Abstract: | This paper proposes to use the Generalized Random Dot Product Graph model and the underlying latent positions to model parameter heterogeneity. We discuss how the Stochastic Block Model can be applied directly to model individual parameter heterogeneity, and we develop a new procedure to model pairwise parameter heterogeneity, which requires the number of distinct latent distances between unobserved communities to be small. It is proven that, asymptotically, the heterogeneity pattern can be completely recovered. Additionally, we provide three test statistics for the assumption on the number of distinct latent distances. The proposed methods are illustrated using data on a household microfinance program and the S&P 500 component stocks.
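A minimal sketch of the spectral-embedding step that underlies Generalized Random Dot Product Graph methods: embed the adjacency matrix using the top eigenpairs (ranked by absolute eigenvalue) and cluster the embedded nodes to recover blocks. The two-block toy network below is an assumption; the paper's heterogeneity tests are not reproduced.

```python
# Adjacency spectral embedding of a toy SBM, then k-means on latent positions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
n, z = 200, np.repeat([0, 1], 100)                 # nodes and true blocks
P = np.array([[0.30, 0.05], [0.05, 0.25]])[np.ix_(z, z)]
A = rng.binomial(1, P)
A = np.triu(A, 1); A = A + A.T                     # symmetric, no self-loops

d = 2                                              # embedding dimension
vals, vecs = np.linalg.eigh(A)
idx = np.argsort(np.abs(vals))[::-1][:d]           # top-d by |eigenvalue|
X_hat = vecs[:, idx] * np.sqrt(np.abs(vals[idx]))  # spectral embedding

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_hat)
# Labels are defined only up to permutation, so one of these is the accuracy.
print(np.mean(labels == z), np.mean(labels != z))
```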
Keywords: | Networks, Spectral Embedding, Clustering, Generalized Random Dot Product Graph, Stochastic Block Model |
JEL: | C10 C55 |
Date: | 2024–10–01 |
URL: | https://d.repec.org/n?u=RePEc:cam:camdae:2455 |
By: | Thiago Trafane Oliveira Santos (Central Bank of Brazil, Brasília, Brazil. Department of Economics, University of Brasilia, Brazil); Daniel Oliveira Cajueiro (Department of Economics, University of Brasilia, Brazil. National Institute of Science and Technology for Complex Systems)
Abstract: | Even though practitioners often estimate Pareto exponents by running OLS rank-size regressions, the usual recommendation is to use the Hill MLE with a small-sample correction instead, owing to its unbiasedness and efficiency. In this paper, we argue that OLS should also be applied in empirical work. On the one hand, we demonstrate that, with a small-sample correction, the OLS estimator is also unbiased. On the other hand, we show that the MLE assigns significantly greater weight to smaller observations, which suggests that the OLS estimator may outperform the MLE when the distribution is (i) strictly Pareto only in the upper tail or (ii) regularly varying rather than strictly Pareto. We substantiate our theoretical findings with Monte Carlo simulations and real-world applications, demonstrating the practical relevance of the OLS method for estimating tail exponents.
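For reference, the sketch below computes the two estimators being compared on simulated Pareto data: the OLS log-rank on log-size regression (with the familiar rank minus one-half shift of Gabaix and Ibragimov) and the Hill MLE with known threshold. The exact small-sample corrections studied in the paper are not reproduced.

```python
# OLS rank-size regression versus the Hill MLE on a simulated Pareto sample.
import numpy as np

rng = np.random.default_rng(8)
alpha, n = 1.5, 1000
x = rng.pareto(alpha, size=n) + 1.0            # Pareto sample with x_min = 1

x_sorted = np.sort(x)[::-1]
rank = np.arange(1, n + 1)

# OLS rank-size regression: log(rank - 1/2) = a - alpha * log(x).
slope = np.polyfit(np.log(x_sorted), np.log(rank - 0.5), 1)[0]
alpha_ols = -slope

# Hill MLE with known x_min = 1.
alpha_hill = n / np.sum(np.log(x_sorted))

print(alpha_ols, alpha_hill)                   # both should be near 1.5
```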
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2409.10448 |
By: | Giuseppe Cavaliere; Iliyan Georgiev; Edoardo Zanelli |
Abstract: | We consider bootstrap inference in predictive (or Granger-causality) regressions when the parameter of interest may lie on the boundary of the parameter space, here defined by means of a smooth inequality constraint. For instance, this situation occurs when the definition of the parameter space allows for the cases of either no predictability or sign-restricted predictability. We show that in this context constrained estimation gives rise to bootstrap statistics whose limit distribution is, in general, random, and thus distinct from the limit null distribution of the original statistics of interest. This is due to both (i) the possible location of the true parameter vector on the boundary of the parameter space, and (ii) the possible non-stationarity of the posited predicting (resp. Granger-causing) variable. We discuss a modification of the standard fixed-regressor wild bootstrap scheme where the bootstrap parameter space is shifted by a data-dependent function in order to eliminate the portion of limiting bootstrap randomness attributable to the boundary, and prove validity of the associated bootstrap inference under non-stationarity of the predicting variable as the only remaining source of limiting bootstrap randomness. Our approach, which is initially presented in a simple location model, has bearing on inference in parameter-on-the-boundary situations beyond the predictive regression problem. |
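As context for the scheme the paper modifies, the sketch below implements a standard (unshifted) fixed-regressor wild bootstrap test of no predictability in a toy predictive regression with a near-nonstationary predictor. The data-dependent shift of the bootstrap parameter space that handles the boundary is not reproduced, and all names and the DGP are assumptions for illustration.

```python
# Standard fixed-regressor wild bootstrap for y_t = a + b*x_{t-1} + e_t, H0: b = 0.
import numpy as np

def _ols_t(y, X):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta
    s2 = e @ e / (len(y) - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1], beta[1] / se                 # slope and its t-statistic

def fr_wild_bootstrap_pvalue(y, x_lag, B=999, seed=0):
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones_like(x_lag), x_lag])
    _, t_obs = _ols_t(y, X)
    e_null = y - y.mean()                        # residuals imposing b = 0
    count = 0
    for _ in range(B):
        y_star = y.mean() + e_null * rng.choice([-1.0, 1.0], size=len(y))
        _, t_star = _ols_t(y_star, X)            # regressors held fixed
        count += abs(t_star) >= abs(t_obs)
    return count / B

rng = np.random.default_rng(9)
x = np.cumsum(rng.normal(size=301))              # highly persistent predictor
y = rng.normal(size=300)                         # no predictability under H0
print(fr_wild_bootstrap_pvalue(y, x[:-1]))
```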
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2409.12611 |
By: | Endong Wang |
Abstract: | We propose a structural-model-free methodology to analyze two types of macroeconomic counterfactuals related to policy path deviations: hypothetical trajectories and policy interventions. Our model-free approach is built on a structural vector moving-average (SVMA) model that relies solely on the identification of policy shocks, thereby eliminating the need to specify an entire structural model. Analytical solutions are derived for the counterfactual parameters, and statistical inference for the parameter estimates is provided using the Delta method. By utilizing external instruments, we introduce a projection-based method for the identification, estimation, and inference of these parameters, connecting our counterfactual analysis with the Local Projection literature. A simulation-based approach with a nonlinear model is provided to aid in addressing the Lucas critique. The methodology is applied in three counterfactual studies of U.S. monetary policy: (1) a historical scenario analysis for a hypothetical interest rate path in the post-pandemic era, (2) a future scenario analysis under either a hawkish or a dovish interest rate policy, and (3) an evaluation of the policy intervention effect of an oil price shock by zeroing out the systematic responses of the interest rate.
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2409.09577 |
By: | Bhuyan, Prajamitra; Jana, Kaushik; McCoy, Emma J. |
Abstract: | Transport engineers employ various interventions to enhance traffic-network performance. Quantifying the impact of Cycle Superhighways is complicated by the non-random assignment of the intervention over the transport network. Moreover, treatment effects on asymmetric and heavy-tailed distributions are better reflected at the extreme tails than at the median. We propose a novel method to estimate the treatment effect at the extreme tails, incorporating the heavy-tailed features of the outcome distribution. Analysis of London transport data using the proposed method indicates that extreme traffic flow increased substantially after the Cycle Superhighways came into operation.
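The sketch below shows only the plain high-quantile regression that such an analysis starts from; the paper's extreme-value adjustment for heavy tails and its potential-outcome framework are not reproduced. The simulated flow data, the treatment indicator, and the 95th percentile are illustrative assumptions.

```python
# Plain quantile regression at a high quantile on simulated heavy-tailed data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
n = 2000
d = rng.binomial(1, 0.5, size=n)                      # intervention indicator
flow = np.exp(1.0 + 0.3 * d + rng.standard_t(4, size=n))  # heavy-tailed outcome
df = pd.DataFrame({"flow": flow, "treated": d})

fit = smf.quantreg("flow ~ treated", df).fit(q=0.95)  # effect at the 95th percentile
print(fit.params)
```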
Keywords: | causality; extreme value analysis; heavy-tailed distribution; potential outcome; quantile regression; transport engineering
JEL: | C1 |
Date: | 2023–11–01 |
URL: | https://d.repec.org/n?u=RePEc:ehl:lserod:121622 |
By: | Tomás E. Caravello; Alisdair McKay; Christian K. Wolf |
Abstract: | In a rich family of linearized structural macroeconomic models, the counterfactual evolution of the macro-economy under alternative policy rules is pinned down by just two objects: first, reduced-form projections with respect to a large information set; and second, the dynamic causal effects of policy shocks. In particular, no assumptions about the structural shocks affecting the economy are needed. We propose to recover these two sufficient statistics using a "VAR-Plus" approach, and apply it to evaluate several monetary policy counterfactuals.
JEL: | E32 E58 E61 |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:32988 |
By: | Didier Sornette (Risks-X, Southern University of Science and Technology (SUSTech); Swiss Finance Institute); Ran Wei (ETH Zürich) |
Abstract: | We introduce two ratio-based robust test statistics, max-robust-sum (MRS) and sum-robust-sum (SRS), designed to enhance the robustness of outlier detection in samples with exponential or Pareto tails. We also reintroduce the inward sequential testing method, which had largely been set aside since the introduction of outward testing, and show that the MRS and SRS tests reduce the susceptibility of the inward approach to masking, making the inward test as powerful as, and potentially less error-prone than, outward tests. Moreover, inward testing does not require the complicated type I error control of outward tests. A comprehensive comparison of the test statistics is carried out, assessing the performance of the proposed tests in both block and sequential testing and contrasting it with that of classical test statistics across various data scenarios. In five case studies (financial crashes, nuclear power generation accidents, stock market returns, epidemic fatalities, and city sizes), significant outliers are detected and related to the concept of 'Dragon King' events, defined as meaningful outliers that arise from a unique generating mechanism.
Keywords: | Outlier detection, Exponential sample, Pareto sample, Dragon King, Extreme Value Theory |
JEL: | C10 C46 C49 G01 |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:chf:rpseri:rp2448 |
By: | Zhang, Pengcheng; Chen, Zezhun; Tzougas, George; Calderín–Ojeda, Enrique; Dassios, Angelos; Wu, Xueyuan |
Abstract: | The objective of this article is to propose a comprehensive solution for analyzing multidimensional non-life claim count data that exhibits time and cross-dependence, as well as zero inflation. To achieve this, we introduce a multivariate INAR(1) model, with the innovation term characterized by either a multivariate zero-inflated Poisson distribution or a multivariate zero-inflated hurdle Poisson distribution. Additionally, our modeling framework accounts for the impact of individual and coverage-specific covariates on the mean parameters of each model, thereby facilitating the computation of customized insurance premiums based on varying risk profiles. To estimate the model parameters, we employ a novel expectation-maximization (EM) algorithm. Our model demonstrates satisfactory performance in the analysis of European motor third-party liability claim count data. |
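As a small illustration of the zero-inflation building block used above, the sketch below runs an EM algorithm for a univariate zero-inflated Poisson model. The multivariate INAR(1) structure, the hurdle variant, and covariate-dependent means from the paper are not reproduced; the simulated data and starting values are assumptions.

```python
# EM algorithm for a univariate zero-inflated Poisson (ZIP) model.
import numpy as np

def zip_em(y, n_iter=200):
    pi, lam = 0.3, max(y.mean(), 0.1)            # starting values
    for _ in range(n_iter):
        # E-step: posterior probability that an observed zero is structural.
        w = np.where(y == 0, pi / (pi + (1 - pi) * np.exp(-lam)), 0.0)
        # M-step: update the zero-inflation probability and the Poisson mean.
        pi = w.mean()
        lam = np.sum((1 - w) * y) / np.sum(1 - w)
    return pi, lam

rng = np.random.default_rng(11)
n = 5000
structural_zero = rng.binomial(1, 0.25, size=n)
y = np.where(structural_zero == 1, 0, rng.poisson(1.8, size=n))
print(zip_em(y))                                 # roughly (0.25, 1.8)
```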
JEL: | C1 |
Date: | 2024–09–19 |
URL: | https://d.repec.org/n?u=RePEc:ehl:lserod:124317 |