on Econometrics
By: | Lihua Lei; Timothy Sudijono |
Abstract: | The synthetic control method is often applied to problems with one treated unit and a small number of control units. A common inferential task in this setting is to test null hypotheses regarding the average treatment effect on the treated. Inference procedures that are justified asymptotically are often unsatisfactory due to (1) small sample sizes that render large-sample approximation fragile and (2) simplification of the estimation procedure that is implemented in practice. An alternative is permutation inference, which is related to a common diagnostic called the placebo test. It has provable Type-I error guarantees in finite samples without simplification of the method, when the treatment is uniformly assigned. Despite this robustness, the placebo test suffers from low resolution since the null distribution is constructed from only $N$ reference estimates, where $N$ is the sample size. This creates a barrier for statistical inference at a common level like $\alpha = 0.05$, especially when $N$ is small. We propose a novel leave-two-out procedure that bypasses this issue, while still maintaining the same finite-sample Type-I error guarantee under uniform assignment for a wide range of $N$. Unlike the placebo test whose Type-I error always equals the theoretical upper bound, our procedure often achieves a lower unconditional Type-I error than theory suggests; this enables useful inference in the challenging regime when $\alpha |
Date: | 2024–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2401.07152&r=ecm |
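The placebo (permutation) test this abstract builds on can be sketched in a few lines. The function below is a generic illustration of the classical placebo p-value, not the authors' leave-two-out procedure, and the effect estimates are made up:

```python
import numpy as np

def placebo_p_value(effects, treated_idx):
    """Permutation (placebo) p-value: the share of units whose absolute
    estimated effect is at least as large as the treated unit's.
    Finite-sample valid under uniform treatment assignment."""
    effects = np.asarray(effects, dtype=float)
    treated = abs(effects[treated_idx])
    return np.mean(np.abs(effects) >= treated)

# With N units, the smallest attainable p-value is 1/N, so at
# alpha = 0.05 the test can never reject unless N >= 20 -- the
# "low resolution" barrier the paper addresses.
effects = [2.5, 0.3, -0.4, 0.1, 0.2]  # unit 0 is treated (illustrative)
p = placebo_p_value(effects, treated_idx=0)
```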
By: | Antonio Cosma (DEM, Université du Luxembourg); Katrin Hussinger (DEM, Université du Luxembourg); Gautam Tripathi (DEM, Université du Luxembourg) |
Abstract: | We consider the estimation of finite dimensional parameters identified via a system of conditional moment equalities when at least one of the endogenous variables (outcomes and/or explanatory variables) is missing at random for some individuals in the sample. We derive the semiparametric efficiency bound for estimating the parameters and use it to demonstrate that efficiency gains occur only if there exists at least one endogenous variable that is nonmissing, i.e., observed for all individuals in the sample. We show how to construct “doubly robust” estimators and propose an estimator that achieves the efficiency bound. A simulation study reveals that our estimator works well in medium-sized samples for point estimation as well as for inference. To see what insights our estimator can deliver in empirical applications with very large sample sizes, we revisit the female labor supply model of Angrist and Evans (1998) and show that if there is even moderate missingness in female labor income (the outcome variable), then having more than 200,000 observations is not enough for a researcher using inverse propensity score weighted GMM to find a statistically significant negative effect of having a third child (the endogenous explanatory variable) on labor income. In contrast, our semiparametrically efficient estimator delivers point estimates of this effect that are comparable to the GMM estimates and are statistically significant. |
Keywords: | Conditional moment restrictions, Double robustness, Efficiency bound, Efficient estimation, Smoothed empirical likelihood, Missing at random, Missing endogenous variables. |
JEL: | C14 C30 |
Date: | 2024 |
URL: | http://d.repec.org/n?u=RePEc:luc:wpaper:24-01&r=ecm |
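To see why simply dropping missing observations biases an estimator while inverse propensity score weighting does not, here is a minimal simulated sketch. The data-generating process and logistic propensity are invented for illustration; the paper's efficient estimator is considerably more involved:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)
y_full = 1.0 + 2.0 * x + rng.normal(size=n)   # outcome before missingness
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + x)))      # true observation propensity
observed = rng.uniform(size=n) < p_obs        # missing at random given x

# Inverse propensity score weighting: reweight observed outcomes by 1/p(x)
# to recover the full-population mean E[y] = 1.
ipw_mean = np.mean(observed * y_full / p_obs)
naive_mean = y_full[observed].mean()          # biased: discards missing cases
```

Because observation is more likely when x (and hence y) is large, the complete-case mean overshoots, while the weighted mean stays near the truth.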
By: | Haoyu Wei; Hengrui Cai; Chengchun Shi; Rui Song |
Abstract: | This paper provides robust estimators and efficient inference of causal effects involving multiple interacting mediators. Most existing works either impose a linear model assumption among the mediators or are restricted to handle conditionally independent mediators given the exposure. To overcome these limitations, we define causal and individual mediation effects in a general setting, and employ a semiparametric framework to develop quadruply robust estimators for these causal effects. We further establish the asymptotic normality of the proposed estimators and prove their local semiparametric efficiencies. The proposed method is empirically validated via simulated and real datasets concerning psychiatric disorders in trauma survivors. |
Date: | 2024–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2401.05517&r=ecm |
By: | Christis Katsouris |
Abstract: | This article studies identification and estimation for the network vector autoregressive model with nonstationary regressors. In particular, network dependence is characterized by a nonstochastic adjacency matrix. The information set includes a stationary regressand and a node-specific vector of nonstationary regressors, both observed at the same equally spaced time frequencies. Our proposed econometric specification corresponds to the NVAR model under time series nonstationarity and relies on the local-to-unity parametrization to capture the unknown form of persistence of these node-specific regressors. Robust econometric estimation is achieved using an IVX-type estimator, and the asymptotic theory for the augmented vector of regressors is developed under a double asymptotic regime in which both the network size and the time dimension tend to infinity. |
Date: | 2024–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2401.04050&r=ecm |
By: | Xuanling Yang; Dong Li; Ting Zhang |
Abstract: | Economic and financial time series can feature locally explosive behavior when a bubble is formed. The economic or financial bubble, especially its dynamics, is an intriguing topic that has attracted longstanding attention. To illustrate the dynamics of the local explosion itself, the paper presents a novel, simple, yet useful time series model, called the stochastic nonlinear autoregressive model, which is always strictly stationary and geometrically ergodic and can create the long swings or persistence observed in many macroeconomic variables. When a nonlinear autoregressive coefficient is outside of a certain range, the model has periodically explosive behavior and can then be used to portray bubble dynamics. Further, the quasi-maximum likelihood estimation (QMLE) of our model is considered, and its strong consistency and asymptotic normality are established under minimal assumptions on the innovations. A new diagnostic checking statistic is developed for assessing model adequacy. In addition, two methods for bubble tagging are proposed, one from the residual perspective and the other from the null-state perspective. Monte Carlo simulation studies are conducted to assess the performance of the QMLE and the two bubble tagging methods in finite samples. Finally, the usefulness of the model is illustrated by an empirical application to the monthly Hang Seng Index. |
Date: | 2024–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2401.07038&r=ecm |
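A toy threshold autoregression (not the authors' stochastic nonlinear AR model, whose exact form the abstract does not specify) illustrates how locally explosive phases can coexist with global stability: the process grows at an explosive root until it crosses a threshold, then collapses and restarts near zero.

```python
import numpy as np

rng = np.random.default_rng(1)
T, rho, threshold = 500, 1.05, 10.0   # rho > 1: locally explosive root
y = np.zeros(T)
y[0] = 5.0
for t in range(1, T):
    if abs(y[t - 1]) < threshold:
        y[t] = rho * y[t - 1] + rng.normal()   # bubble expansion phase
    else:
        y[t] = rng.normal()                    # collapse: restart near zero

n_bursts = int(np.sum(np.abs(y) >= threshold))  # episodes that burst
```

Despite the explosive root inside the band, the collapse mechanism keeps the path bounded, mimicking the "periodically explosive yet stationary" behavior described in the abstract.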
By: | Bo E. Honore |
Abstract: | Amemiya (1973) proposed a "consistent initial estimator" for the parameters in a censored regression model with normal errors. This paper demonstrates that a similar approach can be used to construct moment conditions for fixed-effects versions of the model considered by Amemiya. This result suggests estimators for models that have not previously been considered. |
Date: | 2024–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2401.04803&r=ecm |
By: | Daniel Dzikowski; Carsten Jentsch |
Abstract: | While the seasonality inherent in raw macroeconomic data is commonly removed by seasonal adjustment techniques before the data are used for structural inference, this approach may distort valuable information contained in the data. As an alternative to the structural vector autoregressions (SVARs) commonly applied to seasonally adjusted macroeconomic data, this paper offers an approach in which the periodicity of not seasonally adjusted raw data is modeled directly by structural periodic vector autoregressions (SPVARs), which build on periodic vector autoregressions (PVARs) as the reduced-form model. In comparison to a VAR, the PVAR allows not only for periodically time-varying intercepts, but also for periodic autoregressive parameters and innovation variances. As this greater flexibility also increases the number of parameters, we propose linearly constrained estimation techniques. Overall, SPVARs capture seasonal effects and enable a direct and more refined analysis of seasonal patterns in macroeconomic data, which can provide useful insights into their dynamics. Moreover, based on such SPVARs, we propose a general concept for structural impulse response analyses that takes seasonal patterns directly into account. We provide asymptotic theory for estimators of periodic reduced-form parameters and structural impulse responses under flexible linear restrictions. Further, for the construction of confidence intervals, we propose residual-based (seasonal) bootstrap methods that allow for general forms of seasonality in the data and prove their bootstrap consistency. A real-data application on industrial production, inflation and the federal funds rate is presented, showing that useful information about the data structure can be lost when common seasonal adjustment methods are used. |
Date: | 2024–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2401.14545&r=ecm |
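The reduced-form PVAR idea can be previewed in the simplest univariate case: a periodic AR(1) whose coefficient depends on the season, recovered by season-by-season least squares. This is an illustrative sketch, not the paper's constrained estimator, and the coefficients are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
S, T = 4, 4000                          # 4 "seasons", T observations
phi = np.array([0.2, 0.5, -0.3, 0.7])  # season-specific AR(1) coefficients
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi[t % S] * y[t - 1] + rng.normal()

# Per-season least squares: regress y_t on y_{t-1} separately for each
# season, the simplest periodic analogue of equation-by-equation OLS.
phi_hat = np.empty(S)
for s in range(S):
    t_idx = np.arange(1, T)[(np.arange(1, T) % S) == s]
    x, z = y[t_idx - 1], y[t_idx]
    phi_hat[s] = (x @ z) / (x @ x)
```

Seasonal adjustment would average these four regimes away; modeling them directly, as the SPVAR does in the multivariate case, preserves the periodic structure.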
By: | Zongwu Cai (Department of Economics, University of Kansas, Lawrence, KS 66045, USA); Ying Fang (The Wang Yanan Institute for Studies in Economics, Xiamen University, Xiamen, Fujian 361005, China); Dingshi Tian (Department of Statistics, School of Statistics and Mathematics, Zhongnan University of Economics and Law, Wuhan, Hubei 430073, China) |
Abstract: | The estimation and model selection of the conditional autoregressive value at risk (CAViaR) model may be computationally intensive and even impractical when the true order of the quantile autoregressive components or the dimension of the other regressors is high. On the other hand, automatic variable selection methods cannot be applied directly to this problem because the quantile lag components are latent. In this paper, we propose to identify the optimal CAViaR model using a two-step approach. The estimation procedure consists of an approximation of the conditional quantile in the first step, followed by an adaptive Lasso penalized quantile regression on the regressors as well as the estimated quantile lag components in the second step. We show that under some mild regularity conditions, the proposed adaptive Lasso penalized quantile estimators enjoy the oracle properties. Finally, the proposed method is illustrated by a Monte Carlo simulation study and applied to the daily S&P 500 return series. |
Keywords: | CAViaR model; Adaptive Lasso; Model selection; Tail risk. |
JEL: | C32 C51 C58 |
Date: | 2024–01 |
URL: | http://d.repec.org/n?u=RePEc:kan:wpaper:202403&r=ecm |
By: | Efthymios Pavlidis |
Abstract: | Periodically collapsing bubbles, if they exist, induce asymmetric dynamics in asset prices. In this paper, I show that unit root quantile autoregressive models can approximate such dynamics by allowing the largest autoregressive root to take values below unity at low quantiles, which correspond to price crashes, and above unity at upper quantiles, which correspond to bubble expansions. On this basis, I employ two unit root tests based on quantile regressions to detect bubbles. Monte Carlo simulations suggest that the two tests have good size and power properties, and can outperform recursive least-squares-based tests that allow for time variation in persistence. The merits of the two tests are further illustrated in three empirical applications that examine Bitcoin, U.S. equity and U.S. housing markets. In the empirical applications, special attention is given to the issue of controlling for economic fundamentals. The estimation results indicate the presence of asymmetric dynamics that closely match those of the simulated bubble processes. |
Keywords: | rational bubbles, unit root quantile autoregressions, cryptocurrencies, U.S. house prices, S&P 500 |
JEL: | C12 C22 G10 R30 |
Date: | 2024 |
URL: | http://d.repec.org/n?u=RePEc:lan:wpaper:404203101&r=ecm |
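The key idea of a quantile-dependent autoregressive root can be demonstrated with a toy random-coefficient regression and a brute-force check-loss minimizer (an illustration of the mechanism, not the paper's unit root tests; the data-generating process is invented):

```python
import numpy as np

def qar_slope(x, z, q, grid):
    """Quantile regression slope through the origin: pick the candidate b
    minimizing the check loss sum_t rho_q(z_t - b * x_t), where
    rho_q(u) = u * (q - 1{u < 0})."""
    best, best_loss = grid[0], np.inf
    for b in grid:
        u = z - b * x
        loss = np.sum(u * (q - (u < 0)))
        if loss < best_loss:
            best, best_loss = b, loss
    return best

# Toy regime process: the persistence is 0.8 in "crash" periods and 1.05
# in "expansion" periods, mimicking quantile-dependent roots.
rng = np.random.default_rng(0)
n = 4000
x = rng.uniform(1.0, 5.0, n)                        # lagged level, kept positive
a = np.where(rng.uniform(size=n) < 0.5, 0.8, 1.05)  # regime-specific root
z = a * x + 0.1 * rng.normal(size=n)

grid = np.arange(0.5, 1.31, 0.01)
b_low = qar_slope(x, z, q=0.25, grid=grid)    # below unity at low quantiles
b_high = qar_slope(x, z, q=0.75, grid=grid)   # above unity at upper quantiles
```

The fitted slope dips below one at the lower quantile and exceeds one at the upper quantile, exactly the asymmetry the tests in the paper are designed to detect.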
By: | Sourabh Balgi; Adel Daoud; José M. Peña; Geoffrey T. Wodtke; Jesse Zhou |
Abstract: | Social science theories often postulate causal relationships among a set of variables or events. Although directed acyclic graphs (DAGs) are increasingly used to represent these theories, their full potential has not yet been realized in practice. As non-parametric causal models, DAGs require no assumptions about the functional form of the hypothesized relationships. Nevertheless, to simplify the task of empirical evaluation, researchers tend to invoke such assumptions anyway, even though they are typically arbitrary and do not reflect any theoretical content or prior knowledge. Moreover, functional form assumptions can engender bias, whenever they fail to accurately capture the complexity of the causal system under investigation. In this article, we introduce causal-graphical normalizing flows (cGNFs), a novel approach to causal inference that leverages deep neural networks to empirically evaluate theories represented as DAGs. Unlike conventional approaches, cGNFs model the full joint distribution of the data according to a DAG supplied by the analyst, without relying on stringent assumptions about functional form. In this way, the method allows for flexible, semi-parametric estimation of any causal estimand that can be identified from the DAG, including total effects, conditional effects, direct and indirect effects, and path-specific effects. We illustrate the method with a reanalysis of Blau and Duncan's (1967) model of status attainment and Zhou's (2019) model of conditional versus controlled mobility. To facilitate adoption, we provide open-source software together with a series of online tutorials for implementing cGNFs. The article concludes with a discussion of current limitations and directions for future development. |
Date: | 2024–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2401.06864&r=ecm |
By: | Makoto Takahashi; Yuta Yamauchi; Toshiaki Watanabe; Yasuhiro Omori |
Abstract: | Forecasting volatility and quantiles of financial returns is essential for accurately measuring financial tail risks, such as value-at-risk and expected shortfall. The critical elements in these forecasts involve understanding the distribution of financial returns and accurately estimating volatility. This paper introduces an advancement to the traditional stochastic volatility model, termed the realized stochastic volatility model, which integrates realized volatility as a precise estimator of volatility. To capture the well-known characteristics of return distribution, namely skewness and heavy tails, we incorporate three types of skew-t distributions. Among these, two distributions include the skew-normal feature, offering enhanced flexibility in modeling the return distribution. We employ a Bayesian estimation approach using the Markov chain Monte Carlo method and apply it to major stock indices. Our empirical analysis, utilizing data from US and Japanese stock indices, indicates that the inclusion of both skewness and heavy tails in daily returns significantly improves the accuracy of volatility and quantile forecasts. |
Date: | 2024–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2401.13179&r=ecm |
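The role of skewness in tail-risk forecasts can be previewed with SciPy's skew-normal distribution. This is a simplification: the paper uses skew-t distributions inside a full realized stochastic volatility model, and the volatility and shape values below are invented:

```python
from scipy.stats import skewnorm

sigma = 0.012          # assumed one-day volatility forecast (illustrative)
a = -3.0               # negative shape: left skew, fatter crash tail
var_99 = -sigma * skewnorm.ppf(0.01, a)        # 1% value-at-risk, left-skewed
var_99_sym = -sigma * skewnorm.ppf(0.01, 0.0)  # symmetric (normal) benchmark
# Ignoring left skewness understates the crash quantile: var_99 > var_99_sym.
```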
By: | Chen, Le Yu; Oparina, Ekaterina; Powdthavee, Nattavudh; Srisuma, Sorawoot |
Abstract: | Ordered probit and logit models have frequently been used to estimate the mean ranking of happiness outcomes (and other ordinal data) across groups. However, it has recently been highlighted that such a ranking may not be identified in most happiness applications. We suggest researchers focus on median comparisons instead of the mean, because the median rank can be identified even if the mean rank is not. Furthermore, median ranks in probit and logit models can be readily estimated using standard statistical software. The median ranking, as well as rankings for other quantiles, can also be estimated semiparametrically, and we provide a new constrained mixed integer optimization procedure for implementation. We apply it to estimate a happiness equation using US General Social Survey data. |
Keywords: | median regression; mixed integer optimization; ordered-response model; quantile regression; subjective well-being |
JEL: | C25 C61 I31 |
Date: | 2022–08–01 |
URL: | http://d.repec.org/n?u=RePEc:ehl:lserod:115556&r=ecm |
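Why the median rank is easier to identify than the mean: the median predicted category depends only on where the cumulative probabilities cross 0.5, so it does not depend on any cardinal scaling of the ordinal labels. A minimal ordered-probit sketch (cutpoints and linear indices invented for illustration):

```python
from math import erf, sqrt

def ordered_probit_median(xb, cuts):
    """Median predicted category in an ordered probit: the smallest k whose
    cumulative probability P(Y <= k | x) = Phi(c_k - xb) reaches 0.5.
    Unlike the mean ranking, this is invariant to any monotone relabeling
    of the category values."""
    cdf = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF
    cum = [cdf(c - xb) for c in cuts] + [1.0]         # top category closes at 1
    return next(k for k, p in enumerate(cum) if p >= 0.5)

cuts = [-1.0, 0.0, 1.0]                              # four categories, 0..3
m_low = ordered_probit_median(xb=-2.0, cuts=cuts)    # low index -> category 0
m_high = ordered_probit_median(xb=2.0, cuts=cuts)    # high index -> category 3
```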
By: | Xinshuai Dong; Haoyue Dai; Yewen Fan; Songyao Jin; Sathyamoorthy Rajendran; Kun Zhang |
Abstract: | Financial data are typically time series and thus suffer from three fundamental issues: the mismatch in time resolution, the time-varying nature of the distribution (nonstationarity), and causal factors that are important but unknown or unobserved. In this paper, we take a causal perspective to systematically examine these three demons in finance. Specifically, we reexamine these issues in the context of causality, which gives rise to a novel and inspiring understanding of how they can be addressed. Following this perspective, we provide systematic solutions to these problems, which we hope will serve as a foundation for future research in the area. |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2401.05414&r=ecm |
By: | Rhys Bernard, David; Bryan, Gharad; Chabé-Ferret, Sylvain; De Quidt, Jonathan; Fliegner, Jasmin; Rathelot, Roland |
Abstract: | The use of observational methods remains common in program evaluation. How much should we trust these studies, which lack clear identifying variation? We propose adjusting confidence intervals to incorporate the uncertainty due to observational bias. Using data from 44 development RCTs with imperfect compliance (ICRCTs), we estimate the parameters required to construct our confidence intervals. The results show that, after accounting for potential bias, observational studies have low effective power. Using our adjusted confidence intervals, a hypothetical observational study with infinite sample size has a minimum detectable effect size of over 0.3 standard deviations. We conclude that, given current evidence, observational studies are uninformative about many programs that in truth have important effects. There is a silver lining: collecting data from more ICRCTs may help reduce uncertainty about bias and increase the effective power of observational program evaluation in the future. |
Date: | 2024–01 |
URL: | http://d.repec.org/n?u=RePEc:tse:wpaper:128965&r=ecm |
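The headline calculation can be mimicked by inflating the usual standard error with a bias term. This is a stylized sketch of the idea, not the paper's exact procedure, and all numbers are illustrative:

```python
import numpy as np

def bias_adjusted_ci(estimate, se, bias_sd, z=1.96):
    """Widen a conventional confidence interval to account for unidentified
    observational bias, treated here as an independent mean-zero disturbance
    with standard deviation bias_sd."""
    half = z * np.sqrt(se**2 + bias_sd**2)
    return estimate - half, estimate + half

# Even as se -> 0 (infinite sample size), the interval keeps width
# 2 * 1.96 * bias_sd, so modest true effects can remain undetectable.
lo, hi = bias_adjusted_ci(estimate=0.10, se=0.0, bias_sd=0.08)
```

With a bias standard deviation of 0.08, even a true effect of 0.10 standard deviations stays statistically insignificant at infinite sample size, which is the sense in which observational studies have low effective power.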