NEP: New Economics Papers on Econometrics |
By: | João Nicolau; Paulo M. M. Rodrigues |
Abstract: | This paper introduces a flexible framework for the estimation of the conditional tail index of heavy tailed distributions. In this framework, the tail index is computed from an auxiliary linear regression model that facilitates estimation and inference based on established econometric methods, such as ordinary least squares (OLS), least absolute deviations, or M-estimation. We show theoretically and via simulations that OLS provides interesting results. Our Monte Carlo results highlight the adequate finite sample properties of the OLS tail index estimator computed from the proposed new framework and contrast its behavior to that of tail index estimates obtained by maximum likelihood estimation of exponential regression models, which is one of the approaches currently in use in the literature. An empirical analysis of the impact of determinants of the conditional left- and right-tail indexes of commodities' return distributions highlights the empirical relevance of our proposed approach. The novel framework's flexibility allows for extensions and generalizations in various directions, empowering researchers and practitioners to straightforwardly explore a wide range of research questions. |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2409.13531 |
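The auxiliary regression itself is not spelled out in the abstract. As a hedged flavor of OLS-based tail-index estimation, the sketch below uses the classic log-log rank-size regression with the rank − 1/2 correction (Gabaix and Ibragimov, 2011); it illustrates the idea, not the authors' framework.

```python
import numpy as np

def ols_tail_index(x, k):
    # OLS tail-index estimate from the k largest observations of x:
    # regress log(rank - 1/2) on log(order statistic); the negative
    # slope estimates the tail index.
    tail = np.sort(x)[-k:][::-1]              # k largest, descending
    ranks = np.arange(1, k + 1)
    y = np.log(ranks - 0.5)
    X = np.column_stack([np.ones(k), np.log(tail)])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return -beta[1]

rng = np.random.default_rng(0)
x = rng.pareto(3.0, size=5000) + 1.0          # Pareto, tail index 3
print(ols_tail_index(x, k=500))               # should be near 3
```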
By: | Julius Owusu |
Abstract: | Statistical inference of heterogeneous treatment effects (HTEs) across predefined subgroups is challenging when units interact because treatment effects may vary by pre-treatment variables, post-treatment exposure variables (that measure the exposure to other units' treatment statuses), or both. Thus, the conventional HTEs testing procedures may be invalid under interference. In this paper, I develop statistical methods to infer HTEs and disentangle the drivers of treatment effects heterogeneity in populations where units interact. Specifically, I incorporate clustered interference into the potential outcomes model and propose kernel-based test statistics for the null hypotheses of (i) no HTEs by treatment assignment (or post-treatment exposure variables) for all pre-treatment variables values and (ii) no HTEs by pre-treatment variables for all treatment assignment vectors. I recommend a multiple-testing algorithm to disentangle the source of heterogeneity in treatment effects. I prove the asymptotic properties of the proposed test statistics. Finally, I illustrate the application of the test procedures in an empirical setting using an experimental data set from a Chinese weather insurance program. |
Date: | 2024–10 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2410.00733 |
By: | Eugene Dettaa; Endong Wang |
Abstract: | This paper presents a Wald test for multi-horizon Granger causality within a high-dimensional sparse Vector Autoregression (VAR) framework. The null hypothesis focuses on the causal coefficients of interest in a local projection (LP) at a given horizon. Nevertheless, the post-double-selection method on LP may not be applicable in this context, as a sparse VAR model does not necessarily imply a sparse LP for horizon h>1. To validate the proposed test, we develop two types of de-biased estimators for the causal coefficients of interest, both relying on first-step machine learning estimators of the VAR slope parameters. The first estimator is derived from the Least Squares method, while the second is obtained through a two-stage approach that offers potential efficiency gains. We further derive heteroskedasticity- and autocorrelation-consistent (HAC) inference for each estimator. Additionally, we propose a robust inference method for the two-stage estimator, eliminating the need to correct for serial correlation in the projection residuals. Monte Carlo simulations show that the two-stage estimator with robust inference outperforms the Least Squares method in terms of the Wald test size, particularly for longer projection horizons. We apply our methodology to analyze the interconnectedness of policy-related economic uncertainty among a large set of countries in both the short and long run. Specifically, we construct a causal network to visualize how economic uncertainty spreads across countries over time. Our empirical findings reveal, among other insights, that in the short run (1 and 3 months), the U.S. influences China, while in the long run (9 and 12 months), China influences the U.S. Identifying these connections can help anticipate a country's potential vulnerabilities and propose proactive solutions to mitigate the transmission of economic uncertainty. |
Date: | 2024–10 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2410.04330 |
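The de-biased high-dimensional machinery is beyond a short sketch, but the basic object — a Wald test on the causal coefficients in a local projection at horizon h with HAC errors — can be illustrated in a low-dimensional setting. The lag length, HAC bandwidth, and function names below are assumptions for illustration, not the authors' estimator.

```python
import numpy as np
import statsmodels.api as sm

def lp_granger_wald(y, x, h, p=4):
    # Local projection of y_{t+h} on a constant and p lags each of y
    # and x; Wald test that all x-lag coefficients are zero (HAC cov).
    T = len(y)
    rows = [np.r_[y[t + h], 1.0, y[t - np.arange(p)], x[t - np.arange(p)]]
            for t in range(p, T - h)]
    Z = np.asarray(rows)
    res = sm.OLS(Z[:, 0], Z[:, 1:]).fit(cov_type='HAC',
                                        cov_kwds={'maxlags': h + p})
    R = np.zeros((p, 1 + 2 * p)); R[:, -p:] = np.eye(p)
    return res.wald_test(R, scalar=True)

rng = np.random.default_rng(1)
x = rng.standard_normal(400)
y = np.zeros(400)
for t in range(1, 400):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.standard_normal()
print(lp_granger_wald(y, x, h=3))   # rejects non-causality
```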
By: | Zhe Sun; Yundong Tu |
Abstract: | The modal factor model represents a new factor model for dimension reduction in high-dimensional panel data. Unlike the approximate factor model, which targets the mean factors, it captures factors that influence the conditional mode of the distribution of the observables. Statistical inference is developed with the aid of mode estimation, where the modal factors and the loadings are estimated through maximizing a kernel-type objective function. An easy-to-implement alternating maximization algorithm is designed to obtain the estimators numerically. Two model selection criteria are further proposed to determine the number of factors. The asymptotic properties of the proposed estimators are established under some regularity conditions. Simulations demonstrate the good finite-sample performance of our proposed estimators, even in the presence of heavy-tailed and asymmetric idiosyncratic error distributions. Finally, the application to inflation forecasting illustrates the practical merits of modal factors. |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2409.19287 |
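As a toy illustration of a kernel-type objective with alternating updates (the paper's actual algorithm, identification conditions, and normalizations are not reproduced here), the sketch below estimates one factor by gradient-based coordinate ascent on a Gaussian-kernel fit; the bandwidth, step size, and normalization are arbitrary assumptions.

```python
import numpy as np

def modal_factor(X, h=0.5, iters=200, step=0.1):
    # One-factor sketch: alternate gradient steps on the kernel
    # objective Q = sum_{i,t} K((X_it - lam_i * f_t) / h).
    N, T = X.shape
    lam, f = np.ones(N), X.mean(axis=0)
    for _ in range(iters):
        r = X - np.outer(lam, f)
        g = np.exp(-0.5 * (r / h) ** 2) * r / h ** 2   # dQ/d(fit)
        f = f + step * (lam @ g) / N                   # update factors
        r = X - np.outer(lam, f)
        g = np.exp(-0.5 * (r / h) ** 2) * r / h ** 2
        lam = lam + step * (g @ f) / T                 # update loadings
        lam *= np.sqrt(N) / np.linalg.norm(lam)        # fix the scale
    return lam, f

rng = np.random.default_rng(2)
lam0, f0 = rng.standard_normal(20), rng.standard_normal(100)
X = np.outer(lam0, f0) + rng.standard_t(3, size=(20, 100))  # heavy tails
lam, f = modal_factor(X)
```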
By: | Castiel Chen Zhuang |
Abstract: | This paper introduces a novel approach that combines synthetic control with triple difference to address violations of the parallel trends assumption. While synthetic control has been widely applied to improve causal estimates in difference-in-differences (DID) frameworks, its use in triple-difference models has been underexplored. By transforming triple difference into a DID structure, this paper extends the applicability of synthetic control to a triple-difference framework, enabling more robust estimates when parallel trends are violated across multiple dimensions. The empirical example focuses on China's "4+7 Cities" Centralized Drug Procurement pilot program. Based on the proposed procedure for synthetic triple difference, I find that the program promotes pharmaceutical innovation in terms of the number of patent applications, even when using the recommended clustered standard errors. This method contributes to improving causal inference in policy evaluations and offers a valuable tool for researchers dealing with heterogeneous treatment effects across subgroups. |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2409.12353 |
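The abstract's key step, transforming a triple difference into a DID structure, can be sketched as differencing the outcome across the third dimension within each unit-period cell; the column names and subgroup indicator g below are hypothetical.

```python
import pandas as pd

def collapse_to_did(df):
    # df: long data with columns 'unit', 'period', 'g' (1 = exposed
    # subgroup, 0 = comparison subgroup), and outcome 'y'.
    # Differencing y across g within unit x period turns the triple
    # difference into a DID on dy; synthetic control weights can then
    # be computed on this differenced panel.
    wide = df.pivot_table(index=['unit', 'period'], columns='g',
                          values='y')
    dy = wide[1] - wide[0]
    return dy.unstack('period')   # unit x period panel of differenced y
```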
By: | Francesco Fusari (Newcastle University Business School); Joe Marlow (University of Surrey); Alessio Volpicella (University of Surrey) |
Abstract: | We study Structural Vector Autoregressions (SVARs) that impose internal and external restrictions to set-identify the Forecast Error Variance Decomposition (FEVD). This object measures the importance of shocks for macroeconomic fluctuations and is therefore of first-order interest in business cycle analysis. We make the following contributions. First, we characterize the endpoints of the FEVD as the extreme eigenvalues of a symmetric reduced-form matrix. A consistent plug-in estimator naturally follows. Second, we use perturbation theory to prove that the endpoints of the FEVD are differentiable. Third, we construct confidence intervals that are uniformly consistent in level and have an asymptotic Bayesian interpretation. We also describe the conditions needed to derive uniformly consistent confidence intervals for impulse responses. A Monte Carlo exercise demonstrates the properties of the approach in finite samples. An unconventional monetary policy application illustrates our toolkit. |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:sur:surrec:0424 |
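The first contribution has a direct plug-in counterpart: once the symmetric reduced-form matrix is estimated (its construction from reduced-form VAR objects is in the paper and not reproduced here), the identified-set endpoints are simply its extreme eigenvalues.

```python
import numpy as np

def fevd_endpoints(M):
    # Plug-in endpoint estimates of the set-identified FEVD: the
    # smallest and largest eigenvalues of the symmetric matrix M the
    # paper characterizes (symmetrized here for numerical safety).
    ev = np.linalg.eigvalsh((M + M.T) / 2)
    return ev[0], ev[-1]
```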
By: | Niklas Ahlgren; Alexander Back; Timo Teräsvirta |
Abstract: | It is common for long financial time series to exhibit gradual change in the unconditional volatility. We propose a new model that captures this type of nonstationarity in a parsimonious way. The model augments the volatility equation of a standard GARCH model by a deterministic time-varying intercept. It captures structural change that slowly affects the amplitude of a time series while keeping the short-run dynamics constant. We parameterize the intercept as a linear combination of logistic transition functions. We show that the model can be derived from a multiplicative decomposition of volatility and preserves the financial motivation of variance decomposition. We use the theory of locally stationary processes to show that the quasi maximum likelihood estimator (QMLE) of the parameters of the model is consistent and asymptotically normally distributed. We examine the quality of the asymptotic approximation in a small simulation study. An empirical application to Oracle Corporation stock returns demonstrates the usefulness of the model. We find that the persistence implied by the GARCH parameter estimates is reduced by including a time-varying intercept in the volatility equation. |
Date: | 2024–10 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2410.03239 |
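A minimal sketch of the model's volatility recursion, with a single logistic transition in the intercept (the paper allows a linear combination of such transitions); the parameter names and initialization are assumptions.

```python
import numpy as np

def tv_garch_filter(eps, omega0, delta, gamma, c, alpha, beta):
    # sigma2_t = g(t/T) + alpha * eps_{t-1}^2 + beta * sigma2_{t-1},
    # with deterministic intercept g(u) = omega0 + delta*logistic(u):
    # short-run GARCH dynamics fixed, amplitude drifting slowly.
    T = len(eps)
    u = np.arange(T) / T
    g = omega0 + delta / (1.0 + np.exp(-gamma * (u - c)))
    sigma2 = np.empty(T)
    sigma2[0] = g[0] / (1.0 - alpha - beta)   # crude initialization
    for t in range(1, T):
        sigma2[t] = g[t] + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2
```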
By: | Leona Han Chen (Hunan University); Yijie Fei (Hunan University); Jun Yu (University of Macau) |
Abstract: | Modeling multivariate stochastic volatility (MSV) can pose significant challenges, particularly when both variances and covariances are time-varying. In this study, we tackle these complexities by introducing novel MSV models based on the generalized Fisher transformation (GFT) proposed by Archakov and Hansen (2021). Our model exhibits remarkable flexibility, ensuring the positive-definiteness of the variance-covariance matrix, and disentangling the driving forces of volatilities and correlations. To conduct Bayesian analysis of the models, we employ a Particle Gibbs Ancestor Sampling (PGAS) method, facilitating efficient Bayesian model comparisons. Furthermore, we extend our MSV model to cover leverage effects and incorporate realized measures. Our simulation studies demonstrate that the proposed method performs well for our GFT-based MSV model. Moreover, empirical studies based on equity returns show that the MSV models outperform alternative specifications in both in-sample and out-of-sample performance. |
Keywords: | Multivariate stochastic volatility; Dynamic correlation; Leverage effect; Particle filter; Markov chain Monte Carlo; Realized measures |
JEL: | G10 C53 C12 C32 C58 |
Date: | 2024–10 |
URL: | https://d.repec.org/n?u=RePEc:boa:wpaper:202419 |
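The GFT of Archakov and Hansen (2021) maps a correlation matrix to the off-diagonal elements of its matrix logarithm. The inverse map, which recovers a valid correlation matrix from an unrestricted vector, can be computed with their fixed-point iteration, sketched below.

```python
import numpy as np
from scipy.linalg import expm

def gft_inverse(gamma, n, iters=100, tol=1e-12):
    # Recover the correlation matrix C = expm(A) whose log has the
    # given off-diagonal entries: iterate on the diagonal x of A until
    # diag(C) = 1 (the Archakov-Hansen update x <- x - log diag C).
    A = np.zeros((n, n))
    A[np.tril_indices(n, -1)] = gamma
    A = A + A.T
    x = np.zeros(n)
    for _ in range(iters):
        A[np.diag_indices(n)] = x
        d = np.diag(expm(A))
        if np.max(np.abs(np.log(d))) < tol:
            break
        x = x - np.log(d)
    return expm(A)

C = gft_inverse(np.array([0.3, -0.2, 0.5]), n=3)
print(np.round(np.diag(C), 6))   # unit diagonal, valid correlations
```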
By: | Jin Seo Cho (Yonsei University); Peter C.B. Phillips (Yale University) |
Abstract: | In GMM estimation, it is well known that if the moment dimension grows with the sample size, the asymptotics of GMM differ from the standard finite-dimensional case. The present work examines the asymptotic properties of infinite-dimensional GMM estimation when the weight matrix is formed by inverting Brownian motion or Brownian bridge covariance kernels. These kernels arise in econometric work such as minimum Cramér–von Mises distance estimation when testing distributional specification. The properties of GMM estimation are studied under different environments where the moment conditions converge to a smooth Gaussian or non-differentiable Gaussian process. Conditions are also developed for testing the validity of the moment conditions by means of a suitably constructed J-statistic. In case these conditions are invalid, we propose another test called the U-test. As an empirical application of these infinite-dimensional GMM procedures, the evolution of cohort labor income inequality indices is studied using the Continuous Work History Sample database. The findings show that labor income inequality indices are maximized at early career years, implying that economic policies to reduce income inequality should be more effective when designed for workers at an early stage in their career cycles. |
Keywords: | Infinite-dimensional GMM estimation; Brownian motion kernel; Brownian bridge kernel; Gaussian process; Infinite-dimensional MCMD estimation; Labor income inequality. |
JEL: | C13 C18 C32 C55 D31 O15 P36 |
Date: | 2024–10 |
URL: | https://d.repec.org/n?u=RePEc:yon:wpaper:2024rwp-232 |
By: | Yoann Morin |
Abstract: | The synthetic difference-in-differences method provides an efficient way to estimate a causal effect under a latent factor model. However, it relies on panel data. This paper presents an adaptation of the synthetic difference-in-differences method for repeated cross-sectional data. The treatment is considered to be at the group level, so it is possible to aggregate data by group and compute the two types of synthetic difference-in-differences weights on these aggregated data. Then, I develop and compute a third type of weight that accounts for the different number of observations in each cross-section. Simulation results show that the performance of the synthetic difference-in-differences estimator is improved when using the third type of weight on repeated cross-sectional data. |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2409.20199 |
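A sketch of the aggregation step the abstract describes: collapse the repeated cross-sections to a group-by-period panel of means while retaining cell sizes, which motivate the paper's third weight type (the weight construction itself is the paper's contribution and is not reproduced); the column names are hypothetical.

```python
import pandas as pd

def aggregate_for_sdid(df):
    # df: individual-level repeated cross-sections with columns
    # 'group', 'period', and outcome 'y'.
    cell = (df.groupby(['group', 'period'])['y']
              .agg(y_mean='mean', n='size')
              .reset_index())
    Y = cell.pivot(index='group', columns='period', values='y_mean')
    N = cell.pivot(index='group', columns='period', values='n')
    return Y, N   # Y feeds the SDID weights; N carries the cell sizes
```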
By: | Roberto Fuentes M.; Irene Crimaldi; Armando Rungi |
Abstract: | Inspired by Jang et al. (2022), we propose a Granger causality-in-the-mean test for bivariate $k$-Markov stationary processes based on a recently introduced class of non-linear models, i.e., vine copula models. By means of a simulation study, we show that the proposed test improves on the statistical properties of the original test in Jang et al. (2022), constituting an excellent tool for testing Granger causality in the presence of non-linear dependence structures. Finally, we apply our test to study the pairwise relationships between energy consumption, GDP and investment in the U.S. and, notably, we find that Granger causality runs both ways between GDP and energy consumption. |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2409.15070 |
By: | Mathur, Maya B; Shpitser, Ilya; VanderWeele, Tyler J. |
Abstract: | Complete-case analysis (CCA) is often criticized on the belief that CCA is only valid if data are missing-completely-at-random (MCAR). Influential papers have thus recommended abandoning CCA in favor of methods that make a weaker missing-at-random (MAR) assumption. We argue for a different view: that CCA with principled covariate adjustment provides a valuable complement to MAR-based methods, such as multiple imputation. When estimating treatment effects, appropriate covariate control can, for some causal structures, eliminate bias in CCA. This can be true even when data are missing-not-at-random (MNAR) and when MAR-based methods are biased. We describe principles for choosing adjustment covariates for CCA, and we characterize the causal structures for which covariate adjustment does, or does not, eliminate bias. Even when CCA is biased, principled covariate adjustment will often reduce the bias of CCA, and this method will sometimes be less biased than MAR-based methods. When multiple imputation is used under a MAR assumption, adjusted CCA thus still constitutes an important sensitivity analysis. When conducted with the same attention to covariate control that epidemiologists already afford to confounding, adjusted CCA belongs in the suite of reasonable methods for missing data. There is thus good justification for resurrecting CCA as a principled method. |
Date: | 2024–09–24 |
URL: | https://d.repec.org/n?u=RePEc:osf:osfxxx:f9jvz |
By: | Yinhao Wu; Ping He |
Abstract: | This paper explores the continuous-time limit of a class of Quasi Score-Driven (QSD) models that characterize volatility. As the sampling frequency increases and the time interval tends to zero, the model weakly converges to a continuous-time stochastic volatility model in which the two Brownian motions are correlated, thereby capturing the leverage effect in the market. Subsequently, we identify a necessary condition for non-degenerate correlation: the distribution of the driving innovations must differ from the distribution used to compute the score, and at least one of them must be asymmetric. We then illustrate this with two typical examples. As an application, the QSD model is used as an approximation for correlated stochastic volatility diffusions and quasi maximum likelihood estimation is performed. Simulation results confirm the method's effectiveness, particularly in estimating the correlation coefficient. |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2409.14734 |
By: | Lin-Tung Tsai |
Abstract: | Confounding events with correlated timing violate the parallel trends assumption in Difference-in-Differences (DiD) designs. I show that the standard staggered DiD estimator is biased in the presence of confounding events. Identification can be achieved by using units not yet treated by either event as controls, together with a double DiD design that exploits variation in treatment timing. I apply this method to examine the effect of states' staggered minimum wage raises on teen employment from 2010 to 2020. The Medicaid expansion under the ACA confounded the raises, leading to a spurious negative estimate. |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2409.05184 |
By: | Vedant Vohra |
Abstract: | Economists are often interested in functions of multiple causal effects, a leading example of which is evaluating the cost-effectiveness of a government policy. In such settings, the benefits and costs might be captured by multiple causal effects and aggregated into a scalar measure of cost-effectiveness. Oftentimes, the microdata underlying these estimates is not accessible; only the published estimates and their corresponding standard errors are available for post-hoc analysis. We provide a method to conduct inference on functions of causal effects when the only information available is the point estimates and their corresponding standard errors. We apply our method to conduct inference on the Marginal Value of Public Funds (MVPF) for 8 different policies, and show that even in the absence of any microdata, it is possible to conduct valid and meaningful inference on the MVPF. |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2410.00217 |
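A flavor of the problem: with only point estimates and standard errors, a delta-method interval for a ratio (e.g., benefits over net cost, the shape of an MVPF) must handle the unknown correlation between the two estimates. The worst-case-over-correlation sketch below is an illustration of the setting, not the paper's procedure.

```python
import numpy as np

def ratio_ci_worst_case(b, se_b, c, se_c, z=1.96):
    # Delta-method CI for theta = b / c when only point estimates and
    # standard errors are known; the unknown correlation rho between
    # the estimates is bounded over [-1, 1] and the worst case taken.
    theta = b / c
    def var(rho):
        cov = rho * se_b * se_c
        return se_b**2 / c**2 + b**2 * se_c**2 / c**4 - 2 * b * cov / c**3
    se = np.sqrt(max(var(-1.0), var(1.0)))   # variance is linear in rho
    return theta - z * se, theta + z * se

print(ratio_ci_worst_case(b=2.0, se_b=0.3, c=1.5, se_c=0.2))
```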
By: | Sukjin Han |
Abstract: | The instrumental variables (IVs) method is a leading empirical strategy for causal inference. Finding IVs is a heuristic and creative process, and justifying its validity (especially exclusion restrictions) is largely rhetorical. We propose using large language models (LLMs) to search for new IVs through narratives and counterfactual reasoning, similar to how a human researcher would. The stark difference, however, is that LLMs can accelerate this process exponentially and explore an extremely large search space. We demonstrate how to construct prompts to search for potentially valid IVs. We argue that multi-step prompting is useful and role-playing prompts are suitable for mimicking the endogenous decisions of economic agents. We apply our method to three well-known examples in economics: returns to schooling, production functions, and peer effects. We then extend our strategy to finding (i) control variables in regression and difference-in-differences and (ii) running variables in regression discontinuity designs. |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2409.14202 |
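The abstract emphasizes multi-step and role-playing prompts; the snippet below sketches what such a prompt constructor might look like. The wording is entirely hypothetical — the paper's actual prompts are in the paper.

```python
def iv_search_prompt(outcome, treatment, setting):
    # Hypothetical multi-step, role-playing prompt for IV search;
    # illustrative wording only, not the paper's actual prompts.
    return "\n".join([
        f"You are an economic agent choosing '{treatment}' in {setting}.",
        f"Step 1: Narrate the factors that shift your choice of "
        f"'{treatment}' but plausibly have no direct effect on "
        f"'{outcome}'.",
        "Step 2: For each candidate, reason counterfactually: if the "
        "factor changed while everything else stayed fixed, would the "
        "outcome change only through the treatment?",
        "Step 3: List the surviving candidates as potential instruments, "
        "stating the exclusion-restriction argument for each.",
    ])

print(iv_search_prompt("log wages", "years of schooling",
                       "a low-income rural labor market"))
```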
By: | Ren, Chunhui; Allison, Paul |
Abstract: | A popular statistical approach in sociological research, the fixed-effects regression model is known for its ability to produce unbiased coefficients by adjusting for unobserved time-invariant individual heterogeneity. This ability, however, is contingent on an often-overlooked assumption: time-invariant variables must not have time-varying effects. Otherwise, such effects interfere with coefficient estimation and lead to misinterpretation of the findings. Demonstrating with case studies, we explain and clarify two types of such misinterpretation: (1) time-invariant variables' time-varying effects, when measured in the model, are mistaken for unbiased coefficient estimates of the time-invariant variables; (2) time-invariant variables' time-varying effects, when unmeasured in the model, confound the coefficient estimates for time-varying variables. |
Date: | 2024–09–18 |
URL: | https://d.repec.org/n?u=RePEc:osf:osfxxx:t6ndu |
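The second misinterpretation is easy to reproduce in a small simulation: when a time-invariant variable z has a time-varying effect that co-moves with a time-varying regressor x, the within estimator of x's coefficient is biased even though z itself is differenced out. The DGP below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
N, T, beta = 500, 10, 1.0
z = rng.standard_normal(N)                   # time-invariant variable
trend = np.linspace(0, 1, T)
# x drifts with z over time, so x correlates with z's time-varying effect
x = z[:, None] * trend[None, :] + rng.standard_normal((N, T))
y = (beta * x + 2.0 * z[:, None] * trend[None, :]   # delta_t * z_i term
     + rng.standard_normal((N, T)))

xd = x - x.mean(axis=1, keepdims=True)       # within transformation
yd = y - y.mean(axis=1, keepdims=True)
print((xd * yd).sum() / (xd ** 2).sum())     # biased: well above 1.0
```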
By: | Joshua C. C. Chan; Yaling Qi |
Abstract: | We consider Bayesian tensor vector autoregressions (TVARs) in which the VAR coefficients are arranged as a three-dimensional array or tensor, and this coefficient tensor is parameterized using a low-rank CP decomposition. We develop a family of TVARs using a general stochastic volatility specification, which includes a wide variety of commonly-used multivariate stochastic volatility and COVID-19 outlier-augmented models. In a forecasting exercise involving 40 US quarterly variables, we show that these TVARs outperform the standard Bayesian VAR with the Minnesota prior. The results also suggest that the parsimonious common stochastic volatility model tends to forecast better than the more flexible Cholesky stochastic volatility model. |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2409.16132 |
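The parameterization at the heart of the TVAR is easy to sketch: stack the VAR(p) coefficient matrices into an n × n × p tensor and restrict it to CP rank r, collapsing n²p free parameters to r(2n + p). The dimensions below are assumptions for illustration.

```python
import numpy as np

def cp_var_tensor(U, V, W):
    # VAR coefficient tensor A (n x n x p) of CP rank r from factor
    # matrices U (n x r), V (n x r), W (p x r):
    # A[i, j, l] = sum_k U[i, k] * V[j, k] * W[l, k].
    return np.einsum('ik,jk,lk->ijl', U, V, W)

n, p, r = 40, 4, 3          # 40 variables, 4 lags, rank-3 tensor
rng = np.random.default_rng(4)
A = cp_var_tensor(rng.standard_normal((n, r)) * 0.1,
                  rng.standard_normal((n, r)) * 0.1,
                  rng.standard_normal((p, r)) * 0.1)
print(A.shape, "free parameters:", r * (2 * n + p), "vs", n * n * p)
```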
By: | Xiaosai Liao; Xinjue Li; Qingliang Fan |
Abstract: | In contrast to the existing literature on testing the macro-spanning hypothesis of bond risk premia, which considers only mean regressions, this paper investigates whether the yield curve, represented by the CP factor (Cochrane and Piazzesi, 2005), contains all available information about future bond returns in a predictive quantile regression with many other macroeconomic variables. In this study, we introduce the Trend in Debt Holding (TDH) as a novel predictor, testing it alongside established macro indicators such as Trend Inflation (TI) (Cieslak and Povala, 2015) and the macro factors of Ludvigson and Ng (2009). A significant challenge in this study is the invalidity of traditional quantile model inference approaches, given the high persistence of many of the macro variables involved. Furthermore, the existing methods addressing this issue do not perform well in the marginal test with many highly persistent predictors. Thus, we suggest a robust inference approach whose size and power performance are shown to be better than those of existing tests. Using data from 1980-2022, the macro-spanning hypothesis is strongly supported at central quantiles by the empirical finding that the CP factor has predictive power while all other macro variables have negligible predictive power in this case. On the other hand, evidence against the macro-spanning hypothesis is found at tail quantiles: TDH has predictive power at right-tail quantiles, while TI has predictive power at both tails. Finally, we show that the in-sample and out-of-sample predictive performance of the proposed method is better than that of existing methods. |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2410.03557 |
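The object being tested is a predictive quantile regression of bond returns on the CP factor plus a candidate macro predictor; a plain-vanilla version is sketched below with assumed variable names. The paper's contribution — inference robust to highly persistent predictors — is not reproduced by this off-the-shelf fit.

```python
import numpy as np
import statsmodels.api as sm

def quantile_spanning_fit(rx, cp, macro, tau):
    # Predictive quantile regression of rx_{t+1} on (CP_t, macro_t).
    # Under macro-spanning, the macro coefficient is zero at quantile
    # tau; plain QuantReg shown, without the paper's robust inference.
    X = sm.add_constant(np.column_stack([cp[:-1], macro[:-1]]))
    res = sm.QuantReg(rx[1:], X).fit(q=tau)
    return res.params, res.pvalues

rng = np.random.default_rng(5)
cp = rng.standard_normal(500); macro = rng.standard_normal(500)
rx = 0.3 * np.r_[0, cp[:-1]] + rng.standard_normal(500)
print(quantile_spanning_fit(rx, cp, macro, tau=0.5)[0])
```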
By: | Niccolo Lomys (CSEF and Università degli Studi di Napoli Federico II); Lorenzo Magnolfi (Department of Economics, University of Wisconsin-Madison) |
Abstract: | We develop a method to recover primitives from data generated by artificial intelligence (AI) agents in strategic environments like online marketplaces and auctions. Building on the design of leading online learning AIs, we impose a regret-minimization property on behavior. Under this property, we show that time-average play converges to the set of Bayes coarse correlated equilibrium (BCCE) predictions. We develop an inferential procedure based on BCCE restrictions and convergence rates of regret-minimizing AIs. We apply the method to pricing data in an online marketplace for used electronics. We estimate sellers' cost distributions and find lower markups than in centralized platforms. |
Keywords: | AI Decision-Making; Empirical Games; Regret Minimization; Bayes (Coarse) Correlated Equilibrium; Partial Identification |
JEL: | C1 C5 C7 D4 D8 L1 L8 |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:net:wpaper:2405 |
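A concrete instance of the behavioral assumption: two Hedge (multiplicative-weights) learners, a canonical regret-minimizing design, playing a bimatrix game. Their time-average joint play approaches the coarse correlated equilibrium set that the paper's inference builds on; the step size and horizon below are arbitrary.

```python
import numpy as np

def hedge_selfplay(A, B, T=5000, eta=0.05):
    # Hedge learners on a bimatrix game (A = row payoffs, B = column
    # payoffs); returns the empirical joint distribution of play.
    n, m = A.shape
    w1, w2 = np.ones(n), np.ones(m)
    joint = np.zeros((n, m))
    for _ in range(T):
        p, q = w1 / w1.sum(), w2 / w2.sum()
        joint += np.outer(p, q) / T
        w1 *= np.exp(eta * (A @ q)); w1 /= w1.sum()   # regret update
        w2 *= np.exp(eta * (B.T @ p)); w2 /= w2.sum()
    return joint

A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # matching pennies
print(np.round(hedge_selfplay(A, -A), 3))  # ~ uniform joint play
```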
By: | Tobias Fissler; Yannick Hoga |
Abstract: | This paper lays out a principled approach to compare copula forecasts via strictly consistent scores. We first establish the negative result that, in general, copulas fail to be elicitable, implying that copula predictions cannot sensibly be compared on their own. A notable exception is on Fréchet classes, that is, when the marginal distribution structure is given and fixed, in which case we give suitable scores for the copula forecast comparison. As a remedy for the general non-elicitability of copulas, we establish novel multi-objective scores for copula forecasts along with marginal forecasts. They give rise to two-step tests of equal or superior predictive ability which admit attribution of the forecast ranking to the accuracy of the copulas or the marginals. Simulations show that our two-step tests work well in terms of size and power. We illustrate our new methodology via an empirical example using copula forecasts for international stock market indices. |
Date: | 2024–10 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2410.04165 |
By: | Luca Margaritella; Ovidijus Stauskas |
Abstract: | We provide the theoretical foundation for the recently proposed tests of equal forecast accuracy and encompassing by Pitarakis (2023a) and Pitarakis (2023b), when the competing forecast specification is that of a factor-augmented regression model whose loadings are allowed to be homogeneously or heterogeneously weak. This should be of interest to practitioners, as at present there is no theory available to justify the use of these simple and powerful tests in such a context. |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2409.20415 |
By: | Buczak, Philip |
Abstract: | Predicting ordinal responses such as school grades or rating scale data is a common task in the social and life sciences. Currently, two major streams of methodology exist for ordinal prediction: parametric models such as the proportional odds model and machine learning (ML) methods such as random forest (RF) adapted to ordinal prediction. While methods from the latter stream have displayed high predictive performance, particularly for data characterized by non-linear effects, most of these methods do not support hierarchical data. As such data structures frequently occur in the social and life sciences, e.g., students nested in classes or individual measurements nested within the same person, accounting for hierarchical data is of importance for prediction in these fields. A recently proposed ML method for ordinal prediction displaying promising results for non-hierarchical data is Frequency-Adjusted Borders Ordinal Forest (fabOF). Building on an iterative expectation-maximization-type estimation procedure, I extend fabOF to hierarchical data settings in this work by proposing Mixed-Effects Frequency-Adjusted Borders Ordinal Forest (mixfabOF). Through simulation and a real data example on math achievement, I will demonstrate that mixfabOF can improve upon fabOF and other RF-based ordinal prediction methods for (non-)hierarchical data in the presence of random effects. |
Date: | 2024–10–03 |
URL: | https://d.repec.org/n?u=RePEc:osf:osfxxx:ny6we |
By: | Zeda Xu; John Liechty; Sebastian Benthall; Nicholas Skar-Gislinge; Christopher McComb |
Abstract: | Volatility, which indicates the dispersion of returns, is a crucial measure of risk and is hence used extensively for pricing and discriminating between different financial investments. As a result, accurate volatility prediction receives extensive attention. The Generalized Autoregressive Conditional Heteroscedasticity (GARCH) model and its succeeding variants are well-established models for stock volatility forecasting. More recently, deep learning models have gained popularity in volatility prediction as they have demonstrated promising accuracy in certain time series prediction tasks. Inspired by Physics-Informed Neural Networks (PINN), we constructed a new, hybrid deep learning model that combines the strengths of GARCH with the flexibility of a Long Short-Term Memory (LSTM) Deep Neural Network (DNN), thus capturing and forecasting market volatility more accurately than either class of models is capable of on its own. We refer to this novel model as a GARCH-Informed Neural Network (GINN). When compared to other time series models, GINN showed superior out-of-sample prediction performance in terms of the Coefficient of Determination ($R^2$), Mean Squared Error (MSE), and Mean Absolute Error (MAE). |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2410.00288 |
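The hybrid idea can be sketched without the deep-learning stack: a GARCH(1,1) filter supplies a structured variance path, and the network is trained against a loss that blends data fit with agreement with that path. The additive penalty and weight lam below are assumptions, not the paper's exact GINN loss.

```python
import numpy as np

def garch11_vol(r, omega, alpha, beta):
    # GARCH(1,1) conditional-variance filter on returns r.
    s2 = np.empty(len(r))
    s2[0] = r.var()
    for t in range(1, len(r)):
        s2[t] = omega + alpha * r[t - 1] ** 2 + beta * s2[t - 1]
    return s2

def ginn_style_loss(nn_var, realized_var, garch_var, lam=0.3):
    # Hybrid objective in the spirit of a GARCH-informed network:
    # data fit plus a penalty tying the network's variance path to the
    # GARCH filter (lam and the additive form are assumptions).
    return (np.mean((nn_var - realized_var) ** 2)
            + lam * np.mean((nn_var - garch_var) ** 2))
```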