on Econometrics
By: | Zhishui Hu (University of Science and Technology of China); Nan Liu (Xiamen University); Peter C. B. Phillips (Cowles Foundation, Yale University, University of Auckland); Qiying Wang (University of Sydney) |
Abstract: | A new self-weighted least squares (LS) estimation theory is developed for local unit root (LUR) autoregression with heteroskedasticity. The proposed estimator has a mixed Gaussian limit distribution and the corresponding studentized statistic converges to a standard normal distribution free of the unknown localizing coefficient, which is not consistently estimable. The estimator is super consistent with a convergence rate slightly below the O_P(n) rate of LS estimation. The asymptotic theory relies on a new framework of convergence to the local time of a Gaussian process, allowing the sample moments to be generated from martingales and many other integrated dependent sequences. A new unit root (UR) test in augmented autoregression is developed using self-weighted estimation, and the methods are employed in predictive regression, providing an alternative approach to IVX regression. Simulation results showing good finite sample performance of these methods are reported together with a small empirical application. |
Keywords: | Self-weighted least squares estimation, autoregression, super consistency, limit distribution, unit root test, predictive regression. |
JEL: | C13 C22 |
Date: | 2024–04 |
URL: | https://d.repec.org/n?u=RePEc:cwl:cwldpp:2400 |
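The abstract's core device is a weight on each observation that tames heteroskedasticity in the sample moments. As a rough illustration of the idea, here is a minimal Python sketch of a self-weighted LS estimator for an AR(1) near unity; the weight function is an assumption of ours (the paper's specific choice is not given in the abstract), as is the sandwich-form studentization.

```python
import numpy as np

def self_weighted_ls_ar1(y, weight=lambda x: 1.0 / np.sqrt(1.0 + x**2)):
    """Self-weighted LS for rho in y_t = rho*y_{t-1} + u_t.

    The weight downweights large lagged levels so the sample moments
    behave well under heteroskedasticity; this particular weight is
    illustrative, not the paper's.
    """
    y_lag, y_cur = y[:-1], y[1:]
    w = weight(y_lag)
    den = np.sum(w * y_lag**2)
    rho_hat = np.sum(w * y_lag * y_cur) / den
    # Studentized statistic: standard normal in the limit under the
    # paper's theory, free of the localizing coefficient.
    u_hat = y_cur - rho_hat * y_lag
    se = np.sqrt(np.sum(w**2 * y_lag**2 * u_hat**2)) / den
    return rho_hat, (rho_hat - 1.0) / se

# Example: local-to-unity AR(1) with heteroskedastic innovations.
rng = np.random.default_rng(0)
n, c = 500, 5.0
rho = 1.0 - c / n
u = rng.standard_normal(n) * (1.0 + 0.5 * np.abs(rng.standard_normal(n)))
y = np.zeros(n)
for t in range(1, n):
    y[t] = rho * y[t - 1] + u[t]
print(self_weighted_ls_ar1(y))
```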
By: | Ying Wang (Renmin University of China); Peter C. B. Phillips (Cowles Foundation, Yale University, University of Auckland); Yundong Tu (Peking University) |
Abstract: | Functional coefficient (FC) cointegrating regressions offer empirical investigators flexibility in modeling economic relationships by introducing covariates that influence the direction and intensity of comovement among nonstationary time series. FC regression models are also useful when formal cointegration is absent, in the sense that the equation errors may themselves be nonstationary, but where the nonstationary series display well-defined FC linkages that can be meaningfully interpreted as correlation measures involving the covariates. The present paper proposes new nonparametric estimators for such FC regression models where the nonstationary series display linkages that enable consistent estimation of the correlation measures between them. Specifically, we develop √n-consistent estimators for the functional coefficient and establish their asymptotic distributions, which involve mixed normal limits that facilitate inference. Two novel features that appear in the limit theory are (i) the need for non-diagonal matrix normalization due to the presence of stationary and nonstationary components in the regression; and (ii) random bias elements that appear in the asymptotic distribution of the kernel estimators, again resulting from the nonstationary regression components. Numerical studies reveal that the proposed estimators achieve significant efficiency improvements over the estimators suggested in earlier work by Sun et al. (2011). Easily implementable specification tests with standard chi-square asymptotics are suggested to check for constancy of the functional coefficient. These tests are shown to have a faster divergence rate under local alternatives and to outperform in simulations the tests recently proposed in Gan et al. (2014). An empirical application based on the quantity theory of money illustrates the practical use of correlated but non-cointegrated regression relations. |
Keywords: | Cointegration; Correlation measure; Functional coefficient regression; Marginal integration; Nonstationary time series. |
JEL: | C14 C22 |
Date: | 2024–04 |
URL: | https://d.repec.org/n?u=RePEc:cwl:cwldpp:2399 |
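For readers unfamiliar with functional coefficient regression, the following Python sketch shows a generic kernel-weighted local constant estimator of β(z) in y_t = β(z_t)'x_t + e_t, the object the paper's √n-consistent estimators refine. The kernel, bandwidth, and data-generating process are illustrative assumptions; the paper's marginal-integration and bias-handling steps are omitted.

```python
import numpy as np

def local_constant_fc(y, x, z, z_grid, h):
    """Local constant estimate of beta(z) in y_t = beta(z_t)' x_t + e_t.

    A generic kernel-weighted construction; the paper's limit theory
    (non-diagonal normalization, random bias terms) concerns inference,
    not this basic estimator.
    """
    K = lambda u: np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)  # Gaussian kernel
    betas = []
    for z0 in z_grid:
        w = K((z - z0) / h)
        Sxx = (x * w[:, None]).T @ x
        Sxy = (x * w[:, None]).T @ y
        betas.append(np.linalg.solve(Sxx, Sxy))
    return np.array(betas)

# Example with one I(1) and one stationary regressor, as in the paper's setting.
rng = np.random.default_rng(1)
n = 400
z = rng.standard_normal(n)
x = np.column_stack([np.cumsum(rng.standard_normal(n)),  # I(1) regressor
                     rng.standard_normal(n)])            # stationary regressor
beta = np.column_stack([1.0 + 0.5 * z**2, np.sin(z)])
y = np.sum(beta * x, axis=1) + rng.standard_normal(n)
print(local_constant_fc(y, x, z, np.linspace(-1.5, 1.5, 7), h=0.3))
```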
By: | Chen, Yining; S. Torrent, Hudson; A. Ziegelmann, Flavio |
Abstract: | We propose a robust methodology for estimating production frontiers with multi-dimensional input via a two-step nonparametric regression, in which we estimate the level and shape of the frontier before shifting it to an appropriate position. Our main contribution is to derive a novel frontier estimation method under a variety of flexible models which is robust to the presence of outliers and possesses some inherent advantages over traditional frontier estimators. Our approach may be viewed as a simplification, yet a generalization, of those proposed by Martins-Filho and coauthors, who estimate frontier surfaces in three steps. In particular, outliers, as well as commonly seen shape constraints of the frontier surfaces, such as concavity and monotonicity, can be straightforwardly handled by our estimation procedure. We establish consistency and asymptotic distribution theory for the resulting estimators under standard assumptions in the multi-dimensional input setting. The competitive finite-sample performance of our estimators is highlighted in both simulation studies and empirical data analysis. |
Keywords: | concavity; local polynomial smoothing; monotonicity; outlier detection; shape-constrained regression |
JEL: | C14 C20 |
Date: | 2023–07–03 |
URL: | https://d.repec.org/n?u=RePEc:ehl:lserod:119389 |
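A hedged sketch of the two-step logic: a nonparametric fit captures the level and shape of the conditional mean of output, and the fitted surface is then shifted upward. Shifting by a high residual quantile rather than the maximum is one simple way to gain robustness to outliers; the authors' exact shift rule and shape-constraint handling may differ.

```python
import numpy as np

def two_step_frontier(x, y, x_eval, h, shift_quantile=0.95):
    """Two-step frontier sketch: (1) kernel regression estimates the
    conditional mean (level and shape); (2) the fit is shifted up by a
    high residual quantile instead of the maximum, for outlier robustness."""
    K = lambda u: np.maximum(0.0, 1.0 - u**2)  # Epanechnikov-type kernel
    def kreg(x0):
        w = K(np.linalg.norm(x - x0, axis=1) / h)
        return np.sum(w * y) / np.sum(w)
    resid = y - np.array([kreg(xi) for xi in x])
    shift = np.quantile(resid, shift_quantile)  # robust alternative to max(resid)
    return np.array([kreg(x0) for x0 in x_eval]) + shift

# Example: two-input production data with inefficiency and a few outliers.
rng = np.random.default_rng(2)
n = 300
x = rng.uniform(1, 2, size=(n, 2))
frontier = x[:, 0] ** 0.4 * x[:, 1] ** 0.5
y = frontier * np.exp(-rng.exponential(0.2, n))  # inefficiency keeps output below frontier
y[:5] *= 3.0                                     # a few outliers above the frontier
print(two_step_frontier(x, y, x[:3], h=0.3))
```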
By: | Gao, Fengnan; Wang, Tengyao |
Abstract: | We introduce a new method for two-sample testing of high-dimensional linear regression coefficients without assuming that those coefficients are individually estimable. The procedure works by first projecting the matrices of covariates and response vectors along directions that are complementary in sign in a subset of the coordinates, a process which we call ‘complementary sketching’. The resulting projected covariates and responses are aggregated to form two test statistics, which are shown to have essentially optimal asymptotic power under a Gaussian design when the difference between the two regression coefficients is sparse and dense respectively. Simulations confirm that our method performs well in a broad class of settings, and an application to a large single-cell RNA sequencing dataset demonstrates its utility in the real world. |
Keywords: | high-dimensional data; linear model; minimax detection; sparsity; two-sample hypothesis testing |
JEL: | C1 |
Date: | 2022–10–01 |
URL: | https://d.repec.org/n?u=RePEc:ehl:lserod:115644 |
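On our reading of the abstract, the 'complementary sketching' construction can be sketched as follows: stack the two designs with a sign flip, project the responses onto the orthogonal complement of the stacked design, and form statistics targeting sparse and dense coefficient differences. The two statistics below are illustrative, not the paper's exact constructions.

```python
import numpy as np
from scipy.linalg import null_space

def complementary_sketch(X1, y1, X2, y2):
    """Project along directions orthogonal to the sign-flipped stacked
    design, so the projected responses have mean zero iff beta1 = beta2
    (Gaussian design, common noise level assumed)."""
    n1 = X1.shape[0]
    W = np.vstack([X1, -X2])       # sign-complementary stacking
    A = null_space(W.T)            # orthonormal columns with A'W = 0
    A1, A2 = A[:n1], A[n1:]
    Z = A1.T @ y1 - A2.T @ y2      # E[Z] = (A1'X1)(beta1 - beta2)
    M = A1.T @ X1                  # maps coefficient differences to E[Z]
    return Z, M

rng = np.random.default_rng(3)
n1, n2, p = 120, 130, 60
X1, X2 = rng.standard_normal((n1, p)), rng.standard_normal((n2, p))
beta1 = np.zeros(p); beta2 = beta1.copy(); beta2[:3] = 0.8  # sparse difference
y1 = X1 @ beta1 + rng.standard_normal(n1)
y2 = X2 @ beta2 + rng.standard_normal(n2)
Z, M = complementary_sketch(X1, y1, X2, y2)
dense_stat = np.sum(Z**2)                         # targets dense differences
coord = (M.T @ Z) / np.linalg.norm(M, axis=0)
sparse_stat = np.max(np.abs(coord))               # targets sparse differences
print(dense_stat, sparse_stat)
```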
By: | Ying Wang (Renmin University of China); Peter C. B. Phillips (Cowles Foundation, Yale University, University of Auckland) |
Abstract: | Limit theory for functional coefficient cointegrating regression was recently found to be considerably more complex than earlier understood. The issues were explained and correct limit theory derived for the kernel weighted local constant estimator in Phillips and Wang (2023b). The present paper provides complete limit theory for the general kernel weighted local p-th order polynomial estimator of the functional coefficient and the coefficient derivatives. Both stationary and nonstationary regressors are allowed. Implications for bandwidth selection are discussed. An adaptive procedure to select the fit order p is proposed and found to work well. A robust t-ratio is constructed following the new correct limit theory, which corrects and improves the usual t-ratio in the literature. Furthermore, the robust t-ratio is valid and works well regardless of the properties of the regressors, thereby providing a unified procedure to compute the t-ratio and facilitating practical inference. Testing constancy of the functional coefficient is also considered. Supportive finite sample studies are provided that corroborate the new asymptotic theory. |
Keywords: | bandwidth selection, functional-coefficient cointegration, local p-th order polynomial approximation, robust t-ratio |
JEL: | C14 C22 |
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:cwl:cwldpp:2398 |
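A minimal sketch of the kernel-weighted local p-th order polynomial estimator named in the abstract (here with p = 1): regress y on x interacted with powers of (z_t − z0), with kernel weights in z, recovering both β(z0) and its derivative. The kernel, bandwidth, and simulated design are our assumptions; the paper's contribution is the limit theory and robust t-ratio built on such estimators, which are not reproduced here.

```python
import numpy as np

def local_poly_fc(y, x, z, z0, h, p=1):
    """Kernel-weighted local p-th order polynomial estimate of beta(z0)
    and its derivatives in y_t = beta(z_t)' x_t + e_t."""
    K = lambda u: np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)  # Epanechnikov
    w = K((z - z0) / h)
    # Design: [x_t, x_t*(z_t - z0), ..., x_t*(z_t - z0)^p]
    D = np.hstack([x * ((z - z0) ** j)[:, None] for j in range(p + 1)])
    Wd = w[:, None] * D
    theta = np.linalg.solve(D.T @ Wd, Wd.T @ y)
    d = x.shape[1]
    return theta[:d], theta[d:2 * d]   # beta(z0) and beta'(z0) when p >= 1

# Example with an I(1) regressor, as allowed in the paper's setting.
rng = np.random.default_rng(4)
n = 500
z = rng.uniform(-1, 1, n)
x = np.cumsum(rng.standard_normal((n, 1)), axis=0)
y = ((1.0 + z**2)[:, None] * x).sum(axis=1) + rng.standard_normal(n)
print(local_poly_fc(y, x, z, z0=0.0, h=0.25, p=1))
```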
By: | Tobias Eibinger (University of Graz, Austria); Beate Deixelberger (University of Graz, Austria); Hans Manner (University of Graz, Austria) |
Abstract: | This paper addresses econometric challenges arising in panel data analyses related to IPAT (environmental Impact of Population, Affluence and Technology) models and other applications typically characterized by a large-N and large-T structure. This structure poses specific econometric complexities due to nonstationarity and cross-sectional error correlation, potentially affecting consistent estimation and valid inference. We provide a concise overview of these complications and of how to address them with appropriate tests and models. We then apply these insights to empirical examples based on the IPAT identity, shedding light on the robustness of previous findings. Our results suggest that using standard panel techniques can lead to biased estimates, incorrect inference, and invalid model adequacy tests, and hence potentially to flawed policy conclusions. We provide practical guidance for practitioners navigating these econometric issues. |
Keywords: | IPAT models, Nonstationary panel data, Cross-sectional dependence, Panel cointegration, GHG emissions, Common correlated effects. |
JEL: | C18 C33 Q54 R49 |
Date: | 2024–01 |
URL: | https://d.repec.org/n?u=RePEc:grz:wpaper:2024-01 |
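One workhorse for the cross-sectional dependence the abstract highlights (see the "Common correlated effects" keyword) is the CCE approach, sketched below in Python: cross-sectional averages of y and x proxy the unobserved common factors. The abstract does not commit to this specific estimator, so treat this as an illustrative member of the toolkit discussed.

```python
import numpy as np

def cce_pooled(Y, X):
    """Pooled Common Correlated Effects sketch (Pesaran-style): augment the
    regression with cross-sectional averages of y and x, which proxy the
    unobserved common factors, then pool across units.
    Y has shape (T, N); X has shape (T, N, k)."""
    T, N, k = X.shape
    F = np.hstack([np.ones((T, 1)), Y.mean(axis=1, keepdims=True), X.mean(axis=1)])
    M = np.eye(T) - F @ np.linalg.pinv(F)   # projects out the averages
    A, b = np.zeros((k, k)), np.zeros(k)
    for i in range(N):
        Xi = M @ X[:, i, :]
        A += Xi.T @ Xi
        b += Xi.T @ (M @ Y[:, i])
    return np.linalg.solve(A, b)

# Example: a nonstationary common factor drives both the regressor and the
# error, which biases pooled OLS but is absorbed by the CCE averages.
rng = np.random.default_rng(5)
T, N, k = 100, 30, 1
f = np.cumsum(rng.standard_normal(T))
gam = rng.standard_normal(N)
X = rng.standard_normal((T, N, k)) + (f[:, None] * gam[None, :])[:, :, None]
Y = X @ np.array([2.0]) + f[:, None] * gam[None, :] + rng.standard_normal((T, N))
print(cce_pooled(Y, X))   # close to the true coefficient 2.0
```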
By: | Yujuan Qiu |
Abstract: | This thesis evaluates most of the extreme value mixture models and methods that have appeared in the literature and implements them in the context of finance and insurance. The thesis also reviews and studies extreme value theory, time series, volatility clustering, and risk measurement methods in detail. Comparing the performance of extreme value mixture models and methods on different simulated distributions shows that the method based on kernel density estimation does not deliver uniformly superior, or even near-best, performance, especially for estimation of the extreme upper or lower tail of the distribution. Preprocessing time series data with a generalized autoregressive conditional heteroskedasticity (GARCH) model and applying extreme value mixture models to the extracted residuals can improve the goodness of fit and the estimation of the tail distribution. |
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.05933 |
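The preprocessing pipeline described in the final sentence can be sketched directly: fit a GARCH(1, 1), extract standardized residuals, and fit a tail model to their exceedances. The `arch` and `scipy` packages below are real; the return series is a placeholder, and the fixed-quantile threshold stands in for the extreme value mixture models the thesis studies, which estimate the threshold as a parameter instead.

```python
import numpy as np
from arch import arch_model          # pip install arch
from scipy.stats import genpareto

rng = np.random.default_rng(6)
returns = rng.standard_t(df=5, size=2000)   # placeholder heavy-tailed returns

# Step 1: filter volatility clustering with a GARCH(1, 1).
res = arch_model(returns, vol="GARCH", p=1, q=1, dist="normal").fit(disp="off")
z = res.resid / res.conditional_volatility  # standardized residuals

# Step 2: fit a generalized Pareto tail to residual exceedances above a
# high threshold (a simple fixed 95% quantile here).
u = np.quantile(z, 0.95)
exceed = z[z > u] - u
xi, _, sigma = genpareto.fit(exceed, floc=0.0)
print(f"threshold={u:.3f}, shape={xi:.3f}, scale={sigma:.3f}")
```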
By: | Vainora, J. |
Abstract: | This paper develops an asymptotic theory for network data based on the concept of network stationarity, explicitly linking network topology with the dependence between network entities. Each pair of entities is assigned a class based on a bivariate graph statistic. Network stationarity assumes that conditional covariances depend only on the assigned class. The asymptotic theory, developed for a growing network, includes laws of large numbers, consistent autocovariance function estimation, and a central limit theorem. A significant portion of the assumptions concerns random graph regularity conditions, particularly those related to class sizes. Weak dependence assumptions use conditional α-mixing adapted to networks. The proposed framework is illustrated through an application to microfinance data from Indian villages. |
Keywords: | Network Dependence, Covariance, Random Graphs, Mixing, Robust Inference |
JEL: | C10 C18 C31 C55 D85 |
Date: | 2024–07–04 |
URL: | https://d.repec.org/n?u=RePEc:cam:camdae:2439 |
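On our reading, network stationarity permits one covariance per pair-class, where classes come from a bivariate graph statistic. The sketch below uses shortest-path distance as that statistic, which is our assumption for illustration; the paper's framework allows other choices.

```python
import numpy as np
import networkx as nx

def classwise_covariances(G, x, max_dist=2):
    """Network-stationarity sketch: assign each node pair a class via a
    bivariate graph statistic (here shortest-path distance) and estimate
    one covariance per class, as the stationarity assumption permits."""
    xc = x - x.mean()
    sums, counts = {}, {}
    dist = dict(nx.all_pairs_shortest_path_length(G, cutoff=max_dist))
    for i in G.nodes:
        for j, d in dist[i].items():
            if d == 0:
                continue            # skip self-pairs
            sums[d] = sums.get(d, 0.0) + xc[i] * xc[j]
            counts[d] = counts.get(d, 0) + 1
    return {d: sums[d] / counts[d] for d in sorted(sums)}

# Example on a random graph with node-level outcomes.
rng = np.random.default_rng(7)
G = nx.erdos_renyi_graph(200, 0.03, seed=7)
x = rng.standard_normal(200)
print(classwise_covariances(G, x))
```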
By: | Martin Bruns (School of Economics, University of East Anglia); Helmut Lütkepohl (DIW Berlin and Freie Universität Berlin); James McNeil (Dalhousie University) |
Abstract: | The shocks in structural vector autoregressive (VAR) analysis are typically assumed to be instantaneously uncorrelated. This condition may easily be violated in proxy VAR models if more than one shock is identified by a proxy variable. Correlated shocks may be obtained even if the proxies are uncorrelated and individually satisfy the usual relevance and exogeneity conditions. Examples from the recent proxy VAR literature are presented. It is shown that assuming uncorrelated proxies that individually satisfy the usual relevance and exogeneity conditions actually over-identifies the shocks of interest. A Generalized Method of Moments (GMM) algorithm is therefore proposed that ensures orthogonal shocks and provides efficient estimators of the structural parameters. It generalizes an earlier GMM proposal that works only if at least K − 1 shocks are identified by proxies in a VAR with K variables. |
Keywords: | Structural vector autoregression, proxy VAR, external instruments, correlated shocks, Generalized Method of Moments |
JEL: | C32 C36 E52 |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:uea:ueaeco:2024-05 |
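A schematic GMM in the spirit of the abstract: with r > 1 proxied shocks, the relevance/exogeneity moments E[z_t u_t'] = Φ B1' combined with orthonormality of the identified shocks over-identify the impact matrix B1. The lower-triangular normalization of Φ, the identity moment weighting, and the generic optimizer are our simplifications, not the authors' efficient algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def proxy_svar_gmm(u, z):
    """Schematic GMM for a proxy SVAR with r proxied shocks: match the
    proxy-residual moments E[z u'] = Phi B1' while imposing that the
    identified shocks are orthonormal, B1' Sigma_u^{-1} B1 = I_r.
    With r > 1 the conditions over-identify B1, echoing the abstract."""
    n, K = u.shape
    r = z.shape[1]
    S_zu = z.T @ u / n
    Sig_inv = np.linalg.inv(np.cov(u.T))
    tril = np.tril_indices(r)

    def moments(theta):
        B1 = theta[:K * r].reshape(K, r)
        Phi = np.zeros((r, r))
        Phi[tril] = theta[K * r:]               # lower-triangular normalization
        m1 = (S_zu - Phi @ B1.T).ravel()        # relevance + exogeneity
        m2 = (B1.T @ Sig_inv @ B1 - np.eye(r))[np.triu_indices(r)]  # orthonormal shocks
        return np.concatenate([m1, m2])

    theta0 = 0.5 * np.ones(K * r + r * (r + 1) // 2)
    fit = minimize(lambda th: moments(th) @ moments(th), theta0, method="BFGS")
    return fit.x[:K * r].reshape(K, r)          # impact effects of the r shocks

# Example: K = 3 variables, r = 2 shocks, each measured by a noisy proxy.
rng = np.random.default_rng(8)
n, K, r = 5000, 3, 2
eps = rng.standard_normal((n, K))
B = np.array([[1.0, 0.3, 0.2], [0.2, 1.0, 0.4], [0.0, 0.5, 1.0]])
u = eps @ B.T
z = eps[:, :r] * np.array([0.8, 0.6]) + 0.3 * rng.standard_normal((n, r))
print(proxy_svar_gmm(u, z))
```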
By: | Bogatyrev, Konstantin (Bocconi University); Stoetzer, Lukas |
Abstract: | Synthetic control methods are extensively utilized in political science for estimating counterfactual outcomes in case studies and difference-in-differences settings, and are often applied to model counterfactual proportional data. However, conventional synthetic control methods are designed for univariate outcomes, leading researchers to model counterfactuals for each proportion separately. This paper introduces an extension that handles multivariate proportional outcomes simultaneously. Our approach establishes constant control comparisons by using the same weights for each proportion, improving comparability while adhering to treatment constraints. Results from a simulation study and from the application of our method to data from a recently published article on campaign effects in the 2019 Spanish general election underscore the benefits of accounting for the interplay of proportional outcomes. This advancement extends the validity and reliability of synthetic control estimates to common outcomes in political science. |
Date: | 2024–07–12 |
URL: | https://d.repec.org/n?u=RePEc:osf:osfxxx:brhd3 |
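The key proposal, shared simplex weights across all proportion series, can be sketched in a few lines: one weight vector is chosen to fit every pre-treatment proportion series at once. The names and dimensions below are hypothetical, and the authors' estimator may add covariates and other refinements.

```python
import numpy as np
from scipy.optimize import minimize

def multivariate_synth(Y_treated, Y_donors):
    """Synthetic control with one shared weight vector for all proportions.

    Y_treated: (S, T0) pre-treatment outcomes over S proportion categories.
    Y_donors:  (J, S, T0) donor-pool outcomes.
    One simplex-constrained weight vector fits all S series jointly, so the
    synthetic proportions stay comparable across categories (and sum to one
    whenever the donor proportions do)."""
    J = Y_donors.shape[0]
    def loss(w):
        synth = np.tensordot(w, Y_donors, axes=1)   # (S, T0)
        return np.sum((Y_treated - synth) ** 2)
    cons = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
    fit = minimize(loss, np.full(J, 1.0 / J), bounds=[(0, 1)] * J,
                   constraints=cons, method="SLSQP")
    return fit.x

# Example: 3-party vote shares, 8 donor units, 10 pre-treatment periods.
rng = np.random.default_rng(9)
J, S, T0 = 8, 3, 10
Y_donors = rng.dirichlet(np.ones(S), size=(J, T0)).transpose(0, 2, 1)  # (J, S, T0)
w_true = rng.dirichlet(np.ones(J))
Y_treated = np.tensordot(w_true, Y_donors, axes=1)
print(np.round(multivariate_synth(Y_treated, Y_donors), 3))
```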
By: | Suguru Otani; Tohya Sugano |
Abstract: | We highlight that match fixed effects, represented by the coefficients of interaction terms involving dummy variables for two elements, lack identification without specific restrictions on parameters. Consequently, the coefficients typically reported as relative match fixed effects by statistical software are not interpretable. To address this, we establish normalization conditions that enable identification of match fixed effect parameters as interpretable indicators of unobserved match affinity, facilitating comparisons among observed matches. |
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2406.18913 |
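The identification problem is easy to reproduce numerically: match effects γ_ij are observationally equivalent up to additive row and column shifts absorbed by the two main effects. The double-demeaning normalization below is one valid choice of the kind of restriction the abstract describes; it pins γ down and makes matches comparable.

```python
import numpy as np

# Match fixed effects gamma[i, j] enter additively alongside main effects
# alpha[i] and beta[j].  The transform gamma + a[i] + b[j], alpha - a[i],
# beta - b[j] leaves every fitted value unchanged, so raw interaction
# coefficients reported by software are not interpretable on their own.
rng = np.random.default_rng(10)
I, J = 3, 4
alpha, beta = rng.standard_normal(I), rng.standard_normal(J)
gamma = rng.standard_normal((I, J))

a, b = rng.standard_normal(I), rng.standard_normal(J)
alpha2, beta2 = alpha - a, beta - b
gamma2 = gamma + a[:, None] + b[None, :]

mu1 = alpha[:, None] + beta[None, :] + gamma
mu2 = alpha2[:, None] + beta2[None, :] + gamma2
print(np.allclose(mu1, mu2))   # True: observationally equivalent parameters

# One normalization that identifies gamma: double-demean so it sums to
# zero within every row and column.
def normalize(g):
    g = g - g.mean(axis=1, keepdims=True)
    return g - g.mean(axis=0, keepdims=True)

print(np.allclose(normalize(gamma), normalize(gamma2)))  # True: comparable affinity
```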
By: | Lux, Thomas |
Abstract: | Identifiability of the parameters is an important precondition for consistent estimation of models designed to describe empirical phenomena. Nevertheless, many estimation exercises proceed without a preliminary investigation into the identifiability of their models. As a consequence, the estimates could be essentially meaningless, since convergence to the true parameters is then not guaranteed. We provide some evidence here that such a lack of identification is responsible for the inconclusive results reported in recent literature on parameter estimates for a certain class of nonlinear behavioral New Keynesian models. We also show that identifiability depends on subtle details of the model structure. Hence, a careful investigation of identifiability should precede any attempt at estimation of such models. |
Keywords: | Behavioral macro, identification, forecast heuristics |
JEL: | C53 E12 E32 |
Date: | 2024 |
URL: | https://d.repec.org/n?u=RePEc:zbw:cauewp:300523 |
By: | Christian Gourieroux; Quinlan Lee |
Abstract: | We introduce a class of relative error decomposition measures that are well-suited for the analysis of shocks in nonlinear dynamic models. They include the Forecast Relative Error Decomposition (FRED), the Forecast Error Kullback Decomposition (FEKD) and the Forecast Error Laplace Decomposition (FELD). These measures improve on the traditional Forecast Error Variance Decomposition (FEVD) because they account for nonlinear dependence in both the serial and cross-sectional senses. This is illustrated by applications to dynamic models for qualitative data, count data, stochastic volatility and cyberrisk. |
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2406.17708 |
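The FRED, FEKD, and FELD formulas are not given in the abstract, so the sketch below shows only the traditional FEVD baseline they are designed to improve upon: the share of the h-step forecast error variance of each variable attributable to each orthogonalized shock, computed for a simple two-variable VAR with statsmodels.

```python
import numpy as np
from statsmodels.tsa.api import VAR

# Simulate a bivariate VAR(1) and compute the classic (linear, Cholesky-
# orthogonalized) forecast error variance decomposition.
rng = np.random.default_rng(11)
n = 500
A = np.array([[0.5, 0.1], [0.2, 0.4]])
y = np.zeros((n, 2))
for t in range(1, n):
    y[t] = A @ y[t - 1] + rng.standard_normal(2)

res = VAR(y).fit(maxlags=1)
fevd = res.fevd(10)
print(fevd.decomp[0][-1])   # variance shares for variable 0 at horizon 10
```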
By: | Fitzgerald, Jack |
Abstract: | Researchers utilizing regression discontinuity design (RDD) commonly test for running variable (RV) manipulation around a cutoff, but incorrectly assert that insignificant manipulation test statistics are evidence of negligible manipulation. I introduce simple frequentist equivalence testing procedures that can provide statistically significant evidence that RV manipulation around a cutoff is practically equal to zero. I then demonstrate the necessity of these procedures, leveraging replication data from 36 RDD publications to conduct 45 equivalence-based RV manipulation tests. Over 44% of RV density discontinuities at the cutoff cannot be significantly bounded beneath a 50% upward jump. Bounding equivalence-based manipulation test failure rates beneath 5% requires arguing that a 350% upward density jump is practically equal to zero. Meta-analytic estimates reveal that average RV manipulation around the cutoff is equivalent to a 26% upward density jump. These results imply that many published RDD estimates may be confounded by discontinuities in potential outcomes due to RV manipulation that remains undetectable by existing tests. I provide research guidelines and commands in Stata and R to help researchers conduct more credible equivalence-based manipulation testing in future RDD research. |
Keywords: | McCrary density test, rddensity, DCdensity, Hartman test |
JEL: | C12 C18 C87 P00 |
Date: | 2024 |
URL: | https://d.repec.org/n?u=RePEc:zbw:i4rdps:136 |
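The proposed logic follows the standard two one-sided tests (TOST) template, sketched below for a log density ratio at the cutoff. The estimate and its standard error are taken as given (e.g., from an rddensity-style local polynomial density estimator); the margin delta and the normal approximation are illustrative choices, not necessarily the paper's exact procedure.

```python
import numpy as np
from scipy.stats import norm

def tost_density_jump(theta_hat, se, delta=0.5, alpha=0.05):
    """Two one-sided tests (TOST) bounding an RV density discontinuity.

    theta_hat: estimated log right/left density ratio at the cutoff,
    with standard error se (both assumed given).
    delta: relative equivalence margin; 0.5 means 'within a 50% jump'.
    Rejection gives significant evidence the jump lies inside the margin,
    which an insignificant McCrary-type test alone cannot establish."""
    eps = np.log(1.0 + delta)
    p_lower = 1.0 - norm.cdf((theta_hat + eps) / se)   # H0: theta <= -eps
    p_upper = norm.cdf((theta_hat - eps) / se)         # H0: theta >= +eps
    p = max(p_lower, p_upper)                          # TOST p-value
    return p, p < alpha

# Example: a small estimated jump with a moderate standard error.
print(tost_density_jump(theta_hat=0.05, se=0.10, delta=0.5))
```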
By: | Paolo Maranzano (Department of Economics, Management and Statistics, University of Milano-Bicocca and Fondazione Eni Enrico Mattei); Matteo Pelagatti (Department of Economics, Management and Statistics, University of Milano-Bicocca) |
Abstract: | The Hodrick-Prescott filter is a popular tool in macroeconomics for decomposing a time series into a smooth trend and a business cycle component. The last few years have witnessed global events, such as the Global Financial Crisis, the COVID-19 pandemic, and the war in Ukraine, that have had abrupt structural impacts on many economic time series. Moreover, new regulations and policy changes generally lead to similar behaviours. Such events should be absorbed by the trend component of the trend-cycle decomposition, but the Hodrick-Prescott filter does not allow for jumps. We propose a modification of the Hodrick-Prescott filter that allows for jumps and automatically selects the time points at which the jumps occur. We provide an efficient implementation of the new filter in an R package. We use our modified filter to assess which Italian labour market reforms affected employment in different age groups. |
Keywords: | Trend, State-space form, Unobserved component model, Structural change, LASSO, Business cycle, Employment |
JEL: | C22 C63 E32 J21 |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:fem:femwpa:2024.18 |
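One natural way to let the HP trend jump while selecting jump dates automatically is to add a level-shift component with an L1 (LASSO) penalty, sketched below with cvxpy. The authors implement their filter in an R package; this Python formulation and its tuning constants are our assumptions about the general idea, not their exact estimator.

```python
import numpy as np
import cvxpy as cp   # pip install cvxpy

def hp_filter_with_jumps(y, lam=1600.0, mu=5.0):
    """HP filter with level shifts: the trend is tau plus cumulative jumps,
    where an L1 penalty makes the jump sequence sparse, so jump dates are
    selected automatically (a sketch; the paper's formulation may differ)."""
    n = len(y)
    D2 = np.diff(np.eye(n), 2, axis=0)          # second-difference operator
    tau = cp.Variable(n)                        # smooth trend component
    jump = cp.Variable(n)                       # jump[t] = level shift at t
    trend = tau + cp.cumsum(jump)
    obj = (cp.sum_squares(y - trend)
           + lam * cp.sum_squares(D2 @ tau)     # HP smoothness penalty
           + mu * cp.norm1(jump))               # LASSO penalty selects jumps
    cp.Problem(cp.Minimize(obj)).solve()
    return trend.value, jump.value

# Example: smooth trend plus an abrupt break, as after a crisis or reform.
rng = np.random.default_rng(12)
n = 200
y = 0.02 * np.arange(n) + 2.0 * (np.arange(n) >= 120) + 0.3 * rng.standard_normal(n)
trend, jumps = hp_filter_with_jumps(y)
print(np.flatnonzero(np.abs(jumps) > 0.5))      # detected jump dates
```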
By: | Ermisch, John |
Abstract: | Estimation of relationships between a dependent variable constructed by the aggregation of individual behaviour and aggregate independent variables such as mean income is common. The aim and contribution of the paper are to clarify when and how parameter estimates based on aggregates lead to bias, and the likely degree of such bias. It demonstrates that use of aggregate data to estimate parameters associated with a model of individual behaviour when the outcome variable is binary (e.g. a birth) is not advisable. It only ‘works’ when the independent variables do not vary at the individual level (e.g. prices or the unemployment rate), and even then it requires prior distributional knowledge or assumptions. When the individual model also contains variables that vary across individuals, the analysis in the paper suggests that parameter estimates based solely on variation in the aggregates usually understate the size of their true values, even those associated with variables which do not vary over individuals. Indeed, it is often the case that the 95% confidence interval of these latter parameter estimates never contains the parameter’s true value. |
Date: | 2024–07–05 |
URL: | https://d.repec.org/n?u=RePEc:osf:socarx:3hrkp |
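The paper's warning is easy to illustrate by simulation: aggregate a binary-outcome (logit) model with an individual-level covariate and regress group means on group means. The design below is stylized and ours, not the paper's; the aggregate slopes it produces fall well short of the individual-level parameters, in line with the attenuation the abstract describes.

```python
import numpy as np
import statsmodels.api as sm

# A binary outcome driven by an individual-level covariate x and a
# group-level covariate g.  OLS on group means attenuates both slopes
# relative to the individual logit parameters (here both equal to 1).
rng = np.random.default_rng(13)
G, n_per = 200, 500
beta_x, beta_g = 1.0, 1.0
rows = []
for j in range(G):
    g = rng.normal()                              # group-level regressor
    x = rng.normal(loc=rng.normal(), size=n_per)  # varies across individuals
    p = 1.0 / (1.0 + np.exp(-(beta_x * x + beta_g * g)))
    y = rng.random(n_per) < p                     # individual binary outcomes
    rows.append([y.mean(), x.mean(), g])

agg = np.array(rows)
res = sm.OLS(agg[:, 0], sm.add_constant(agg[:, 1:])).fit()
print(res.params)   # aggregate slopes fall well short of beta_x = beta_g = 1
```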