on Econometrics
By: | Guohua Feng; Jiti Gao; Fei Liu; Bin Peng |
Abstract: | In this paper, we develop estimation and inferential methods for three-dimensional (3D) panel data models with homogeneous or heterogeneous coefficients. Our 3D panel data models specify the nature of common shocks through a hierarchical factor structure (i.e., global factors and sector factors). First, we develop an approach to estimating the hierarchy, which gives a better understanding of the relative importance of the two types of unobservable shocks. Second, we propose bias-corrected estimators and give bootstrap procedures for constructing confidence intervals for the parameters of interest while allowing for correlation along all three dimensions of the idiosyncratic errors. We justify the theoretical findings through extensive simulations. In an empirical study, we examine the twin hypotheses of conditional and unconditional convergence for manufacturing industries across countries. |
Keywords: | asymptotic theory, bias correction, dependent wild bootstrap, hierarchical model |
JEL: | C23 O10 L60 |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2023-20&r=ecm |
By: | Peter Reinhard Hansen; Yiyao Luo |
Abstract: | Time-varying volatility is an inherent feature of most economic time series, which causes standard correlation estimators to be inconsistent. The quadrant correlation estimator is consistent but very inefficient. We propose a novel subsampled quadrant estimator that improves efficiency while preserving consistency and robustness. This estimator is particularly well suited for high-frequency financial data, and we apply it to a large panel of US stocks. Our empirical analysis sheds new light on intraday fluctuations in market betas by decomposing them into time-varying correlations and relative volatility changes. Our results show that intraday variation in betas is primarily driven by intraday variation in correlations. |
Date: | 2023–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2310.19992&r=ecm |
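The abstract gives the idea but not the implementation; here is a minimal sketch of a block-subsampled quadrant correlation estimator, assuming elliptical (e.g. Gaussian) data so that the sign statistic maps back to a linear correlation via sin(πq/2). The block size and the contiguous-block subsampling scheme are illustrative choices, not the authors' design:

```python
import numpy as np

def quadrant_corr(x, y):
    # Sign correlation around the medians; robust to heavy tails.
    s = np.sign(x - np.median(x)) * np.sign(y - np.median(y))
    return np.mean(s)

def subsampled_quadrant_corr(x, y, block=250):
    # Average the quadrant statistic over contiguous blocks, so the
    # medians (and hence the signs) adapt to locally varying volatility.
    n = len(x)
    qs = [quadrant_corr(x[i:i + block], y[i:i + block])
          for i in range(0, n - block + 1, block)]
    q = np.mean(qs)
    # Map the sign correlation back to a linear correlation under an
    # elliptical model: rho = sin(pi * q / 2).
    return np.sin(np.pi * q / 2.0)

rng = np.random.default_rng(0)
n, rho = 5000, 0.6
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
# Inject a common time-varying volatility that distorts the plain
# sample correlation computed over the full sample.
scale = np.where(np.arange(n) < n // 2, 1.0, 10.0)
x, y = z[:, 0] * scale, z[:, 1] * scale
est = subsampled_quadrant_corr(x, y)
```

Scaling both series by a common volatility path leaves the within-block signs unchanged, which is the intuition for why quadrant-type estimators stay consistent under time-varying volatility.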
By: | Christis Katsouris |
Abstract: | This survey discusses the main aspects of optimal estimation methodologies for panel data regression models. In particular, we present current methodological developments for modeling stationary panel data, as well as robust methods for estimation and inference in nonstationary panel data regression models. Some applications from the network econometrics and high-dimensional statistics literature are also discussed within a stationary time series environment. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.03471&r=ecm |
By: | Alain Hecq; Daniel Velasquez-Gaviria |
Abstract: | This paper introduces new techniques for estimating, identifying and simulating mixed causal-noncausal invertible-noninvertible models. We propose a framework that integrates high-order cumulants, merging both the spectrum and bispectrum into a single estimation function. The model that most adequately represents the data under the assumption that the error term is i.i.d. is selected. Our Monte Carlo study reveals unbiased parameter estimates and a high frequency with which correct models are identified. We illustrate our strategy through an empirical analysis of returns from 24 Fama-French emerging market stock portfolios. The findings suggest that each portfolio displays noncausal dynamics, producing white noise residuals devoid of conditional heteroscedastic effects. |
Date: | 2023–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2310.19543&r=ecm |
By: | Scott Alan Carson; Wael M. Al-Sawai; Scott A. Carson |
Abstract: | Regression model error assumptions are essential to estimator properties. Least squares parameter estimates are consistent and efficient when the underlying error terms are normally distributed, but inefficient when they are not. Partially adaptive and M-estimation are alternatives to least squares when regression model errors are not normally distributed. The oil and gas industry, with its vertically integrated firms, is one sector where error mis-specification is consequential: equity returns are typically not normally distributed, and an inappropriate error distribution specification has a substantive effect on estimated capital costs. This study examines the equity returns of vertically integrated oil and gas producers, shows that their regression model errors are not normally distributed, and demonstrates that alternative estimators to least squares have desirable properties in this setting. |
Keywords: | partially adaptive regression models, oil and gas industry, Integrated Majors, vertical integration |
JEL: | G12 L71 L72 Q40 Q41 |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_10733&r=ecm |
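As a concrete instance of M-estimation for non-normal return errors, here is a sketch of a Huber M-estimator fitted by iteratively reweighted least squares. The tuning constant c = 1.345 and the MAD scale estimate are standard textbook choices, not taken from the paper, and the data-generating process is illustrative:

```python
import numpy as np

def huber_irls(x, y, c=1.345, n_iter=50):
    # Huber M-estimator via iteratively reweighted least squares (IRLS).
    # Residuals beyond c*s get weight c*s/|r| < 1, so heavy-tailed errors
    # do not dominate the fit the way they do under ordinary least squares.
    X = np.column_stack([np.ones_like(y), x])   # intercept + slope design
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745   # robust MAD scale
        w = np.minimum(1.0, c * s / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
e = rng.standard_t(df=2, size=n)   # heavy tails, mimicking return residuals
y = 1.0 + 2.0 * x + e
beta_huber = huber_irls(x, y)      # [intercept, slope]
```

With t(2) errors the sample variance is not even finite, yet the Huber fit recovers the coefficients; plain least squares on the same data is far noisier.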
By: | Russell Davidson; Andrea Monticini (Università Cattolica del Sacro Cuore; Dipartimento di Economia e Finanza, Università Cattolica del Sacro Cuore) |
Abstract: | The aim of this paper is to illustrate more than one instance of poor bootstrap performance, and to show how available diagnostic techniques can reliably indicate when and how such poor performance arises. Two particular features that seem important in explaining the bootstrap discrepancy are illustrated by Monte Carlo experiments. |
Keywords: | Bootstrap inference, fast double bootstrap, conditional fast double bootstrap, heteroskedasticity. |
JEL: | C12 C22 C32 |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:ctc:serie1:def130&r=ecm |
By: | Mark Kattenberg (CPB Netherlands Bureau for Economic Policy Analysis); Bas Scheer (CPB Netherlands Bureau for Economic Policy Analysis); Jurre Thiel (CPB Netherlands Bureau for Economic Policy Analysis) |
Abstract: | Recently developed heterogeneity-robust two-way fixed effects (TWFE) estimators do not quantify the full heterogeneity in treatment effects in a difference-in-differences research design. We therefore present a computationally feasible algorithm to estimate heterogeneous treatment effects in the presence of many fixed effects using causal forests. Our modification identifies treatment effects by partialling out fixed effects using group averages. Simulation results suggest that our algorithm provides consistent estimates of the Conditional Average Treatment Effect on the Treated in a (staggered) difference-in-differences research design. Finally, we use our method to document heterogeneity in the treatment effect of alternative work arrangements (payrolling) on hourly wages. We find evidence that wages fell by 3.7 percent in the first year of payrolling, but only for a specific subgroup of workers. Neither conclusion emerged from a conventional heterogeneity analysis using manual subgroups. The R code of our algorithm is publicly available online. |
JEL: | C18 C23 C88 |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:cpb:discus:452&r=ecm |
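The paper's algorithm uses causal forests; the following is only a hedged sketch of the partialling-out step it describes, demeaning outcome and treatment by unit-level group averages and then running residual-on-residual regressions within manually chosen covariate cells (a forest would replace the manual split; all names and the data-generating process are illustrative):

```python
import numpy as np

def demean_by_group(v, g):
    # Partial out a fixed effect by subtracting group averages --
    # the within transformation behind TWFE-style estimators.
    means = {k: v[g == k].mean() for k in np.unique(g)}
    return v - np.array([means[k] for k in g])

rng = np.random.default_rng(2)
n = 4000
unit = rng.integers(0, 200, size=n)            # unit identifiers
x = rng.integers(0, 2, size=n)                 # covariate driving heterogeneity
d = rng.integers(0, 2, size=n).astype(float)   # treatment indicator
alpha = rng.normal(size=200)[unit]             # unit fixed effect
tau = np.where(x == 1, 2.0, 0.5)               # heterogeneous treatment effect
y = alpha + tau * d + rng.normal(scale=0.5, size=n)

# Residualize outcome and treatment on the unit fixed effect, then run a
# residual-on-residual regression within each covariate cell.  A causal
# forest would discover the cells instead of this manual split.
y_t = demean_by_group(y, unit)
d_t = demean_by_group(d, unit)
tau_hat = {k: (d_t[x == k] @ y_t[x == k]) / (d_t[x == k] @ d_t[x == k])
           for k in (0, 1)}
```

The within transformation removes the unit effect alpha without estimating 200 dummies, which is what makes the approach feasible with many fixed effects.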
By: | Daido Kido |
Abstract: | This study investigates the problem of individualizing treatment allocations using stated preferences for treatments. If individuals know in advance how the assignment will be individualized based on their stated preferences, they may state false preferences. We derive an individualized treatment rule (ITR) that maximizes welfare when individuals strategically state their preferences. We also show that the optimal ITR is strategy-proof, that is, individuals do not have a strong incentive to lie even if they know the optimal ITR a priori. Constructing the optimal ITR requires information on the distribution of true preferences and the average treatment effect conditional on true preferences. In practice, this information must be identified and estimated from the data. As true preferences are hidden information, the identification is not straightforward. We discuss two experimental designs that allow identification: strictly strategy-proof randomized controlled trials and doubly randomized preference trials. Under the presumption that the data come from one of these experiments, we develop data-dependent procedures for determining the ITR, that is, statistical treatment rules (STRs). The maximum regret of the proposed STRs converges to zero at a rate of the square root of the sample size. An empirical application demonstrates our proposed STRs. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.08963&r=ecm |
By: | Michael P. Leung |
Abstract: | Cluster-randomized trials often involve units that are irregularly distributed in space without well-separated communities. In these settings, cluster construction is a critical aspect of the design due to the potential for cross-cluster interference. The existing literature relies on partial interference models, which take clusters as given and assume no cross-cluster interference. We relax this assumption by allowing interference to decay with geographic distance between units. This induces a bias-variance trade-off: constructing fewer, larger clusters reduces bias due to interference but increases variance. We propose new estimators that exclude units most potentially impacted by cross-cluster interference and show that this substantially reduces asymptotic bias relative to conventional difference-in-means estimators. We then study the design of clusters to optimize the estimators' rates of convergence. We provide formal justification for a new design that chooses the number of clusters to balance the asymptotic bias and variance of our estimators and uses unsupervised learning to automate cluster construction. |
Date: | 2023–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2310.18836&r=ecm |
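A minimal sketch of the boundary-trimming idea: units on a line, contiguous equal-width clusters, and exposure defined as the treated share of neighbours within a fixed radius. These are all simplifications for illustration, not the paper's design or estimator:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000
pos = np.sort(rng.uniform(0, 100, size=n))       # unit locations on a line

# Ten contiguous spatial clusters of width 10, half assigned to treatment.
width, n_clusters = 10.0, 10
cluster = np.minimum((pos / width).astype(int), n_clusters - 1)
arm = rng.permutation(np.repeat([0.0, 1.0], n_clusters // 2))
d = arm[cluster]

# Interference decays with distance: a unit's exposure is the treated share
# among neighbours within a radius, not just its own cluster's assignment.
radius, tau = 2.0, 1.0
expo = np.array([d[np.abs(pos - p) < radius].mean() for p in pos])
y = tau * expo + rng.normal(scale=0.5, size=n)

# Naive difference in means over all units is attenuated by cross-cluster
# spillovers near boundaries.
est_naive = y[d == 1].mean() - y[d == 0].mean()

# Trimmed estimator: drop units within distance h of a cluster boundary,
# where interference from the neighbouring cluster is strongest.
bounds = np.arange(0.0, 100.0 + width, width)
dist_to_bound = np.abs(pos[:, None] - bounds[None, :]).min(axis=1)
keep = dist_to_bound > radius
est_trim = y[keep & (d == 1)].mean() - y[keep & (d == 0)].mean()
```

Trimming trades observations for bias reduction, which is exactly the bias-variance trade-off the paper's cluster design then optimizes.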
By: | Zuckerman, Daniel |
Abstract: | Bayesian inference (BI) has long been used to estimate posterior parameter distributions based on multiple sets of data. Here, we present elementary derivations of two strategies for doing so. The first approach employs the posterior distribution from a BI calculation on one dataset as the prior distribution for a second dataset. The second approach uses importance sampling, augmenting the posterior from one dataset to form a well-targeted sampling function for the second. In both cases, the distribution sampled is shown to be the "full posterior" as if BI were performed on the two datasets together, subject to only mild assumptions. Both methods can be applied in sequence to multiple datasets. |
Date: | 2023–11–16 |
URL: | http://d.repec.org/n?u=RePEc:osf:osfxxx:hv7yd&r=ecm |
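The first strategy is easy to verify in a conjugate special case: for a normal mean with known variance, using the posterior from one dataset as the prior for the next reproduces the full posterior exactly. This is a sketch of that standard conjugate result, not the paper's derivation:

```python
import numpy as np

def normal_update(mu0, tau0_sq, data, sigma_sq):
    # Conjugate update for a normal mean with known data variance:
    # precisions add, and precision-weighted means combine.
    n = len(data)
    prec = 1.0 / tau0_sq + n / sigma_sq
    mu = (mu0 / tau0_sq + data.sum() / sigma_sq) / prec
    return mu, 1.0 / prec

rng = np.random.default_rng(3)
d1 = rng.normal(5.0, 1.0, size=50)
d2 = rng.normal(5.0, 1.0, size=80)

# Strategy 1: the posterior from dataset 1 becomes the prior for dataset 2.
m1, v1 = normal_update(0.0, 100.0, d1, 1.0)
m_seq, v_seq = normal_update(m1, v1, d2, 1.0)

# "Full posterior": both datasets analysed together from the original prior.
m_full, v_full = normal_update(0.0, 100.0, np.concatenate([d1, d2]), 1.0)
```

The two routes agree to floating-point precision, which is the conjugate-family instance of the paper's general claim.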
By: | Rüttenauer, Tobias |
Abstract: | This handbook chapter provides an essential introduction to the field of spatial econometrics, offering a comprehensive overview of techniques and methodologies for analysing spatial data in the social sciences. Spatial econometrics addresses the unique challenges posed by spatially dependent observations, where spatial relationships among data points can significantly impact statistical analyses. The chapter begins by exploring the fundamental concepts of spatial dependence and spatial autocorrelation, highlighting their implications for traditional econometric models. It then introduces a range of spatial econometric models, particularly spatial lag, spatial error, and spatial lag of X models, illustrating how these models accommodate spatial relationships and yield accurate and insightful results about the underlying spatial processes. The chapter provides an intuitive understanding of how these models compare to each other. A practical example on London house prices demonstrates the application of spatial econometrics, emphasising its relevance in uncovering hidden spatial patterns, addressing endogeneity, and providing robust estimates in the presence of spatial dependence. |
Date: | 2023–11–16 |
URL: | http://d.repec.org/n?u=RePEc:osf:socarx:mq7te&r=ecm |
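A small sketch of the spatial lag (SAR) model the chapter introduces, using a row-normalised nearest-neighbour weight matrix on a line as a stand-in for a real contiguity matrix. The reduced form makes explicit why OLS that ignores the lag faces a simultaneity problem: y appears on both sides of the structural equation.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100

# Row-normalised nearest-neighbour weight matrix on a line of n units
# (a toy stand-in for a contiguity matrix of spatial units).
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            W[i, j] = 1.0
W = W / W.sum(axis=1, keepdims=True)

rho, beta = 0.5, 2.0
X = rng.normal(size=n)
eps = rng.normal(size=n)

# Spatial lag (SAR) model: y = rho*W*y + X*beta + eps.  The reduced form
# resolves the simultaneity: y = (I - rho*W)^{-1} (X*beta + eps).
y = np.linalg.solve(np.eye(n) - rho * W, beta * X + eps)

# Naive OLS on X alone leaves spatially autocorrelated residuals,
# which Moran's I (with row-normalised W) would pick up.
b_ols = (X @ y) / (X @ X)
resid = y - b_ols * X
morans_I = (resid @ W @ resid) / (resid @ resid)
```

The multiplier (I − ρW)⁻¹ is also why effects of a covariate change spill over to neighbours of neighbours, a point the chapter's model comparison turns on.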
By: | Annika Camehl (Erasmus University Rotterdam); Tomasz Woźniak (University of Melbourne) |
Abstract: | We propose a new Bayesian heteroskedastic Markov-switching structural vector autoregression with data-driven time-varying identification. The model selects alternative exclusion restrictions over time and, as a condition for the search, allows us to verify identification through heteroskedasticity within each regime. Based on four alternative monetary policy rules, we show that a monthly six-variable system supports time variation in US monetary policy shock identification. In the sample-dominating first regime, systematic monetary policy follows a Taylor rule extended by the term spread and is effective in curbing inflation. In the second regime, occurring after 2000 and gaining more persistence after the global financial and COVID crises, the Fed acts according to a money-augmented Taylor rule. This regime's unconventional monetary policy provides economic stimulus, features the liquidity effect, and is complemented by a pure term spread shock. Absent the specific monetary policy of the second regime, inflation would have been over one percentage point higher on average after 2008. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.05883&r=ecm |
By: | Zhao, T.; Sutton, M.; Meacock, M. |
Abstract: | Compositional variables such as proportions by age group are commonly included as covariates in aggregate-level health research. Since these proportions sum to one and only contain relative information, directly including them as covariates violates the fundamental assumptions made in linear regression analysis. We explain the compositional nature of such data and, using practice-level elective admissions rates in England as an example outcome variable, demonstrate the consequences of directly using proportions in regressions. We also provide an overview of compositional data analysis (CoDA) techniques with a focus on isometric log-ratio (ILR) transformation. Applying ILR to our example data shows that the regression results can differ significantly from those obtained using raw proportions. Health economists should apply appropriate CoDA methods when using compositional data in their research. |
Keywords: | compositional data; CoDA; age group proportion; isometric log-ratio; |
JEL: | C18 C13 I10 |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:yor:hectdg:23/16&r=ecm |
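A sketch of the ILR transformation for a 3-part composition (e.g. shares by age group), using pivot (Helmert-type) coordinates, one of several equivalent orthonormal bases; the example shares are made up:

```python
import numpy as np

def ilr(P):
    # Isometric log-ratio transform for D-part compositions using a
    # pivot-coordinate (Helmert-type) orthonormal basis.
    P = np.asarray(P, dtype=float)
    D = P.shape[1]
    L = np.log(P)
    Z = np.empty((P.shape[0], D - 1))
    for j in range(1, D):
        # j-th coordinate: balance of the first j parts against part j+1.
        Z[:, j - 1] = np.sqrt(j / (j + 1)) * (L[:, :j].mean(axis=1) - L[:, j])
    return Z

# Shares by age group (rows sum to one).  The D-1 ILR coordinates are
# unconstrained real numbers, so they can enter a linear regression
# without violating its assumptions the way raw proportions do.
P = np.array([[0.2, 0.3, 0.5],
              [0.1, 0.6, 0.3]])
Z = ilr(P)
```

Because ILR depends only on log ratios, it is invariant to rescaling a composition, which is exactly the "relative information only" property the abstract emphasises.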
By: | Nagengast, Arne (Deutsche Bundesbank); Yotov, Yoto (Drexel University) |
Abstract: | We nest an extended two-way fixed effect (ETWFE) estimator for staggered difference-in-differences within the structural gravity model. To test the ETWFE, we estimate the effects of regional trade agreements (RTAs). The results suggest that RTA estimates in the current gravity literature may be biased downward (by more than 50% in our sample). Sensitivity analyses confirm the robustness of our main findings and demonstrate the applicability of our methods in different settings. We expect the ETWFE methods to have significant implications for the estimates of other policy variables in the trade literature and for gravity regressions on migration and FDI flows. |
Keywords: | Staggered Difference-in-Differences; Gravity Model; Trade Agreements |
JEL: | C13 C23 F10 F13 F14 |
Date: | 2023–11–06 |
URL: | http://d.repec.org/n?u=RePEc:ris:drxlwp:2023_006&r=ecm |
By: | Chen, Ying; Grith, Maria; Lai, Hannah L. H. |
Abstract: | Implied volatility (IV) forecasting is inherently challenging due to its high dimensionality across moneyness and maturity and its nonlinearity in both spatial and temporal aspects. We utilize implied volatility surfaces (IVS) to represent comprehensive spatial dependence and model the nonlinear temporal dependencies within a series of IVS. Leveraging advanced kernel-based machine learning techniques, we introduce the functional Neural Tangent Kernel (fNTK) estimator within the Nonlinear Functional Autoregression framework, specifically tailored to capture intricate relationships within implied volatilities. We establish the connection between fNTK and kernel regression, emphasizing its role in contemporary nonparametric statistical modeling. Empirically, we analyze S&P 500 Index options from January 2009 to December 2021, encompassing more than 6 million European calls and puts, thereby showcasing the superior forecast accuracy of fNTK. We demonstrate the significant economic value of an accurate implied volatility forecaster within trading strategies. Notably, short delta-neutral straddle trading supported by fNTK achieves a Sharpe ratio ranging from 1.45 to 2.02, a relative enhancement in trading outcomes ranging from 77% to 583%. |
Keywords: | Implied Volatility Surfaces; Neural Networks; Neural Tangent Kernel; Implied Volatility Forecasting; Nonlinear Functional Autoregression; Option Trading Strategies |
JEL: | C14 C45 C58 G11 G13 G17 |
Date: | 2023–10–24 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:119022&r=ecm |
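The abstract links fNTK to kernel regression. As a generic stand-in (not the fNTK itself), here is kernel ridge regression with an RBF kernel on toy one-dimensional data; the bandwidth and ridge penalty are arbitrary choices for this example:

```python
import numpy as np

def kernel_ridge_fit_predict(X, y, Xs, lam=1e-2, gamma=10.0):
    # Kernel ridge regression: solve (K + lam*I) alpha = y on the training
    # kernel matrix, then predict with the cross-kernel.  fNTK-style
    # estimators reduce to kernel regression of exactly this shape,
    # with the NTK in place of the RBF kernel used here.
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    K = k(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return k(Xs, X) @ alpha

rng = np.random.default_rng(6)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=200)   # noisy smooth target
Xs = np.linspace(-1, 1, 50)[:, None]
pred = kernel_ridge_fit_predict(X, y, Xs)
```

The ridge term lam controls the usual fit-versus-smoothness trade-off; in the NTK view it plays the role of early stopping in network training.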