
on Econometrics 
By:  Susana Campos-Martins (University of Oxford, University of Minho and NIPE); Cristina Amado (University of Minho and NIPE, CREATES and Aarhus University)
Abstract:  In this paper, we propose an additive time-varying (or partially time-varying) multivariate model of volatility, where a time-dependent component is added to the extended vector GARCH process for modelling the dynamics of volatility interactions. In our framework, codependence in volatility is allowed to change smoothly between two extreme states, and second-moment interdependence is identified from these crisis-contingent structural changes. Estimation of the new time-varying vector GARCH process is simplified by using an equation-by-equation estimator for the volatility equations in the first step and estimating the correlation matrix in the second step. A new Lagrange multiplier test is derived for testing the null hypothesis of constant volatility codependence against smoothly time-varying interdependence between financial markets. The test appears to be a useful statistical tool for evaluating the adequacy of GARCH equations by testing for the presence of significant changes in cross-market volatility transmissions. Monte Carlo simulation experiments show that the test statistic has satisfactory empirical properties in finite samples. An application to sovereign bond yield returns illustrates the modelling strategy of the new specification.
Keywords:  Multivariate time-varying GARCH; Volatility spillovers; Time-variation; Lagrange multiplier test; Financial market interdependence.
JEL:  C12 C13 C32 C51 G15 
Date:  2021 
URL:  http://d.repec.org/n?u=RePEc:nip:nipewp:12/2021&r= 
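The smooth transition between two extreme volatility states described in the abstract above is typically driven by a logistic function of rescaled time. Here is a minimal sketch of that device; the function names and parameter values are illustrative, not taken from the paper:

```python
import numpy as np

def logistic_transition(t, gamma, c):
    """Smooth transition function G(t; gamma, c) in [0, 1].

    t     : rescaled time in [0, 1]
    gamma : slope (speed of transition between the two extreme states)
    c     : location of the transition midpoint
    """
    return 1.0 / (1.0 + np.exp(-gamma * (t - c)))

def time_varying_coefficient(t, a_calm, a_crisis, gamma, c):
    """Volatility-interaction coefficient shifting smoothly between regimes."""
    g = logistic_transition(t, gamma, c)
    return (1.0 - g) * a_calm + g * a_crisis

t = np.linspace(0.0, 1.0, 200)
coef = time_varying_coefficient(t, a_calm=0.05, a_crisis=0.30, gamma=25.0, c=0.5)
```

A constant-coefficient model is nested when the two states coincide, which is the kind of null hypothesis the paper's Lagrange multiplier test targets.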
By:  Lujia Bai; Weichi Wu 
Abstract:  We consider the problem of testing for long-range dependence in time-varying coefficient regression models. The covariates and errors are assumed to be locally stationary, which allows for complex temporal dynamics and heteroscedasticity. We develop KPSS, R/S, V/S, and K/S-type statistics based on the nonparametric residuals, and propose bootstrap approaches equipped with a difference-based long-run covariance matrix estimator for practical implementation. Under the null hypothesis, local alternatives, and fixed alternatives, we derive the limiting distributions of the test statistics, establish the uniform consistency of the difference-based long-run covariance estimator, and justify the bootstrap algorithms theoretically. In particular, the exact local asymptotic power of our testing procedure enjoys the order $O(\log^{-1} n)$, the same as that of the classical KPSS test for long memory in strictly stationary series without covariates. We demonstrate the effectiveness of our tests through extensive simulation studies. The proposed tests are applied to a COVID-19 dataset, finding evidence in favor of long-range dependence in the cumulative confirmed case series of several countries, and to the Hong Kong circulatory and respiratory dataset, identifying a new type of 'spurious long memory'.
Date:  2021–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2110.08089&r= 
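As a concrete illustration of the residual-based statistics the abstract describes, here is a KPSS-type statistic computed from partial sums of demeaned residuals. This is a simplified sketch: it plugs in a naive sample variance where the paper uses a difference-based long-run covariance estimator, and it ignores local stationarity of the regressors:

```python
import numpy as np

def kpss_statistic(resid):
    """KPSS-type statistic on a residual series.

    Partial sums of the demeaned residuals, normalised by n^2 times a
    variance estimate.  A naive sample variance stands in for the paper's
    difference-based long-run covariance estimator (illustration only).
    """
    e = np.asarray(resid, dtype=float)
    e = e - e.mean()
    n = e.size
    s = np.cumsum(e)                 # partial-sum process S_t
    sigma2 = np.mean(e ** 2)         # placeholder for the long-run variance
    return np.sum(s ** 2) / (n ** 2 * sigma2)
```

Under short memory the statistic stays bounded; under long-range dependence (or a stochastic trend) it diverges, which is the basis for rejecting the null.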
By:  Jiti Gao; Bin Peng; Yayi Yan 
Abstract:  Multivariate dynamic models are widely used in practical studies, providing a tractable way to capture evolving interrelationships among multivariate time series, but few studies focus on inference. Along this line, a key question is whether some coefficients (if not all) evolve with time. To settle this issue, the paper develops a Wald-type test statistic for detecting time-invariant parameters in a class of multivariate dynamic time-varying models. Since Gaussian/stationary approximation methods initially proposed for univariate time series settings are inapplicable to the setting considered in this paper, we develop an approximation method using a time-varying vector moving average (VMA(∞)) process. We show that the test statistic is asymptotically normal under both the null hypothesis and the local alternative. Simulation studies show that the proposed test has desirable finite sample performance.
Keywords:  multivariate time series, parameter instability, specification testing, time-varying coefficient
JEL:  C12 C14 C32 
Date:  2021 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:202111&r= 
By:  Wei Wang; Xiaodong Yan; Yanyan Ren; Zhijie Xiao 
Abstract:  Heterogeneous panel data models that allow the coefficients to vary across individuals and/or change over time have received increasing attention in statistics and econometrics. This paper proposes a two-dimensional heterogeneous panel regression model that incorporates a group structure of individual heterogeneous effects with cohort formation for their time variations, allowing common coefficients between non-adjacent time points. A bi-integrative procedure that detects the group and cohort patterns simultaneously via doubly penalized least squares with concave fused penalties is introduced. We use an alternating direction method of multipliers (ADMM) algorithm that automatically bi-integrates the two-dimensional heterogeneous panel data model into a common one. Consistency and asymptotic normality of the proposed estimators are established. We show that the resulting estimators exhibit oracle properties, i.e., the proposed estimator is asymptotically equivalent to the oracle estimator obtained using the known group and cohort structures. Furthermore, simulation studies provide supportive evidence that the proposed method has good finite sample performance. A real-data empirical application is provided to highlight the proposed method.
Date:  2021–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2110.10480&r= 
By:  Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244); Georges Bresson (Department of Economics, Université Paris II, France); Anoop Chaturvedi (Department of Statistics, University of Allahabad, India); Guy Lacroix (Département d'économique, Université Laval, Québec, Canada)
Abstract:  This paper extends the work of Baltagi et al. (2018) to the popular dynamic panel data model. We investigate the robustness of Bayesian panel data models to possible misspecification of the prior distribution. The proposed robust Bayesian approach departs from the standard Bayesian framework in two ways. First, we consider the ε-contamination class of prior distributions for the model parameters as well as for the individual effects. Second, both the base elicited priors and the ε-contamination priors use Zellner (1986)'s g-priors for the variance-covariance matrices. We propose a general "toolbox" for a wide range of specifications which includes the dynamic panel model with random effects, with cross-correlated effects à la Chamberlain, for the Hausman-Taylor world, and for dynamic panel data models with homogeneous/heterogeneous slopes and cross-sectional dependence. Using a Monte Carlo simulation study, we compare the finite sample properties of our proposed estimator with those of standard classical estimators. The paper contributes to the dynamic panel data literature by proposing a general robust Bayesian framework which encompasses the conventional frequentist specifications and their associated estimation methods as special cases.
Keywords:  Dynamic Model, ε-Contamination, g-Priors, Type-II Maximum Likelihood Posterior Density, Panel Data, Robust Bayesian Estimator, Two-Stage Hierarchy
JEL:  C11 C23 C26 
Date:  2021–10 
URL:  http://d.repec.org/n?u=RePEc:max:cprwps:240&r= 
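The ε-contamination class referenced above mixes an elicited base prior with an arbitrary contaminating distribution. A minimal sketch of one member of the class; the normal densities chosen here are purely illustrative:

```python
import math

def eps_contamination_pdf(theta, eps, base_pdf, contam_pdf):
    """Density of an eps-contamination class member:
    pi(theta) = (1 - eps) * pi0(theta) + eps * q(theta)."""
    return (1.0 - eps) * base_pdf(theta) + eps * contam_pdf(theta)

def normal_pdf(x, mu, sigma):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

# Elicited base prior N(0, 1) contaminated by a diffuse N(0, 10) alternative.
base = lambda th: normal_pdf(th, 0.0, 1.0)
contam = lambda th: normal_pdf(th, 0.0, 10.0)
value = eps_contamination_pdf(0.0, eps=0.1, base_pdf=base, contam_pdf=contam)
```

Setting eps = 0 recovers the elicited prior exactly, which is how the class quantifies robustness to prior misspecification.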
By:  Jean-Pierre Florens; Anna Simoni
Abstract:  This paper studies the role played by identification in the Bayesian analysis of statistical and econometric models. First, for unidentified models we demonstrate that there are situations where the introduction of a nondegenerate prior distribution can make a parameter that is non-identified in frequentist theory identified in Bayesian theory. In other situations, it is preferable to work with the unidentified model and construct a Markov Chain Monte Carlo (MCMC) algorithm for it instead of introducing identifying assumptions. Second, for partially identified models we demonstrate how to construct the prior and posterior distributions for the identified set and how to conduct Bayesian analysis. Finally, for models that contain some parameters that are identified and others that are not, we show that marginalizing the identified parameter out of the likelihood with respect to its conditional prior, given the non-identified parameter, allows the data to be informative about the non-identified and partially identified parameters. The paper provides examples and simulations that illustrate how to implement our techniques.
Date:  2021–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2110.09954&r= 
By:  Haoge Chang; Joel Middleton; Peter Aronow 
Abstract:  In an influential critique of empirical practice, Freedman \cite{freedman2008A,freedman2008B} showed that the linear regression estimator was biased for the analysis of randomized controlled trials under the randomization model. Under Freedman's assumptions, we derive exact closed-form bias corrections for the linear regression estimator with and without treatment-by-covariate interactions. We show that the limiting distribution of the bias-corrected estimator is identical to that of the uncorrected estimator, implying that the asymptotic gains from adjustment can be attained without introducing any risk of bias. Taken together with results from Lin \cite{lin2013agnostic}, our results show that Freedman's theoretical arguments against the use of regression adjustment can be completely resolved with minor modifications to practice.
Date:  2021–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2110.08425&r= 
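The estimator being corrected above is the regression adjustment with treatment-by-covariate interactions. Here is a sketch of that (uncorrected) interacted estimator, following the Lin-style specification the paper builds on; variable names and the simulated design in the usage below are illustrative:

```python
import numpy as np

def interacted_adjustment_ate(y, d, x):
    """Regression adjustment with treatment-by-covariate interactions:
    regress y on [1, d, x - xbar, d*(x - xbar)].  The coefficient on d
    estimates the average treatment effect.  (The paper derives an exact
    bias correction for this estimator; only the uncorrected interacted
    regression is sketched here.)"""
    y = np.asarray(y, float)
    d = np.asarray(d, float)
    x = np.asarray(x, float)
    if x.ndim == 1:
        x = x[:, None]
    xc = x - x.mean(axis=0)          # centre the covariates
    design = np.column_stack([np.ones_like(d), d, xc, d[:, None] * xc])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta[1]
```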
By:  Dhruv Rohatgi; Vasilis Syrgkanis 
Abstract:  For many inference problems in statistics and econometrics, the unknown parameter is identified by a set of moment conditions. A generic method of solving moment conditions is the Generalized Method of Moments (GMM). However, classical GMM estimation is potentially very sensitive to outliers. Robustified GMM estimators have been developed in the past, but suffer from several drawbacks: computational intractability, poor dimension-dependence, and no quantitative recovery guarantees in the presence of a constant fraction of outliers. In this work, we develop the first computationally efficient GMM estimator (under intuitive assumptions) that can tolerate a constant $\epsilon$ fraction of adversarially corrupted samples, and that has an $\ell_2$ recovery guarantee of $O(\sqrt{\epsilon})$. To achieve this, we draw upon and extend a recent line of work on algorithmic robust statistics for related but simpler problems such as mean estimation, linear regression and stochastic optimization. As two examples of the generality of our algorithm, we show how our estimation algorithm and assumptions apply to instrumental variables linear and logistic regression. Moreover, we experimentally validate that our estimator outperforms classical IV regression and two-stage Huber regression on synthetic and semi-synthetic datasets with corruption.
Date:  2021–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2110.03070&r= 
By:  Jad Beyhum 
Abstract:  This note develops a simple two-stage least squares (2SLS) procedure to estimate the causal effect of endogenous regressors on a randomly right-censored outcome in the linear model. The proposal replaces the usual ordinary least squares regressions of standard 2SLS with weighted least squares regressions, where the weights correspond to the inverse probability of censoring. We show consistency and asymptotic normality of the estimator. The estimator exhibits good finite sample performance in simulations.
Date:  2021–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2110.05107&r= 
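The note's procedure amounts to running both stages of 2SLS as weighted least squares. A sketch with the weights taken as given; the paper constructs them as inverse probabilities of censoring (e.g. from a survival-curve estimate of the censoring distribution), which is not reproduced here:

```python
import numpy as np

def wls(X, y, w):
    """Weighted least squares via square-root-weight rescaling."""
    sw = np.sqrt(np.asarray(w, float))
    Xw = X * sw[:, None]
    yw = y * (sw[:, None] if np.ndim(y) == 2 else sw)
    beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    return beta

def weighted_2sls(y, X, Z, w):
    """2SLS with both stages run as weighted least squares.
    In the paper, w are inverse-probability-of-censoring weights."""
    pi = wls(Z, X, w)                # first stage: regressors X on instruments Z
    X_hat = Z @ pi
    return wls(X_hat, y, w)          # second stage: y on fitted regressors
```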
By:  Joshua Angrist; Michal Kolesár
Abstract:  Two-stage least squares estimates in heavily overidentified instrumental variables (IV) models can be misleadingly close to the corresponding ordinary least squares (OLS) estimates when many instruments are weak. Just-identified (just-ID) IV estimates using a single instrument are also biased, but the importance of weak-instrument bias in just-ID IV applications remains contentious. We argue that in microeconometric applications, just-ID IV estimators can typically be treated as all but unbiased and that the usual inference strategies are likely to be adequate. The argument begins with contour plots of confidence interval coverage as a function of instrument strength and explanatory variable endogeneity. These show undercoverage in excess of 5\% only for endogeneity beyond that seen even when IV and OLS estimates differ by an order of magnitude. Three widely cited microeconometric applications are used to explain why endogeneity is likely low enough for IV estimates to be reliable. We then show that an estimator that is unbiased given a population first-stage sign restriction has bias exceeding that of IV when the restriction is imposed on the data. But screening on the sign of the estimated first stage is shown to halve the median bias of conventional IV without reducing coverage. To the extent that sign-screening is already part of empirical workflows, reported IV estimates enjoy the minimal bias of sign-screened just-ID IV.
Date:  2021–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2110.10556&r= 
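The sign-screening rule discussed above is easy to state in code: keep a just-identified IV estimate only when the estimated first stage has the theoretically expected sign. A sketch for a single instrument and regressor; names and the simulated design are illustrative:

```python
import numpy as np

def just_id_iv(y, x, z):
    """Just-identified IV (Wald) estimate and the first-stage slope."""
    zc = z - z.mean()
    xc = x - x.mean()
    first_stage = zc @ xc / (zc @ zc)
    iv = zc @ (y - y.mean()) / (zc @ xc)
    return iv, first_stage

def sign_screened_iv(y, x, z, expected_sign=1.0):
    """Report the IV estimate only when the estimated first stage has
    the expected sign, as in the screening rule discussed above."""
    iv, fs = just_id_iv(y, x, z)
    return iv if fs * expected_sign > 0 else None
```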
By:  Armin Pourkhanali; Jonathan Keith; Xibin Zhang 
Abstract:  This paper proposes using Chebyshev polynomials to approximate the time-varying parameters of a GARCH model, where the polynomial coefficients are estimated via numerical optimization using the function gradient descent method. We investigate the asymptotic properties of the estimates of the polynomial coefficients and the subsequent estimate of the conditional variance. Monte Carlo studies are conducted to examine the performance of the proposed polynomial approximation. In empirical studies modelling daily returns of the US 30-year T-bond closing price and of the gold futures closing price, we find that in terms of in-sample fitting and out-of-sample forecasting, our proposed time-varying model outperforms the constant-parameter counterpart and a benchmark time-varying model.
Keywords:  Chebyshev polynomials, function gradient descent algorithm, loss function, one-day-ahead forecast
JEL:  C14 C58 
Date:  2021 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:202115&r= 
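The approximating device the paper uses is readily available in numpy. A sketch of evaluating a time-varying parameter as a truncated Chebyshev expansion over rescaled time; the coefficient values below are illustrative, whereas the paper estimates them by its function gradient descent method:

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

def time_varying_parameter(n_obs, coefs):
    """Evaluate a time-varying GARCH parameter as a Chebyshev expansion.

    Time t = 1..n_obs is rescaled to [-1, 1], the natural domain of
    Chebyshev polynomials; `coefs` are the expansion coefficients."""
    tau = 2.0 * (np.arange(1, n_obs + 1) - 1) / (n_obs - 1) - 1.0
    return cheb.chebval(tau, coefs)

# e.g. a parameter path over 250 trading days from three coefficients
param_path = time_varying_parameter(250, [0.05, 0.02, -0.01])
```

A single coefficient reproduces the constant-parameter model, so the expansion order controls how much time variation is allowed.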
By:  Jelena Bradic; Weijie Ji; Yuqian Zhang 
Abstract:  This paper proposes a confidence interval construction for heterogeneous treatment effects in the context of multi-stage experiments with $N$ samples and high-dimensional, $d$, confounders. Our focus is on the case of $d\gg N$, but the results obtained also apply to low-dimensional cases. We showcase that the bias of regularized estimation, unavoidable in high-dimensional covariate spaces, is mitigated with a simple double-robust score. In this way, no additional bias removal is necessary, and we obtain root-$N$ inference results while allowing multi-stage interdependency of the treatments and covariates. The memoryless property is also not assumed; treatment can possibly depend on all previous treatment assignments and all previous multi-stage confounders. Our results rely on certain sparsity assumptions about the underlying dependencies. We discover new product rate conditions necessary for robust inference with dynamic treatments.
Date:  2021–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2110.04924&r= 
By:  Kyle Butts 
Abstract:  This paper formalizes a common approach for estimating the effects of treatment at a specific location using geocoded microdata. This estimator compares units immediately next to treatment (an inner ring) to units just slightly further away (an outer ring). I introduce intuitive assumptions needed to identify the average treatment effect among the affected units and illustrate pitfalls that occur when these assumptions fail. Since one of these assumptions requires knowledge of exactly how far treatment effects are experienced, I propose a new method that relaxes this assumption and allows for nonparametric estimation using the partitioning-based least squares developed in Cattaneo et al. (2019). Since treatment effects typically decay/change over distance, this estimator improves analysis by estimating a treatment effect curve as a function of distance from treatment. This is in contrast to the traditional method which, at best, identifies the average effect of treatment. To illustrate the advantages of this method, I show that Linden and Rockoff (2008) underestimate the effects of increased crime risk on home values closest to the treatment and overestimate how far the effects extend by selecting a treatment ring that is too wide.
Date:  2021–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2110.10192&r= 
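The inner-ring/outer-ring comparison formalized above reduces to a difference in mean outcomes across two distance bands. A minimal sketch; ring widths and variable names are illustrative:

```python
import numpy as np

def ring_estimator(outcome, dist, inner, outer):
    """Compare units within `inner` of treatment (treated ring) to units
    between `inner` and `outer` (control ring).  Returns the difference
    in mean outcomes; valid only under the identifying assumptions the
    paper discusses (in particular, that effects vanish beyond `inner`)."""
    outcome = np.asarray(outcome, float)
    dist = np.asarray(dist, float)
    treated = dist <= inner
    control = (dist > inner) & (dist <= outer)
    return outcome[treated].mean() - outcome[control].mean()
```

The paper's point is that choosing `inner` too wide mixes affected and unaffected units into the treated ring, attenuating the estimate; the proposed nonparametric approach instead traces the effect as a function of distance.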
By:  Eiji Kurozumi; Anton Skrobotov 
Abstract:  In this study, we extend the three-regime bubble model of Pang et al. (2021) to allow for a fourth regime in which the process follows a unit root after recovery. We provide asymptotic and finite sample justification of the consistency of the collapse date estimator in the two-regime AR(1) model. The consistency allows us to split the sample before and after the date of collapse and to estimate the date of exuberance and the date of recovery separately. We also find that the limiting behavior of the recovery date estimator varies depending on the extent of the explosiveness and of the recovery.
Date:  2021–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2110.04500&r= 
By:  Caporin, Massimiliano; Costola, Michele
Abstract:  Analysing causality among oil prices and, more generally, among financial and economic variables is of central relevance in applied economics. The recent contribution of Lu et al. (2014) proposes a novel test for causality, the DCC-MGARCH Hong test. We show that the critical values of the test statistic must be evaluated through simulations, thereby challenging the evidence in papers adopting the DCC-MGARCH Hong test. We also note that rolling Hong tests represent a more viable solution in the presence of short-lived causality periods.
Keywords:  Granger Causality, Hong test, DCC-GARCH, Oil market, COVID-19
JEL:  C10 C13 C32 C58 Q43 Q47 
Date:  2021 
URL:  http://d.repec.org/n?u=RePEc:zbw:safewp:324&r= 
By:  Matias D. Cattaneo; Paul Cheung; Xinwei Ma; Yusufcan Masatlioglu 
Abstract:  We introduce an Attention Overload Model that captures the idea that alternatives compete for the decision maker's attention, and hence the attention frequency each alternative receives decreases as the choice problem becomes larger. Using this nonparametric restriction on the random attention formation, we show that a fruitful revealed preference theory can be developed, and provide testable implications on the observed choice behavior that can be used to partially identify the decision maker's preference. Furthermore, we provide novel partial identification results on the underlying attention frequency, thereby offering the first nonparametric identification result of (a feature of) the random attention formation mechanism in the literature. Building on our partial identification results, for both preferences and attention frequency, we develop econometric methods for estimation and inference. Importantly, our econometric procedures remain valid even in settings with a large number of alternatives and choice problems, an important feature of the economic environment we consider. We also provide a software package in R implementing our empirical methods, and illustrate them in a simulation study.
Date:  2021–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2110.10650&r= 
By:  Michael Keane (School of Economics); Timothy Neal (UNSW School of Economics) 
Abstract:  There is a long-standing controversy over the magnitude of the Frisch labor supply elasticity. Macroeconomists using DSGE models often calibrate it to be large, while many micro data studies find it is small. Several papers attempt to reconcile the micro and macro results. We offer a new and simple explanation: most micro studies estimate the Frisch elasticity using a 2SLS regression of hours changes on income changes, but the available instruments are typically "weak." In that case, we show it is an inherent property of 2SLS that estimates of the Frisch elasticity will (spuriously) appear more precise when they are more shifted in the direction of the OLS bias, which is negative. As a result, Frisch elasticities near zero will (spuriously) appear to be precisely estimated, while large estimates will appear imprecise. This pattern makes it difficult for a 2SLS t-test to detect a true positive Frisch elasticity. We show how the use of a weak-instrument robust hypothesis test, the Anderson-Rubin (AR) test, leads us to conclude the Frisch elasticity is large and significant in the NLSY97 data. In contrast, a conventional 2SLS t-test would lead us to conclude it is not significantly different from zero. Our application illustrates a fundamental problem with 2SLS t-tests that arises quite generally, even with strong instruments. Thus, we argue the AR test should be widely adopted in lieu of the t-test.
Keywords:  Frisch elasticity, labor supply, weak instruments, 2SLS, Anderson-Rubin test
Date:  2021–10 
URL:  http://d.repec.org/n?u=RePEc:swe:wpaper:202107b&r= 
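For the single-instrument case the paper advocates, the Anderson-Rubin test reduces to a reduced-form regression of y − β₀x on the instrument. A minimal sketch under homoskedastic errors (a simplification; an empirical application would typically use robust standard errors):

```python
import numpy as np

def anderson_rubin_stat(y, x, z, beta0):
    """Anderson-Rubin statistic for H0: beta = beta0 in a just-identified
    IV model with one instrument.  Regress e0 = y - beta0*x on z; under
    H0 the squared t-statistic is asymptotically chi-squared(1),
    regardless of instrument strength."""
    e0 = y - beta0 * x
    e0 = e0 - e0.mean()
    zc = z - z.mean()
    gamma = zc @ e0 / (zc @ zc)          # reduced-form slope
    resid = e0 - gamma * zc
    n = y.size
    se2 = (resid @ resid) / (n - 2) / (zc @ zc)
    return gamma ** 2 / se2              # squared t-statistic
```

Because the statistic's null distribution does not depend on the strength of the first stage, inverting it gives confidence sets with correct coverage even when the instrument is weak, which is the paper's central recommendation.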
By:  Méndez Civieta, Álvaro; Aguilera Morillo, María del Carmen; Lillo Rodríguez, Rosa Elvira 
Abstract:  Partial least squares (PLS) is a dimensionality reduction technique used as an alternative to ordinary least squares (OLS) in situations where the data are collinear or high dimensional. Both PLS and OLS provide mean-based estimates, which are extremely sensitive to the presence of outliers or heavy-tailed distributions. In contrast, quantile regression is an alternative to OLS that computes robust quantile-based estimates. In this work, multivariate PLS is extended to the quantile regression framework, obtaining a theoretical formulation of the problem and a robust dimensionality reduction technique that we call fast partial quantile regression (fPQR), which provides quantile-based estimates. An efficient implementation of fPQR is also derived, and its performance is studied through simulation experiments and the well-known biscuit dough dataset from chemometrics, a real high-dimensional example.
Keywords:  Partial Least Squares; Quantile Regression; Dimension Reduction; Outliers; Robust
Date:  2021–10–18 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:33469&r= 
By:  Parley Ruogu Yang; Ryan Lucas; Camilla Schelpe 
Abstract:  We formally introduce a time series statistical learning method, called Adaptive Learning, capable of handling model selection, out-of-sample forecasting and interpretation in a noisy environment. Through simulation studies we demonstrate that the method can outperform traditional model selection techniques such as AIC and BIC in the presence of regime-switching, as well as facilitating window size determination when the Data Generating Process is time-varying. Empirically, we use the method to forecast S&P 500 returns across multiple forecast horizons, employing information from the VIX Curve and the Yield Curve. We find that Adaptive Learning models are generally on par with, if not better than, the best of the parametric models a posteriori, evaluated in terms of MSE, while also outperforming under cross-validation. We present a financial application of the learning results and an interpretation of the learning regime during the 2020 market crash. These studies can be extended in both a statistical direction and in terms of financial applications.
Date:  2021–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2110.11156&r= 
By:  Härdle, Wolfgang; Klochkov, Yegor; Petukhina, Alla; Zhivotovskiy, Nikita 
Abstract:  Markowitz mean-variance portfolios with sample mean and covariance as input parameters feature numerous issues in practice. They perform poorly out of sample due to estimation error, and they exhibit extreme weights together with high sensitivity to changes in the input parameters. The heavy-tail characteristics of financial time series are in fact the cause of these erratic fluctuations of weights, which consequently create substantial transaction costs. In robustifying the weights, we present a toolbox for stabilizing costs and weights for global minimum variance Markowitz portfolios. Utilizing a projected gradient descent (PGD) technique, we avoid the estimation and inversion of the covariance operator as a whole and concentrate on robust estimation of the gradient descent increment. Using modern tools of robust statistics, we construct a computationally efficient estimator with almost Gaussian properties based on median-of-means, uniformly over weights. This robustified Markowitz approach is confirmed by empirical studies on equity markets. We demonstrate that robustified portfolios reach higher risk-adjusted performance and the lowest turnover compared to shrinkage-based and constrained portfolios.
Date:  2021 
URL:  http://d.repec.org/n?u=RePEc:zbw:irtgdp:2021018&r= 
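The median-of-means device at the core of the robustification above is simple to state. A sketch for the scalar mean case; the paper applies the same idea to gradient increments, uniformly over portfolio weights:

```python
import numpy as np

def median_of_means(x, n_blocks):
    """Median-of-means estimator: split the sample into blocks, average
    within blocks, then take the median of the block means.  A single
    heavy-tailed outlier can only corrupt the one block it falls in,
    which the median then discards."""
    x = np.asarray(x, float)
    blocks = np.array_split(x, n_blocks)
    return np.median([b.mean() for b in blocks])
```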
By:  Mingyang Li; Linlin Niu 
Abstract:  Motivated by two scenarios in which government spending in China reacted to output shocks within a quarter, this letter points out a downward bias in the estimation of the Chinese government spending multiplier when using the classical lag restriction for shock identification in a quarterly SVAR framework à la Blanchard and Perotti (2002). By relaxing the lag-length restriction from one quarter to one month, we propose a mixed-frequency identification (MFI) strategy that takes the unexpected spending change in the first month of each quarter as an instrument. The estimation results show that the Chinese government reacts significantly and countercyclically to output shocks within a quarter, with the resulting government spending multiplier being 0.546 on impact and 1.849 at the maximum. A comparison study confirms that results based on the identification strategy of Blanchard and Perotti (2002) suffer a severe downward bias in such a case.
Keywords:  government spending multiplier; inside lag; mixed-frequency identification; SVAR model
JEL:  C32 C36 E23 E62 
Date:  2021–10–19 
URL:  http://d.repec.org/n?u=RePEc:wyi:wpaper:002594&r= 