
on Econometrics 
By:  Tanaka, Katsuto (Gakushuin University); Xiao, Weilin (Zhejiang University); Yu, Jun (School of Economics, Singapore Management University) 
Abstract:  Based on the least squares estimator, this paper proposes a novel method to test the sign of the persistence parameter in a panel fractional Ornstein-Uhlenbeck process with a known Hurst parameter H. Depending on whether H ∈ (1/2, 1), H = 1/2, or H ∈ (0, 1/2), three test statistics are considered. Under the null hypothesis, the persistence parameter is zero. Based on a panel of continuous records of observations, the null asymptotic distributions are obtained when T is fixed and N is assumed to go to infinity, where T is the time span of the sample and N is the number of cross sections. The power function of the tests is obtained under the local alternative where the persistence parameter is close to zero in the order of 1/(T√N). The local power of the proposed test statistics is computed and compared with that of the maximum-likelihood-based test. The hypothesis testing problem and the local power function are also considered when a panel of discretely sampled observations is available under a sequential limit. 
Keywords:  Panel fractional Ornstein-Uhlenbeck process; Least squares; Asymptotic distribution; Local alternative; Local power 
JEL:  C22 C23 
Date:  2020–02–25 
URL:  http://d.repec.org/n?u=RePEc:ris:smuesw:2020_006&r=all 
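A minimal sketch of the least-squares idea behind the test above, restricted to the H = 1/2 (standard Brownian motion) special case with a single cross-section and an Euler discretization; the parameter values and tolerances are illustrative, not the paper's continuous-record panel statistic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate dX_t = -kappa * X_t dt + dW_t on [0, T] (standard OU, H = 1/2)
kappa, T, dt = 0.5, 1000.0, 0.01
n = int(T / dt)
X = np.empty(n + 1)
X[0] = 0.0
dW = rng.normal(0.0, np.sqrt(dt), n)
for t in range(n):
    X[t + 1] = X[t] - kappa * X[t] * dt + dW[t]

# Least-squares estimator of the persistence parameter from the
# discretized regression dX_t = -kappa * X_t dt + noise
dX = np.diff(X)
kappa_hat = -np.sum(X[:-1] * dX) / (dt * np.sum(X[:-1] ** 2))
```

Under the null (kappa = 0) the process is a Brownian motion and the estimator's null distribution drives the tests described in the abstract.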
By:  Dimitris Korobilis (Department of Economics, University of Glasgow, UK; Rimini Centre for Economic Analysis) 
Abstract:  This paper proposes a new Bayesian sampling scheme for VAR inference using sign restrictions. We build on a factor model decomposition of the reduced-form VAR disturbances, which are assumed to be driven by a few fundamental factors/shocks. The outcome is a computationally efficient algorithm that makes it possible to jointly sample VAR parameters as well as decompositions of the covariance matrix satisfying the desired sign restrictions. Using artificial and real data we show that the new algorithm works well and is multiple times more efficient than existing accept/reject algorithms for sign restrictions. 
Keywords:  high-dimensional inference, Structural VAR, Markov chain Monte Carlo, set identification 
JEL:  C11 C13 C15 C22 C52 C53 C61 
Date:  2020–03 
URL:  http://d.repec.org/n?u=RePEc:rim:rimwps:2009&r=all 
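For context, a sketch of the standard accept/reject algorithm for sign restrictions that the paper benchmarks against: rotate the Cholesky factor of the reduced-form covariance by a random orthogonal matrix and keep the draw only if the impact responses satisfy the restrictions. The covariance values and the restriction (first shock moves both variables up) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Reduced-form covariance of a 2-variable VAR (hypothetical values)
Sigma = np.array([[1.0, 0.6],
                  [0.6, 2.0]])
P = np.linalg.cholesky(Sigma)

def draw_sign_restricted(rng, n_draw=1000):
    """Accept/reject: rotate the Cholesky factor by a random orthogonal Q
    until the first shock raises both variables on impact."""
    for _ in range(n_draw):
        W = rng.normal(size=(2, 2))
        Q, R = np.linalg.qr(W)
        Q = Q @ np.diag(np.sign(np.diag(R)))  # normalization for a uniform draw
        A = P @ Q                             # candidate impact matrix
        if (A[:, 0] > 0).all():               # sign restriction on impact
            return A
    raise RuntimeError("no accepted draw")

A = draw_sign_restricted(rng)
```

Any accepted A still reproduces the reduced-form covariance exactly (A A' = Sigma); the inefficiency the paper targets is the rejection step itself.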
By:  Doko Tchatoka, Firmin; Wang, Wenjie 
Abstract:  Pretesting for exogeneity has become routine in many empirical applications involving instrumental variables (IVs) to decide whether the ordinary least squares (OLS) or the two-stage least squares (2SLS) method is appropriate. Guggenberger (2010) shows that the second-stage t-test, based on the outcome of a Durbin-Wu-Hausman-type pretest for exogeneity in the first stage, has extreme size distortion, with asymptotic size equal to 1 when the standard asymptotic critical values are used. In this paper, we first show that the standard residual bootstrap procedures (with either independent or dependent draws of disturbances) are not viable solutions to this extreme size-distortion problem. Then, we propose a novel hybrid bootstrap approach, which combines the residual-based bootstrap with an adjusted Bonferroni size-correction method. We establish uniform validity of this hybrid bootstrap in the sense that it yields a two-stage test with correct asymptotic size. Monte Carlo simulations confirm our theoretical findings. In particular, our proposed hybrid method achieves remarkable power gains over the 2SLS-based t-test, especially when the IVs are not very strong. 
Keywords:  DWH Pretest; Instrumental Variable; Asymptotic Size; Bootstrap; Bonferroni-based Size-correction; Uniform Inference 
JEL:  C12 C13 C26 
Date:  2020–03–24 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:99243&r=all 
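A sketch of the pretest at issue, in its control-function form: regress the endogenous regressor on the instrument, add the first-stage residual to the outcome equation, and test its coefficient. A significant coefficient signals endogeneity (pointing to 2SLS over OLS). All simulated parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

# Simulate y = x + u with x endogenous (u correlated with the first-stage
# error e) and a strong instrument z
z = rng.normal(size=n)
e = rng.normal(size=n)
u = 0.8 * e + 0.6 * rng.normal(size=n)
x = z + e                       # first stage
y = x + u                       # structural equation, beta = 1

# Control-function form of the Durbin-Wu-Hausman pretest:
# include the first-stage residual v and t-test its coefficient
pi = (z @ x) / (z @ z)
v = x - pi * z
X = np.column_stack([np.ones(n), x, v])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
s2 = resid @ resid / (n - 3)
cov = s2 * np.linalg.inv(X.T @ X)
t_v = coef[2] / np.sqrt(cov[2, 2])   # DWH-type t-statistic on v
```

With strong endogeneity the pretest rejects overwhelmingly, and the coefficient on x in this augmented regression coincides with the 2SLS estimate; the paper's concern is the distorted inference that follows the pretest, not the pretest mechanics themselves.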
By:  Kamil Makieła; Błażej Mazur 
Abstract:  Our main contribution lies in the formulation of a generalized, parametric model for stochastic frontier analysis (SFA) that nests virtually all forms used so far in the field and includes certain special cases that have not yet been considered. We use the general model framework for the purpose of formal testing and comparison of alternative specifications, which provides a way to deal with model uncertainty, the main unresolved issue in SFA. SFA dates back to Aigner et al. (1977) and Meeusen and van den Broeck (1977), and relies upon the idea of a compound error specification with at least two error terms, one representing the observation error and the other interpreted as some form of inefficiency. The models considered here are based on the generalized t distribution for the observation error and the generalized beta distribution of the second kind for the inefficiency-related term. Hence, it is possible to relax a number of potentially restrictive assumptions embedded in models used so far. We also develop methods of Bayesian inference that are less restrictive (though more demanding in terms of computation time) than the ones used so far and demonstrate inference on the latent variables (i.e., object-specific inefficiency terms, which are important for, e.g., policy-oriented analyses). 
Date:  2020–03 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2003.07150&r=all 
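The compound-error idea is easy to see in the classic normal/exponential special case (one of the simple forms the generalized t/GB2 model nests): a symmetric observation error minus a one-sided inefficiency term yields a left-skewed composed error. Scale parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Classic compound-error special case (Aigner et al. 1977 style):
# symmetric noise v minus a one-sided inefficiency term u >= 0.
# The generalized model replaces these with generalized-t and GB2 laws.
v = rng.normal(0.0, 0.3, n)      # observation error
u = rng.exponential(0.2, n)      # inefficiency, one-sided
eps = v - u                      # composed error of a production frontier

# The one-sided term shifts the mean down and skews the error left
m, s = eps.mean(), eps.std()
sk = np.mean((eps - m) ** 3) / s ** 3
```

The negative skew of eps is what identifies inefficiency separately from noise in SFA.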
By:  Sven Otto 
Abstract:  A unit root test is proposed for time series with a general nonlinear deterministic trend component. It is shown that asymptotically the pooled OLS estimator of overlapping blocks filters out any trend component that satisfies some Lipschitz condition. Under both fixed-$b$ and small-$b$ block asymptotics, the limiting distribution of the t-statistic for the unit root hypothesis is derived. Nuisance parameter corrections provide heteroskedasticity-robust tests, and serial correlation is accounted for by prewhitening. A Monte Carlo study that considers slowly varying trends yields both good size and improved power results for the proposed tests when compared to conventional unit root tests. 
Date:  2020–03 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2003.04066&r=all 
By:  Adrian Pagan; Tim Robinson 
Abstract:  We show that when a model has more shocks than observed variables, the estimated filtered and smoothed shocks will be correlated, despite no correlation being present in the data generating process. Additionally, the estimated shock innovations may be autocorrelated. These correlations limit the relevance of impulse responses, which assume uncorrelated shocks, for interpreting the data. Excess shocks occur frequently, e.g. in Unobserved-Component (UC) models, in filters such as Hodrick-Prescott (1997), and in some Dynamic Stochastic General Equilibrium (DSGE) models. Using several UC models and an estimated DSGE model (Ireland, 2011), we demonstrate that sizable correlations among the estimated shocks can result. 
Keywords:  Partial Information, Structural Shocks, Kalman Filter, Measurement Error, DSGE 
JEL:  E37 C51 C52 
Date:  2020–03 
URL:  http://d.repec.org/n?u=RePEc:een:camaaa:202028&r=all 
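The mechanism can be illustrated without any simulation noise: one observable (trend plus noise, two independent shocks), a linear smoother standing in for the Kalman smoother of a local-level model (an HP-style smoother is used here for brevity), and the *population* correlation between the implied shock estimates computed by linear algebra. The penalty value and sample size are illustrative.

```python
import numpy as np

# y = tau + eps: tau a random walk (unit shock variance), eps white noise.
# Trend extracted as tau_hat = S y with an HP-style smoother; the implied
# shock estimates eps_hat = y - tau_hat and eta_hat = diff(tau_hat) are both
# linear in y, so their exact covariance follows from Cov(y).
n, lam = 200, 1600.0

D2 = np.diff(np.eye(n), 2, axis=0)                 # second-difference matrix
S = np.linalg.inv(np.eye(n) + lam * D2.T @ D2)     # smoother matrix

idx = np.arange(1, n + 1)
Sig_y = np.minimum.outer(idx, idx) + np.eye(n)     # Cov(y) = Cov(tau) + I

A = np.eye(n) - S          # eps_hat = A y
B = np.diff(S, axis=0)     # eta_hat = B y

cov = A @ Sig_y @ B.T
sd_e = np.sqrt(np.diag(A @ Sig_y @ A.T))
sd_h = np.sqrt(np.diag(B @ Sig_y @ B.T))
corr_t = np.diag(cov[1:, :]) / (sd_e[1:] * sd_h)   # contemporaneous corr
```

The true shocks are independent by construction, yet corr_t is not zero: exactly the excess-shock correlation the paper warns about.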
By:  Christian Bongiorno; Damien Challet 
Abstract:  Statistical inference of the dependence between objects often relies on covariance matrices. Unless the number of features (e.g. data points) is much larger than the number of objects, covariance matrix cleaning is necessary to reduce estimation noise. We propose a method that is robust yet flexible enough to account for fine details of the structure of the covariance matrix. Robustness comes from using a hierarchical ansatz and dependence averaging between clusters; flexibility comes from a bootstrap procedure. This method finds several possible hierarchical structures in DNA microarray gene expression data, and leads to lower realized risk in global minimum variance portfolios than current filtering methods when the number of data points is relatively small. 
Date:  2020–03 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2003.05807&r=all 
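A sketch of the deterministic core of hierarchical correlation filtering: replace each pairwise correlation by the level at which the two objects first merge in the dendrogram (the cophenetic level). The bootstrap-averaging layer that gives the method its flexibility is omitted here, and the linkage choice is illustrative, so this is only the spirit of the approach, not its exact specification.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import squareform

rng = np.random.default_rng(4)

# 25 objects observed over 60 features; sample correlation matrix
X = rng.normal(size=(60, 25))
C = np.corrcoef(X, rowvar=False)

# Correlation -> distance, cluster, then read off cophenetic distances:
# every pair (i, j) gets the correlation of the cluster merge that joins them
d = np.sqrt(2.0 * (1.0 - C))
np.fill_diagonal(d, 0.0)
Z = linkage(squareform(d, checks=False), method="average")
coph = squareform(cophenet(Z))
C_filtered = 1.0 - coph ** 2 / 2.0     # back to correlation scale
np.fill_diagonal(C_filtered, 1.0)
```

Averaging such filtered matrices over bootstrap resamples of the rows of X is the probabilistic step that distinguishes this line of work from a single hierarchical filter.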
By:  Thomas Stringham 
Abstract:  Applied researchers are often interested in linking individuals between two datasets that lack unique identifiers. Accuracy and computational feasibility are a challenge, particularly when linking large datasets. We develop a Bayesian method for automated probabilistic record linkage and show it recovers 40% more true matches, holding accuracy constant, than comparable methods in a matching of Union Army recruitment data to the 1900 US Census for which expert-labelled true matches are known. Our approach, which builds on a recent state-of-the-art Bayesian method, refines the modelling of comparison data, allowing disagreement probability parameters conditional on non-match status to be record-specific. To make this refinement computationally feasible, we implement a Gibbs sampler that achieves significant improvement in speed over comparable recent implementations. We also generalize the notion of comparison data to allow for treatment of very common first names that spuriously produce exact matches in record pairs and show how to estimate true positive rate and positive predictive value when ground truth is unavailable. 
Date:  2020–03 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2003.04238&r=all 
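The starting point that Bayesian record-linkage models refine is the classical Fellegi-Sunter match score over field-comparison vectors: sum log(m/u) over agreeing fields and log((1-m)/(1-u)) over disagreeing ones, where m and u are agreement probabilities among matches and non-matches. The fields and probabilities below are hypothetical, and this is the classical baseline, not the paper's record-specific model.

```python
import math

fields = ["first_name", "last_name", "birth_year"]
m = {"first_name": 0.95, "last_name": 0.97, "birth_year": 0.90}  # P(agree | match)
u = {"first_name": 0.10, "last_name": 0.02, "birth_year": 0.05}  # P(agree | non-match)

def match_score(agree):
    """Fellegi-Sunter log-likelihood-ratio score of a comparison vector."""
    score = 0.0
    for f in fields:
        if agree[f]:
            score += math.log(m[f] / u[f])
        else:
            score += math.log((1 - m[f]) / (1 - u[f]))
    return score

good = match_score({"first_name": True, "last_name": True, "birth_year": True})
poor = match_score({"first_name": True, "last_name": False, "birth_year": False})
```

Making the u-type probabilities record-specific (so that agreeing on a very common first name earns little credit) is exactly the refinement the abstract describes.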
By:  Alexander J. McNeil 
Abstract:  An approach to the modelling of financial return series using a class of uniformity-preserving transforms for uniform random variables is proposed. V-transforms describe the relationship between quantiles of the return distribution and quantiles of the distribution of a predictable volatility proxy variable constructed as a function of the return. V-transforms can be represented as copulas and permit the construction and estimation of models that combine arbitrary marginal distributions with linear or nonlinear time series models for the dynamics of the volatility proxy. The idea is illustrated using a transformed Gaussian ARMA process for volatility, yielding the class of VT-ARMA copula models. These can replicate many of the stylized facts of financial return series and facilitate the calculation of marginal and conditional characteristics of the model including quantile measures of risk. Estimation of models is carried out by adapting the exact maximum likelihood approach to the estimation of ARMA processes. 
Date:  2020–02 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2002.10135&r=all 
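The simplest symmetric V-transform is V(u) = |1 - 2u|: if U is uniform on (0,1), so is V(U). This uniformity-preserving property is what lets the transform link return quantiles to volatility-proxy quantiles. A quick empirical check of that property (sample size illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

# V(u) = |1 - 2u| folds the unit interval around 1/2: large |1 - 2u| means
# the return sits far in either tail, i.e. high volatility-proxy quantile
u = rng.uniform(size=100_000)
v = np.abs(1.0 - 2.0 * u)

# Uniformity check via the mean and the empirical CDF at 0.25
mean_v = v.mean()
cdf_quarter = (v < 0.25).mean()
```

More general V-transforms allow asymmetry between the two tails; this symmetric case is just the baseline.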
By:  Lutz Kilian; Xiaoqing Zhou 
Abstract:  Oil market VAR models have become the standard tool for understanding the evolution of the real price of oil and its impact on the macroeconomy. As this literature has expanded at a rapid pace, it has become increasingly difficult for mainstream economists to understand the differences between alternative oil market models, let alone the basis for the sometimes divergent conclusions reached in the literature. The purpose of this survey is to provide a guide to this literature. Our focus is on the econometric foundations of the analysis of oil market models with special attention to the identifying assumptions and methods of inference. We not only explain how the workhorse models in this literature have evolved, but also examine alternative oil market VAR models. We help the reader understand why the latter models sometimes generated unconventional, puzzling or erroneous conclusions. Finally, we discuss the construction of extraneous measures of oil demand and oil supply shocks that have been used as external or internal instruments for VAR models. 
Keywords:  Oil supply elasticity; oil demand elasticity; IV estimation; structural VAR 
JEL:  Q43 Q41 C36 C52 
Date:  2020–03–06 
URL:  http://d.repec.org/n?u=RePEc:fip:feddwp:87676&r=all 
By:  Christian Bongiorno (MICS - Mathématiques et Informatique pour la Complexité et les Systèmes, CentraleSupélec); Damien Challet (MICS - Mathématiques et Informatique pour la Complexité et les Systèmes, CentraleSupélec) 
Abstract:  Cleaning covariance matrices is a highly non-trivial problem, yet one of central importance in the statistical inference of dependence between objects. We propose here a probabilistic hierarchical clustering method, named Bootstrapped Average Hierarchical Clustering (BAHC), that is particularly effective in the high-dimensional case, i.e., when there are more objects than features. When applied to DNA microarray data, our method yields distinct hierarchical structures that cannot be accounted for by usual hierarchical clustering. We then use global minimum-variance risk management to test our method and find that BAHC leads to significantly smaller realized risk compared to state-of-the-art linear and nonlinear filtering methods in the high-dimensional case. Spectral decomposition shows that BAHC better captures the persistence of the dependence structure between asset price returns in the calibration and test periods. 
Date:  2020–03–12 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:hal02506848&r=all 
By:  Stanislav Anatolyev; Mikkel Sølvsten 
Abstract:  We propose a hypothesis test that allows for many tested restrictions in a heteroskedastic linear regression model. The test compares the conventional F-statistic to a critical value that corrects for many restrictions and conditional heteroskedasticity. The correction utilizes leave-one-out estimation to recenter the conventional critical value and leave-three-out estimation to rescale it. Large sample properties of the test are established in an asymptotic framework where the number of tested restrictions may grow in proportion to the number of observations. We show that the test is asymptotically valid and has nontrivial asymptotic power against the same local alternatives as the exact F test when the latter is valid. Simulations corroborate the relevance of these theoretical findings and suggest excellent size control in moderately small samples also under strong heteroskedasticity. 
Date:  2020–03 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2003.07320&r=all 
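What makes leave-one-out (and, with more work, leave-*k*-out) corrections computationally cheap is a closed form: the residual from refitting without observation i equals e_i / (1 - h_ii), where h_ii is the leverage. A small verification of that identity (the design below is simulated for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)

n, k = 50, 3
X = rng.normal(size=(n, k))
y = X @ np.array([1.0, -0.5, 2.0]) + rng.normal(size=n)

# Full-sample fit, residuals, and leverages h_ii
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta
H = X @ np.linalg.inv(X.T @ X) @ X.T
e_loo = e / (1.0 - np.diag(H))      # all leave-one-out residuals at once

# Brute-force check for observation 0: refit without it
mask = np.arange(n) != 0
b0, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
e0 = y[0] - X[0] @ b0
```

The identity is exact, so the whole vector of out-of-sample residuals costs one fit rather than n.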
By:  Lucchetti, Riccardo; Venetis, Ioannis A. 
Abstract:  The authors replicate and extend the Monte Carlo experiment presented in Doz et al. (2012) on alternative (time-domain-based) methods for extracting dynamic factors from large datasets; they employ open source software and consider a larger number of replications and a wider set of scenarios. Their narrow-sense replication exercise fully confirms the results in the original article. As for their extended replication experiment, the authors examine the relative performance of competing estimators under a wider array of cases, including richer dynamics, and find that maximum likelihood (ML) is often the dominant method; moreover, the persistence characteristics of the observable series play a crucial role, and correct specification of the underlying dynamics is of paramount importance. 
Keywords:  dynamic factor models, EM algorithm, Kalman filter, principal components 
JEL:  C15 C32 C55 C87 
Date:  2020 
URL:  http://d.repec.org/n?u=RePEc:zbw:ifwedp:20205&r=all 
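One of the competing estimators in the replicated experiment is principal components; a minimal single-factor sketch (with illustrative dimensions and loadings, ML via EM/Kalman being the alternative it is compared against):

```python
import numpy as np

rng = np.random.default_rng(7)

# N = 30 series loading on one common factor over T = 300 periods
T, N = 300, 30
f = rng.normal(size=T)
Lam = rng.uniform(0.5, 1.5, N)
X = np.outer(f, Lam) + 0.5 * rng.normal(size=(T, N))

# Principal-components factor estimate: first eigenvector of the sample
# covariance, applied to the demeaned panel
Xc = X - X.mean(axis=0)
w, V = np.linalg.eigh(Xc.T @ Xc / T)   # eigenvalues in ascending order
f_hat = Xc @ V[:, -1]                  # first principal component

corr = np.corrcoef(f, f_hat)[0, 1]     # sign of f_hat is not identified
```

With many series and modest noise the factor space is recovered almost exactly, which is why persistence and dynamic misspecification, rather than cross-sectional size, drive the rankings the authors report.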
By:  Patrick Chang; Etienne Pienaar; Tim Gebbie 
Abstract:  We implement and test kernel-averaging Non-Uniform Fast Fourier Transform (NUFFT) methods to enhance the performance of correlation and covariance estimation on asynchronously sampled event data using the Malliavin-Mancino Fourier estimator. The methods are benchmarked for Dirichlet and Fejér Fourier basis kernels. We consider test cases formed from Geometric Brownian motions to replicate synchronous and asynchronous data for benchmarking purposes. We consider three standard averaging kernels to convolve the event data for synchronisation via oversampling for use with the Fast Fourier Transform (FFT): the Gaussian kernel, the Kaiser-Bessel kernel, and the exponential of semicircle kernel. First, this allows us to demonstrate the performance of the estimator with different combinations of basis kernels and averaging kernels. Second, we investigate and compare the impact of the averaging scale explicit in each averaging kernel and its relationship to the time-scale averaging implicit in the Malliavin-Mancino estimator. Third, we demonstrate the relationship between the time-scale averaging, determined by the number of Fourier coefficients used in the estimator, and a theoretical model of the Epps effect. We briefly demonstrate the methods on Trade-and-Quote (TAQ) data from the Johannesburg Stock Exchange to give an initial visualisation of the correlation dynamics at various time scales under market microstructure. 
Date:  2020–03 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2003.02842&r=all 
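A bare-bones illustration of the Epps effect the paper connects to time-scale averaging: two correlated latent prices observed at asynchronous arrival times and previous-tick sampled. The measured correlation collapses when the sampling interval shrinks below the typical inter-arrival time. All rates and sizes are illustrative, and this is a didactic stand-in for the Fourier estimator, not an implementation of it.

```python
import numpy as np

rng = np.random.default_rng(8)

n, rho = 100_000, 0.8
dW = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n) * 0.01
P = dW.cumsum(axis=0)                 # two latent log-price paths

def previous_tick(path, arrivals):
    """Hold the last observed value between asynchronous arrivals."""
    last = np.maximum.accumulate(np.where(arrivals, np.arange(n), 0))
    return path[last]

# Each asset trades with probability 0.05 per grid step
obs = [previous_tick(P[:, i], rng.random(n) < 0.05) for i in range(2)]

def corr_at(dt):
    """Sample correlation of returns at sampling interval dt grid steps."""
    r1 = np.diff(obs[0][::dt])
    r2 = np.diff(obs[1][::dt])
    return np.corrcoef(r1, r2)[0, 1]

fine, coarse = corr_at(2), corr_at(200)
```

Cutting the number of Fourier coefficients in the Malliavin-Mancino estimator plays the same role as lengthening the sampling interval here: it trades resolution for robustness to this asynchrony-induced bias.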
By:  W. Robert Reed (University of Canterbury) 
Abstract:  Meta-analyses in economics, business, and the social sciences commonly use partial correlation coefficients (PCCs) when the original estimated effects cannot be combined. This can occur, for example, when the primary studies use different measures for the dependent and independent variables, even though they are all concerned with estimating the same conceptual effect. This note demonstrates that analyses based on PCCs can produce different results than those based on the original, estimated effects. This can affect conclusions about the overall mean effect, the factors responsible for differences in estimated effects across studies, and the existence of publication selection bias. I first derive the theoretical relationship between Fixed Effects/Weighted Least Squares estimates of the overall mean effect when using the original estimated effects and their PCC transformations. I then provide two empirical examples from recently published studies. The first empirical analysis is an example where the use of PCCs does not change the main conclusions. The second analysis is an example where the conclusions are substantially impacted. I explain why the use of PCCs had different effects in the two examples. 
Keywords:  Meta-analysis, Publication bias, FAT-PET, Meta-regression analysis, Partial correlation coefficients 
JEL:  B41 C15 C18 
Date:  2020–03–01 
URL:  http://d.repec.org/n?u=RePEc:cbt:econwp:20/08&r=all 
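The PCC transformation at issue is the standard one: r = t / sqrt(t^2 + df) from a reported regression t-statistic, with variance (1 - r^2)^2 / df, combined by inverse-variance (fixed-effects) weighting. A sketch with hypothetical study inputs:

```python
import math

def pcc(t, df):
    """Partial correlation coefficient and its variance from a t-statistic."""
    r = t / math.sqrt(t * t + df)
    var = (1 - r * r) ** 2 / df
    return r, var

r, var = pcc(2.0, 100)          # e.g. t = 2 with 100 residual df

# Fixed-effects (inverse-variance weighted) mean over several studies
studies = [(2.0, 100), (3.5, 250), (-1.0, 60)]   # hypothetical (t, df) pairs
num = den = 0.0
for t, df in studies:
    ri, vi = pcc(t, df)
    num += ri / vi
    den += 1.0 / vi
fe_mean = num / den
```

Because the weights 1/var depend on the PCCs themselves, the weighted mean of PCCs need not rank or scale effects the same way as a weighted mean of the original estimates, which is the source of the divergences the note documents.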
By:  Liu, Jinjing 
Abstract:  The co-dependence between assets tends to increase when the market declines. This paper develops a correlation measure focusing on market declines using the expected shortfall (ES), referred to as the ES-implied correlation, to improve on the existing value-at-risk (VaR)-implied correlation. Simulations which define period-by-period true correlations show that the ES-implied correlation is much closer to the true correlations than is the VaR-implied correlation with respect to average bias and root-mean-square error. More importantly, this paper develops a series of test statistics to measure and test correlation asymmetries, as well as to evaluate the impact of weights on the VaR-implied correlation and the ES-implied correlation. The test statistics indicate that the linear correlation significantly underestimates correlations between the US and the other G7 countries during market downturns, and that the choice of weights does not have a significant impact on the VaR-implied correlation or the ES-implied correlation. 
Keywords:  Capital Markets and Capital Flows, Securities Markets Policy & Regulation, Capital Flows 
Date:  2019–01–17 
URL:  http://d.repec.org/n?u=RePEc:wbk:wbrwps:8709&r=all 
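The implied-correlation idea is easiest to see under normality with zero means, where ES scales linearly with sigma, so ES_p^2 = w1^2 ES1^2 + 2 rho w1 w2 ES1 ES2 + w2^2 ES2^2 and rho can be backed out from portfolio and stand-alone expected shortfalls. A simulation check with a known rho (an illustrative construction, not the paper's exact estimator):

```python
import numpy as np

rng = np.random.default_rng(9)

rho, w1, w2, alpha = 0.5, 0.5, 0.5, 0.05
R = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=500_000)
rp = w1 * R[:, 0] + w2 * R[:, 1]      # portfolio return

def es(x, alpha):
    """Expected shortfall: average loss beyond the alpha-quantile."""
    q = np.quantile(x, alpha)
    return -x[x <= q].mean()

e1, e2, ep = es(R[:, 0], alpha), es(R[:, 1], alpha), es(rp, alpha)

# Invert the quadratic relation to recover the implied correlation
rho_implied = (ep ** 2 - w1 ** 2 * e1 ** 2 - w2 ** 2 * e2 ** 2) \
    / (2 * w1 * w2 * e1 * e2)
```

Replacing ES with VaR in the same construction gives the VaR-implied correlation; since ES averages the whole tail rather than reading off a single quantile, its sample counterpart is less noisy, consistent with the bias and RMSE results the abstract reports.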
By:  Taurai Muvunza 
Abstract:  We investigate the behaviour of cryptocurrencies' return data. Using return data for bitcoin, ethereum and ripple, which together account for over 70% of the cryptocurrency market, we demonstrate that the $\alpha$-stable distribution models highly speculative cryptocurrencies more robustly than other heavy-tailed distributions used in financial econometrics. We find that the maximum likelihood method proposed by DuMouchel (1971) produces estimates that fit the cryptocurrency return data much better than the quantile-based approach of McCulloch (1986) and the sample characteristic method of Koutrouvelis (1980). The empirical results show that the leptokurtic feature present in cryptocurrencies' return data can be captured by an $\alpha$-stable distribution. The paper also covers the predominant literature on cryptocurrencies and stable distributions. 
Date:  2020–02 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2002.09881&r=all 
By:  Roth, Markus 
Abstract:  A structural Bayesian vector autoregression model predicts that, when accompanied by a decline in consumer confidence, a one-percent decrease in house prices is associated with a contraction of economic activity by 0.2 to 1.2 percent after one year. Results point to important second-round effects, and additional exercises highlight the amplifying role of (i) the mortgage rate and (ii) consumers' expectations. A novel econometric approach exploits information available from the cross section. Shrinkage towards a cross-country average model helps to compensate for small country samples and reduces estimation uncertainty. As a by-product, the method delivers measures of cross-country heterogeneity. 
Keywords:  Bayesian model averaging, dummy observations, house price shocks 
JEL:  C11 C33 E44 
Date:  2020 
URL:  http://d.repec.org/n?u=RePEc:zbw:bubdps:062020&r=all 
By:  Magnolfi, Lorenzo (University of Wisconsin-Madison); Roncoroni, Camilla (University of Warwick) 
Abstract:  We propose a method to estimate static discrete games with weak assumptions on the information available to players. We do not fully specify the information structure of the game, but allow instead for all information structures consistent with players knowing their own payoffs and the distribution of opponents' payoffs. To make this approach tractable we adopt a weaker solution concept: Bayes Correlated Equilibrium (BCE), developed by Bergemann and Morris (2016). We characterize the sharp identified set under the assumption of BCE and no assumptions on equilibrium selection, and find that in simple games with modest variation in observable covariates identified sets are narrow enough to be informative. In an application, we estimate a model of entry in the Italian supermarket industry and quantify the effect of large malls on local grocery stores. Parameter estimates and counterfactual predictions differ from those obtained under the restrictive assumption of complete information. 
Keywords:  Estimation of games; informational robustness; Bayes Correlated Equilibrium; entry models; partial identification; supermarket industry 
JEL:  C57 L10 
Date:  2020 
URL:  http://d.repec.org/n?u=RePEc:wrk:warwec:1247&r=all 