
on Econometrics 
By:  Rachida Ouysse (School of Economics, UNSW Business School, UNSW) 
Abstract:  Principal components (PC) are fundamentally feasible for the estimation of large factor models because consistency can be achieved for any path of the panel dimensions. The PC method is, however, inefficient under cross-sectional dependence of unknown structure. The approximate factor model of Chamberlain and Rothschild [1983] imposes a bound on the amount of dependence in the error term. This article proposes a constrained principal components (CnPC) estimator that incorporates this restriction as external information in the PC analysis of the data. The estimator is computationally tractable: it does not require estimating large covariance matrices, and it is obtained as the PC of a regularized form of the data covariance matrix. The paper develops a convergence rate for the factor estimates and establishes asymptotic normality. In a Monte Carlo study, we find that the CnPC estimators have good small-sample properties in terms of estimation and forecasting performance when compared to regular PC and to the generalized PC method (Choi [2012]). 
Keywords:  High dimensionality, unknown factors, principal components, cross-sectional correlation, shrinkage regression, out-of-sample forecasting 
JEL:  C11 C13 C33 C53 C55 
Date:  2017–04 
URL:  http://d.repec.org/n?u=RePEc:swe:wpaper:201712&r=ecm 
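For intuition, the construction can be sketched in a few lines of Python. This is a minimal illustration only: the diagonal-shrinkage regularization, the function name, and the tuning parameter `alpha` are assumptions for the sketch, not the paper's actual CnPC regularization.

```python
import numpy as np

def shrunk_pc_factors(X, k, alpha=0.5):
    """Estimate k factors as principal components of a regularized
    (here: diagonally shrunken) covariance matrix.  alpha is a
    hypothetical tuning parameter for this sketch."""
    T, N = X.shape
    Xc = X - X.mean(axis=0)                       # demean each series
    S = Xc.T @ Xc / T                             # sample covariance (N x N)
    S_reg = (1 - alpha) * S + alpha * np.diag(np.diag(S))
    eigval, eigvec = np.linalg.eigh(S_reg)        # ascending eigenvalues
    L = eigvec[:, -k:][:, ::-1] * np.sqrt(N)      # loadings, PC normalization
    F = Xc @ L / N                                # factor estimates (T x k)
    return F, L
```

With a strong single factor and weak idiosyncratic noise, the first estimated factor is (up to sign) nearly perfectly correlated with the true one.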
By:  Anton Skrobotov (RANEPA); Giuseppe Cavaliere (Department of Statistical Sciences, University of Bologna); Robert Taylor (Essex Business School, University of Essex) 
Abstract:  This paper investigates the behaviour of the well-known HEGY (Hylleberg, Engle, Granger and Yoo, 1990, Journal of Econometrics, vol. 44, pp. 215-238) regression-based seasonal unit root tests in cases where the driving shocks are allowed to display periodic non-stationary volatility and conditional heteroskedasticity. Our set-up allows for periodic heteroskedasticity, non-stationary volatility and (seasonal) GARCH as special cases. We show that the limiting null distributions of the HEGY tests depend, in general, on nuisance parameters which derive from the underlying volatility process. Monte Carlo simulations show that the standard HEGY tests can be substantially oversized in the presence of such effects. As a consequence, we propose bootstrap implementations of the HEGY tests, based around a seasonal block wild bootstrap principle. This is shown to deliver asymptotically pivotal inference under our general conditions on the shocks. Simulation evidence is presented which suggests that our proposed bootstrap tests perform well in practice, largely correcting the size problems seen with the standard HEGY tests even under extreme patterns of heteroskedasticity, yet losing little finite-sample power relative to the standard HEGY tests. 
Keywords:  seasonal unit roots, (periodic) non-stationary volatility, conditional heteroskedasticity, wild bootstrap 
JEL:  C12 C22 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:gai:wpaper:wpaper2016269&r=ecm 
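The resampling idea can be sketched as follows. This is one illustrative reading of the seasonal block wild bootstrap principle: residuals within each year (a block of S consecutive seasons) are multiplied by a common external Rademacher draw, which preserves within-year volatility patterns. The paper's full algorithm, including re-estimation of the HEGY regression on each bootstrap sample, is omitted.

```python
import numpy as np

def seasonal_block_wild_bootstrap(resid, S, rng):
    """One seasonal-block wild bootstrap draw: each year of S
    consecutive residuals is multiplied by the same Rademacher
    (+1/-1) variable, drawn independently across years."""
    T = len(resid)
    n_blocks = int(np.ceil(T / S))
    w = rng.choice([-1.0, 1.0], size=n_blocks)    # one draw per year
    return resid * np.repeat(w, S)[:T]
```

By construction, the draw leaves the magnitudes of the residuals unchanged and flips signs only in whole-year blocks.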
By:  Ferman, Bruno; Pinto, Cristine 
Abstract:  The synthetic control (SC) method has recently been proposed as an alternative way to estimate treatment effects in comparative case studies. An important feature of the SC method is the inferential procedure based on placebo studies suggested in Abadie et al. (2010). In this paper, we evaluate the statistical properties of these inferential techniques. We first show that the graphical analysis with placebos can be misleading, as placebo runs with lower expected squared prediction errors would still be considered in the analysis. Then we show that a test based on the ratio of post- to pre-intervention mean squared prediction errors, as suggested in Abadie et al. (2010), ameliorates this problem. However, we show that such a test can still have some size distortions, even if we consider a case in which the test statistic has the same marginal distribution for all placebo runs. Finally, we show that the fact that the SC weights are estimated can lead to important additional size distortions. 
Keywords:  synthetic control, difference-in-differences, linear factor model, inference, permutation test 
JEL:  C12 C13 C21 C23 
Date:  2017–04–01 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:78079&r=ecm 
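The post/pre-intervention MSPE-ratio permutation test discussed above can be sketched in a few lines. The helper below is hypothetical: `gaps` is assumed to hold the unit-minus-synthetic outcome gaps for the treated unit (first row) and every placebo run, and `T0` is the number of pre-intervention periods.

```python
import numpy as np

def placebo_pvalue(gaps, T0):
    """Permutation p-value of the post/pre-intervention MSPE ratio,
    in the spirit of Abadie et al. (2010).  gaps: (units x T) array,
    treated unit in row 0.  The p-value is the share of units whose
    ratio is at least as large as the treated unit's."""
    pre = np.mean(gaps[:, :T0] ** 2, axis=1)      # pre-intervention MSPE
    post = np.mean(gaps[:, T0:] ** 2, axis=1)     # post-intervention MSPE
    ratio = post / pre
    return np.mean(ratio >= ratio[0])
```

With 20 units, the smallest attainable p-value is 1/20 = 0.05, which is one reason the paper's concern with size distortions matters in small donor pools.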
By:  Christophe Boucher; Gilles de Truchis; Elena Dumitrescu; Sessi Tokpavi 
Abstract:  This paper proposes a simple and parsimonious semiparametric testing procedure for variance transmission. Our test focuses on conditional extreme values of the unobserved process of integrated variance, since these are of utmost concern for policy makers due to their sudden and destabilizing effects. The test statistic is based on realized measures of variance and has a convenient asymptotic chi-squared distribution under the null hypothesis of no Granger causality, which is free of estimation risk. Extensive Monte Carlo simulations show that the test has good size and power properties in small samples. An extension to the case of spillovers in quadratic variation is also developed. An empirical application on extreme variance transmission from US to EU equity markets is further proposed. We find that the test performs very well in identifying periods of significant causality in extreme variance, which are subsequently found to be correlated with changes in US monetary policy. 
Keywords:  Extreme volatility transmission, Granger causality, Integrated variance, Realized variance, Semiparametric test, Financial contagion. 
JEL:  C12 C32 C58 
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:drm:wpaper:201720&r=ecm 
By:  Wenger, Kai; Leschinski, Christian; Sibbertsen, Philipp 
Abstract:  We propose a simple test for structural change in long-range dependent time series. It is based on the idea that the test statistic of the standard CUSUM test retains its asymptotic distribution if it is applied to fractionally differenced data. We prove that our approach is asymptotically valid if the memory parameter is estimated consistently under the null hypothesis. Therefore, the well-known CUSUM test can be used on the differenced data without any further modification. In a simulation study, we compare our test with a CUSUM test for structural change that is specifically constructed for long-memory time series and show that our approach performs well. 
Keywords:  Fractional Integration; Structural Breaks; Long Memory 
JEL:  C12 C22 
Date:  2017–04 
URL:  http://d.repec.org/n?u=RePEc:han:dpaper:dp592&r=ecm 
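The two-step procedure — fractionally difference the data, then apply the standard CUSUM test — can be sketched as follows. The sketch assumes the memory parameter d is known; in practice d must be estimated consistently, as the abstract notes, and the helper names are illustrative.

```python
import numpy as np

def frac_diff(x, d):
    """Apply the fractional difference filter (1 - L)^d via its
    binomial expansion; for d = 1 this reduces to first differences
    (with x[0] kept as the initial value)."""
    n = len(x)
    w = np.ones(n)
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k         # binomial filter weights
    out = np.empty(n)
    for t in range(n):
        out[t] = w[:t + 1] @ x[t::-1]
    return out

def cusum_stat(u):
    """Standard CUSUM statistic: sup_t |S_t| / (sigma_hat * sqrt(T)),
    where S_t are partial sums of the demeaned series."""
    T = len(u)
    s = np.cumsum(u - u.mean())
    return np.max(np.abs(s)) / (u.std(ddof=0) * np.sqrt(T))
```

The point of the paper is that `cusum_stat(frac_diff(x, d_hat))` can be compared against the usual CUSUM critical values without further modification.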
By:  Timo Dimitriadis; Sebastian Bayer 
Abstract:  We introduce a novel regression framework which simultaneously models the quantile and the Expected Shortfall (ES) of a response variable given a set of covariates. The foundation for this joint regression is a recent result by Fissler and Ziegel (2016), who show that the quantile and the ES are jointly elicitable. This joint elicitability allows for M- and Z-estimation of the joint regression parameters. Such parameter estimation is not possible for an Expected Shortfall regression alone, as Expected Shortfall is not elicitable on its own. We show consistency and asymptotic normality for the M- and Z-estimators under standard regularity conditions. The loss function used for the M-estimation depends on two specification functions, whose choices affect the properties of the resulting estimators. In an extensive simulation study, we verify the asymptotic properties and analyze the small-sample behavior of the M-estimator under different choices for the specification functions. This joint regression framework allows for various applications, including estimating, forecasting and backtesting Expected Shortfall, which is particularly relevant in light of the upcoming introduction of Expected Shortfall in the Basel Accords. 
Date:  2017–04 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1704.02213&r=ecm 
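A concrete member of the Fissler-Ziegel loss family makes the M-estimation idea tangible. The particular choice of the two specification functions below is an illustrative assumption (valid when e < q < 0, i.e. for left-tail quantile and ES); the paper studies several such choices, and minimising the sample average of this loss over regression parameters yields the M-estimator.

```python
import numpy as np

def fz_loss(y, q, e, alpha):
    """One admissible member of the Fissler-Ziegel joint loss family
    for the alpha-quantile q and Expected Shortfall e of y, assuming
    e < q < 0.  Its expectation is minimised at the true (q, e)."""
    ind = (y <= q).astype(float)
    return -ind * (q - y) / (alpha * e) + q / e + np.log(-e) - 1.0
```

For a standard normal sample, the average loss evaluated at the true 5% quantile and ES is smaller than at misspecified values, which is exactly the property that M-estimation exploits.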
By:  Isabel Narbón-Perpiñá (Department of Economics, Universitat Jaume I, Castellón, Spain); Mª Teresa Balaguer-Coll (Department of Accounting and Finance, Universitat Jaume I, Castellón, Spain); Marko Petrovic (LEE and Department of Economics, Universitat Jaume I, Castellón, Spain); Emili Tortosa-Ausina (IVIE and Department of Economics, Universitat Jaume I, Castellón, Spain) 
Abstract:  We analyse overall cost efficiency in Spanish local governments during the crisis period (2008–2013). To this end, we first consider some of the most popular methods to evaluate local government efficiency, DEA (Data Envelopment Analysis) and FDH (Free Disposal Hull), as well as recent proposals, namely the order-m partial frontier and the nonparametric estimator proposed by Kneip, Simar and Wilson (2008), which are also nonparametric approaches. Second, we compare the methodologies used to measure efficiency. In contrast to previous literature, which has regularly compared techniques and made proposals for alternative methodologies, we follow recent proposals (Badunenko et al., 2012) with the aim of comparing the four methods and choosing the one which performs best with our particular dataset, that is, the most appropriate method for measuring local government cost efficiency in Spain. We carry out the experiment via Monte Carlo simulations and discuss the relative performance of the efficiency scores under various scenarios. Our results suggest that there is no single approach suitable for all efficiency analyses. We find that for our sample of 1,574 Spanish local governments, the average cost efficiency would have been between 0.54 and 0.77 during the period 2008–2013, suggesting that Spanish local governments could have achieved the same level of local outputs with about 23% to 36% fewer resources. 
Keywords:  OR in government, efficiency, local government, nonparametric frontiers 
JEL:  C14 C15 H70 R15 
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:jau:wpaper:2017/06&r=ecm 
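Of the estimators compared, FDH is the simplest to state, and a minimal single-input sketch conveys the idea (illustrative only, not the paper's implementation, which handles multiple inputs and the order-m and Kneip-Simar-Wilson refinements):

```python
import numpy as np

def fdh_input_efficiency(x, Y):
    """Input-oriented Free Disposal Hull efficiency with a single
    input x (e.g. total cost, length n) and outputs Y (n x m).
    Unit i's score is the smallest cost ratio x_j / x_i among units j
    producing at least as much of every output (i dominates itself,
    so scores are at most 1)."""
    n = len(x)
    scores = np.ones(n)
    for i in range(n):
        dominating = np.all(Y >= Y[i], axis=1)    # units weakly better on all outputs
        scores[i] = np.min(x[dominating] / x[i])
    return scores
```

A score of 0.8 reads as "the same outputs were produced elsewhere at 80% of this unit's cost", matching the interpretation of the 0.54–0.77 averages in the abstract.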
By:  Tom Boot (University of Groningen); Andreas Pick (Erasmus University Rotterdam, De Nederlandsche Bank and CESifo Institute) 
Abstract:  We propose a near-optimal test for structural breaks of unknown timing when the purpose of the analysis is to obtain accurate forecasts under squared error loss. A bias-variance trade-off exists under squared forecast error loss, which implies that small structural breaks should be ignored. We study critical break sizes, assess the relevance of the break location, and provide a test to determine whether modeling a break will improve forecast accuracy. Asymptotic critical values and near-optimality properties are established allowing for a break under the null, where the critical break size varies with the break location. The results are extended to a class of shrinkage forecasts with our test statistic as the shrinkage constant. Empirical results on a large number of macroeconomic time series show that structural breaks that are relevant for forecasting occur much less frequently than indicated by existing tests. 
Keywords:  structural break test, forecasting, squared error loss 
JEL:  C12 C53 
Date:  2017–04–18 
URL:  http://d.repec.org/n?u=RePEc:tin:wpaper:20170039&r=ecm 
By:  Marie Kratz (MAP5, Mathématiques Appliquées à Paris 5, CNRS, Université Paris Descartes); Yen Lok (Heriot-Watt University); Alexander McNeil (University of York) 
Abstract:  Under the Fundamental Review of the Trading Book (FRTB), capital charges for the trading book are based on the coherent expected shortfall (ES) risk measure, which shows greater sensitivity to tail risk. In this paper it is argued that backtesting of expected shortfall (or the trading book model from which it is calculated) can be based on a simultaneous multinomial test of value-at-risk (VaR) exceptions at different levels, an idea supported by an approximation of ES in terms of multiple quantiles of a distribution proposed in Emmer et al. (2015). By comparing Pearson, Nass and likelihood-ratio tests (LRTs) for different numbers of VaR levels N, it is shown in a series of simulation experiments that multinomial tests with N ≥ 4 are much more powerful at detecting misspecifications of trading book loss models than standard binomial exception tests corresponding to the case N = 1. Each test has its merits: Pearson offers simplicity; Nass is robust in its size properties to the choice of N; the LRT is very powerful, though slightly oversized in small samples and more computationally burdensome. A traffic-light system for trading book models based on the multinomial test is proposed, and the recommended procedure is applied to a real-data example spanning the 2008 financial crisis. 
Keywords:  multinomial distribution, Nass test, Pearson test, risk management, risk measure, statistical test, tail of distribution, backtesting, banking regulation, coherence, elicitability, expected shortfall, heavy tail, likelihood ratio test, value-at-risk 
Date:  2016–11 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:hal01424279&r=ecm 
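The multinomial idea can be sketched for the Pearson variant. The helper below is hypothetical: each loss is classified by how many of the N VaR levels it exceeds, and the observed cell counts are compared with the probabilities implied by the exception levels; the Nass and LRT variants differ only in the statistic computed from the same cell counts.

```python
import numpy as np

def pearson_multinomial_backtest(losses, var, alphas):
    """Pearson multinomial test of VaR exceptions at several levels.
    losses: length-T array; var: (T x N) VaR estimates at decreasing
    exception probabilities alphas (e.g. [0.05, 0.025, 0.01]).
    Returns the Pearson X^2 statistic, which has N degrees of freedom
    under correct model specification."""
    T, N = var.shape
    cells = (losses[:, None] > var).sum(axis=1)   # VaR levels exceeded, 0..N
    obs = np.bincount(cells, minlength=N + 1)
    a = np.concatenate(([1.0], alphas, [0.0]))
    p = a[:-1] - a[1:]                            # multinomial cell probabilities
    exp = T * p
    return float(((obs - exp) ** 2 / exp).sum())
```

When the observed cell counts exactly match their expected values, the statistic is zero; larger values indicate a misspecified tail.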
By:  M. Hashem Pesaran; Takashi Yamagata 
Abstract:  This paper proposes a novel test of zero pricing errors for the linear factor pricing model when the number of securities, N, can be large relative to the time dimension, T, of the return series. The test is based on Student t tests of individual securities and has a number of advantages over the existing standardised Wald-type tests. It allows for non-Gaussianity and general forms of weakly cross-correlated errors. It does not require estimation of an invertible error covariance matrix, it is much faster to implement, and it is valid even if N is much larger than T. Monte Carlo evidence shows that the proposed test performs remarkably well even when T = 60 and N = 5,000. The test is applied to monthly returns on securities in the S&P 500 at the end of each month in real time, using rolling windows of size 60. Statistically significant evidence against the Sharpe-Lintner CAPM and Fama-French three-factor models is found mainly during the recent financial crisis. We also find a significant negative correlation between twelve-month moving averages of the test's p-values and the excess returns of long/short equity strategies (relative to the return on the S&P 500) over the period November 1994 to June 2015, suggesting that abnormal profits are earned during episodes of market inefficiencies. 
Keywords:  CAPM, Testing for alpha, Weak and spatial error cross-sectional dependence, S&P 500 securities, Long/short equity strategy 
JEL:  C12 C15 C23 G11 G12 
Date:  2017–04 
URL:  http://d.repec.org/n?u=RePEc:yor:yorken:17/04&r=ecm 
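The building block of the test — per-security t-statistics for alpha — can be sketched as follows. The aggregation into the paper's test statistic, including the standardisation and the correction for cross-sectional error correlation, follows the paper and is omitted here; the helper name is illustrative.

```python
import numpy as np

def alpha_tstats(R, F):
    """OLS t-statistics for the intercept (alpha) of each security.
    R: (T x N) excess returns; F: (T x k) factor returns.  Each column
    of R is regressed on a constant and the factors; the returned
    vector holds the N t-statistics for the intercepts."""
    T, N = R.shape
    X = np.column_stack([np.ones(T), F])          # add intercept
    XtXinv = np.linalg.inv(X.T @ X)
    B = XtXinv @ X.T @ R                          # (k+1) x N coefficients
    U = R - X @ B                                 # residuals
    s2 = (U ** 2).sum(axis=0) / (T - X.shape[1])  # per-security error variance
    se_alpha = np.sqrt(s2 * XtXinv[0, 0])
    return B[0] / se_alpha
```

Under the null of zero pricing errors, these t-statistics are individually centred at zero, which is what makes their aggregation across N securities informative.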