
on Econometrics 
By:  Eric Gautier (CREST, ENSAE); Alexandre Tsybakov (CREST, ENSAE) 
Abstract:  We propose a new method of estimation in the high-dimensional linear regression model. It allows for very weak distributional assumptions, including heteroscedasticity, and does not require knowledge of the variance of the random errors. The method is based on linear programming only, so its numerical implementation is faster than for previously known techniques using conic programs, and it allows one to deal with higher-dimensional models. We provide upper bounds for the estimation and prediction errors of the proposed estimator, showing that it achieves the same rate as in the more restrictive situation of fixed design and i.i.d. Gaussian errors with known variance. Following Gautier and Tsybakov (2011), we obtain the results under sensitivity assumptions weaker than the restricted eigenvalue or related conditions. 
Date:  2013–03 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1303.7092&r=ecm 
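The reduction of high-dimensional regression to a linear program can be illustrated with the closely related Dantzig selector of Candès and Tao (2007); this is a hedged sketch of that standard LP formulation, not the authors' own estimator, using scipy.optimize.linprog:

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, lam):
    """Solve min ||beta||_1  s.t.  ||X'(y - X beta)||_inf <= lam  as an LP.

    Write beta = u - v with u, v >= 0, so ||beta||_1 = sum(u) + sum(v)
    and the sup-norm constraint becomes 2p linear inequalities.
    """
    n, p = X.shape
    G, b = X.T @ X, X.T @ y
    c = np.ones(2 * p)                      # objective: sum(u) + sum(v)
    A_ub = np.block([[G, -G], [-G, G]])     # |G(u - v) - b| <= lam
    b_ub = np.concatenate([lam + b, lam - b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    u, v = res.x[:p], res.x[p:]
    return u - v
```

On noiseless data with a sparse coefficient vector, the LP recovers the true coefficients up to the slack allowed by `lam`.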
By:  Christophe Ley; Thomas Verdebout 
Keywords:  circular statistics; Fisher information singularity; Rayleigh test for uniformity; skewed distributions; tests for symmetry 
Date:  2013–03 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2013/142644&r=ecm 
By:  Anne Neumann; Maria Nieswand; Torben Schubert 
Abstract:  Nonparametric efficiency analysis has become a widely applied technique to support industrial benchmarking as well as a variety of incentive-based regulation policies. In practice such exercises are often plagued by incomplete knowledge about the correct specifications of inputs and outputs. Simar and Wilson (2001) and Schubert and Simar (2011) propose restriction tests to support such specification decisions for cross-section data. However, the typical oligopolized market structure pertinent to regulation contexts often leads to low numbers of cross-section observations, rendering reliable estimation based on these tests practically infeasible. This small-sample problem could often be avoided with the use of panel data, which would in any case require an extension of the cross-section restriction tests to handle panel data. In this paper we derive these tests. We prove the consistency of the proposed method and apply it to a sample of US natural gas transmission companies from 2003 through 2007. We find that the total quantity of gas delivered and gas delivered in peak periods measure essentially the same output. Therefore only one needs to be included. We also show that the length of mains as a measure of transportation service is nonredundant and therefore must be included. 
Keywords:  Benchmarking models, network industries, nonparametric efficiency estimation, data envelopment analysis, testing restrictions, subsampling, Bootstrap 
JEL:  C14 L51 L95 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:diw:diwwpp:dp1283&r=ecm 
By:  Massimiliano Caporin; Michael McAleer (University of Canterbury) 
Abstract:  The purpose of the paper is to discuss ten things potential users should know about the limits of the Dynamic Conditional Correlation (DCC) representation for estimating and forecasting time-varying conditional correlations. The reasons given for caution about the use of DCC include the following: DCC represents the dynamic conditional covariances of the standardized residuals, and hence does not yield dynamic conditional correlations; DCC is stated rather than derived; DCC has no moments; DCC does not have testable regularity conditions; DCC yields inconsistent two-step estimators; DCC has no asymptotic properties; DCC is not a special case of GARCC, which has testable regularity conditions and standard asymptotic properties; DCC is not dynamic empirically as the effect of news is typically extremely small; DCC cannot be distinguished empirically from diagonal BEKK in small systems; and DCC may be a useful filter or a diagnostic check, but it is not a model. 
Keywords:  DCC; BEKK; GARCC; Stated representation; Derived model; Conditional covariances; Conditional correlations; Regularity conditions; Moments; Two-step estimators; Assumed properties; Asymptotic properties; Filter; Diagnostic check 
JEL:  C18 C32 C58 G17 
Date:  2013–03–19 
URL:  http://d.repec.org/n?u=RePEc:cbt:econwp:13/16&r=ecm 
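The first and last points above are easy to see from the standard DCC recursion itself: R_t is obtained by rescaling a covariance-like quantity Q_t built from standardized residuals, and the whole construction runs as a filter. A minimal sketch, assuming standardized residuals `eps` and an unconditional matrix `S` are given:

```python
import numpy as np

def dcc_filter(eps, S, a, b):
    """Run the standard DCC recursion on standardized residuals eps (T x k):
    Q_t = (1-a-b) S + a e_{t-1} e_{t-1}' + b Q_{t-1},
    R_t = diag(Q_t)^{-1/2} Q_t diag(Q_t)^{-1/2}.
    """
    T, k = eps.shape
    Q = S.copy()
    R = np.empty((T, k, k))
    for t in range(T):
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)   # rescale Q_t to have unit diagonal
        Q = (1 - a - b) * S + a * np.outer(eps[t], eps[t]) + b * Q
    return R
```

By construction every R_t has unit diagonal and is symmetric, but it is a standardization of conditional covariances of the standardized residuals, which is exactly the distinction the paper emphasizes.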
By:  Yingying Li; Zhiyuan Zhang; Xinghua Zheng 
Abstract:  In this article we consider volatility inference in the presence of both market microstructure noise and endogenous time. Estimators of the integrated volatility in such a setting are proposed, and their asymptotic properties are studied. Our proposed estimator is compared with existing popular volatility estimators via numerical studies. The results show that our estimator can have substantially better performance when time endogeneity exists. 
Date:  2013–03 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1303.5809&r=ecm 
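A representative noise-robust benchmark of the kind such proposals are compared against is the two-scale realized volatility (TSRV) estimator of Zhang, Mykland and Aït-Sahalia (2005); the sketch below implements that standard estimator, not the authors' endogenous-time estimator:

```python
import numpy as np

def tsrv(prices, K=30):
    """Two-scale realized volatility: average subsampled RV at scale K,
    bias-corrected by the full-frequency RV, which is dominated by noise."""
    p = np.log(prices)
    n = len(p) - 1
    rv_all = np.sum(np.diff(p) ** 2)                       # noise-dominated
    rv_k = np.mean([np.sum(np.diff(p[k::K]) ** 2) for k in range(K)])
    n_bar = (n - K + 1) / K                                # avg. subsample size
    return rv_k - (n_bar / n) * rv_all
```

On simulated prices contaminated with i.i.d. microstructure noise, TSRV is far closer to the true integrated variance than the naive realized volatility.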
By:  Yong Bao (Department of Economics, Purdue University); Aman Ullah (Department of Economics, University of California,); Yun Wang (School of International Trade and Economics, University of International Business and Economics); Jun Yu (Sim Kee Boon Institute for Financial Economics, School of Economics and Lee Kong Chian School of Business, Singapore Management University) 
Abstract:  This paper develops the approximate finite-sample bias of the ordinary least squares or quasi-maximum likelihood estimator of the mean reversion parameter in continuous-time Lévy processes. For the special case of Gaussian processes, our results reduce to those of Tang and Chen (2009) (when the long-run mean is unknown) and Yu (2012) (when the long-run mean is known). Simulations show that in general the approximate bias works well in capturing the true bias of the mean reversion estimator under different scenarios. However, when the time span is small and the mean reversion parameter is approaching its lower bound, we find it more difficult to approximate the finite-sample bias well. 
JEL:  C10 C22 
Date:  2013–03 
URL:  http://d.repec.org/n?u=RePEc:siu:wpaper:022013&r=ecm 
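The finite-sample bias being approximated is easy to reproduce in a small Monte Carlo experiment. A sketch for the Gaussian (Ornstein-Uhlenbeck) special case with a known zero long-run mean, where the OLS estimator of the mean reversion parameter kappa is biased upward; the parameter values are illustrative assumptions:

```python
import numpy as np

def simulate_ou_paths(kappa, sigma, h, T, reps, rng):
    """Exact discretization of zero-mean OU paths, started stationary."""
    phi = np.exp(-kappa * h)                         # AR(1) coefficient
    sd = sigma * np.sqrt((1 - phi**2) / (2 * kappa)) # innovation std. dev.
    x = np.empty((reps, T))
    x[:, 0] = rng.normal(0, sigma / np.sqrt(2 * kappa), reps)
    for t in range(1, T):
        x[:, t] = phi * x[:, t - 1] + sd * rng.normal(size=reps)
    return x

def kappa_ols(x, h):
    """OLS estimator of kappa with known zero mean: regress x_t on x_{t-1}."""
    phi_hat = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
    return -np.log(phi_hat) / h
```

Averaging the estimates over many replications shows the well-known upward bias in the mean reversion estimate (the AR(1) coefficient is biased toward zero).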
By:  Becheri, I.G.; Drost, F.C.; Akker, R. van den (Tilburg University, Center for Economic Research) 
Abstract:  This paper considers optimal unit root tests for a Gaussian cross-sectionally independent heterogeneous panel with incidental intercepts and heterogeneous alternatives generated by random perturbations. We derive the (asymptotic and local) power envelope for two models: an auxiliary model in which both the panel units and the random perturbations are observed, and the model of main interest, in which only the panel units are observed. We show that both models are Locally Asymptotically Normal (LAN). It turns out that there is an information loss: the power envelope for the auxiliary model lies above that for the model of main interest, with equality only when the alternatives are homogeneous. Our results exactly identify the setting in which the unit root test of Moon, Perron, and Phillips (2007) is asymptotically UMP and, in fact, show that it is not possible to exploit possible heterogeneity in the alternatives, confirming a conjecture of Breitung and Pesaran (2008). Moreover, we propose a new asymptotically optimal test and extend the results to a model allowing for cross-sectional dependence. 
Keywords:  panel unit root test; Local Asymptotic Normality; limit experiment; asymptotic power envelope; information loss 
JEL:  C22 C23 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:2013017&r=ecm 
By:  Kabatek, J. (Tilburg University, Center for Economic Research) 
Abstract:  The Expectation-Maximization (EM) algorithm is a well-established estimation procedure which is used in many domains of econometric analysis. A recent application in a discrete choice framework (Train, 2008) facilitated estimation of latent class models allowing for very flexible treatment of unobserved heterogeneity. The high flexibility of these models is, however, offset by often excessively long computation times, due to the iterative nature of the EM algorithm. This paper proposes a simple adjustment to the estimation procedure which proves to achieve substantial gains in terms of convergence speed without compromising any of the advantages of the original routine. The enhanced algorithm caps the number of iterations computed by the inner EM loop near its minimum, thereby avoiding optimization over suboptimally populated classes. Performance of the algorithm is assessed on a series of simulations, with the adjusted algorithm being 3-5 times faster than the original routine. 
Keywords:  EM algorithm;discrete choice models;latent class models 
JEL:  C14 C63 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:2013019&r=ecm 
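The idea of capping the inner loop can be illustrated with a generalized-EM sketch for a latent-class logit. This is not Kabatek's exact routine: here the cap is implemented by limiting the M-step to a few BFGS iterations per class rather than running it to convergence, which is the analogous speed-versus-precision trade-off:

```python
import numpy as np
from scipy.optimize import minimize

def latent_class_logit_em(y, X, n_classes=2, outer=50, inner_cap=3, seed=0):
    """Generalized EM for a latent-class logit with a capped inner loop."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    betas = rng.normal(size=(n_classes, p))
    pi = np.full(n_classes, 1.0 / n_classes)

    def loglik_i(beta):
        eta = X @ beta
        return y * eta - np.logaddexp(0.0, eta)  # per-obs. Bernoulli log-lik.

    for _ in range(outer):
        # E-step: posterior class-membership probabilities (log-sum-exp safe)
        ll = np.stack([loglik_i(b) for b in betas])      # (C, n)
        w = np.log(pi)[:, None] + ll
        w = np.exp(w - w.max(0))
        w /= w.sum(0)
        pi = w.mean(1)
        # Capped M-step: only inner_cap BFGS iterations per class
        for c in range(n_classes):
            obj = lambda b, c=c: -np.sum(w[c] * loglik_i(b))
            res = minimize(obj, betas[c], method="BFGS",
                           options={"maxiter": inner_cap})
            betas[c] = res.x
    return pi, betas
```

Because each outer iteration still weakly increases the likelihood, the capped M-step preserves the monotonicity property of EM while avoiding wasted effort on sparsely populated classes.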
By:  Marc Hallin; Marco Lippi 
Abstract:  High-dimensional time series may well be the most common type of dataset in the so-called “big data” revolution, and have entered current practice in many areas, including meteorology, genomics, chemometrics, connectomics, complex physics simulations, biological and environmental research, finance and econometrics. The analysis of such datasets poses significant challenges, both from a statistical and from a numerical point of view. The most successful procedures so far have been based on dimension reduction techniques and, more particularly, on high-dimensional factor models. Those models have been developed, essentially, within time series econometrics, and deserve to be better known in other areas. In this paper, we provide an original time-domain presentation of the methodological foundations of those models (dynamic factor models usually are described via a spectral approach), contrasting such concepts as commonality and idiosyncrasy, factors and common shocks, dynamic and static principal components. That time-domain approach emphasizes the fact that, contrary to the static factor models favored by practitioners, the so-called general dynamic factor model essentially does not impose any constraints on the data-generating process, but follows from a general representation result. 
Date:  2013–03 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2013/142428&r=ecm 
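The static principal components contrasted above can be computed in a few lines; this sketch extracts factors and loadings from an (T x N) panel by singular value decomposition, the textbook static-factor estimator:

```python
import numpy as np

def static_factors(X, r):
    """Principal-component estimates of an r-factor model for X (T x N).

    Returns factors F (T x r) and loadings L (N x r), so that F @ L.T
    approximates the (centered) common component of X.
    """
    Xc = X - X.mean(0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    F = U[:, :r] * s[:r]   # principal-component factors
    L = Vt[:r].T           # corresponding loadings
    return F, L
```

When the data are generated by a low-rank factor structure plus small idiosyncratic noise, the estimated common component F @ L.T absorbs almost all of the variance.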
By:  Eran Raviv (Erasmus University Rotterdam) 
Abstract:  When the yield curve is modelled using an affine factor model, residuals may still contain relevant information and do not adhere to the familiar white noise assumption. This paper proposes a pragmatic way to improve out-of-sample performance for yield curve forecasting. The proposed adjustment is illustrated via a pseudo out-of-sample forecasting exercise implementing the widely used Dynamic Nelson-Siegel model. Large improvements in forecasting performance are achieved throughout the curve for different forecasting horizons. Results are robust to different time periods, as well as to different model specifications. 
Keywords:  Yield curve; Nelson-Siegel; Time-varying loadings; Factor models 
JEL:  E43 E47 G17 
Date:  2013–03–07 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20130041&r=ecm 
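The Dynamic Nelson-Siegel setup underlying the exercise starts from the Diebold-Li loadings; the sketch below extracts level, slope and curvature factors by cross-sectional least squares (the paper's residual adjustment is not implemented here, and the decay parameter 0.0609 is the conventional Diebold-Li choice for maturities in months):

```python
import numpy as np

def nelson_siegel_loadings(maturities, lam=0.0609):
    """Nelson-Siegel loading matrix (Diebold-Li parameterization)."""
    m = np.asarray(maturities, float)
    x = lam * m
    slope = (1 - np.exp(-x)) / x        # slope loading
    curv = slope - np.exp(-x)           # curvature loading
    return np.column_stack([np.ones_like(m), slope, curv])

def ns_factors(yields, maturities, lam=0.0609):
    """Cross-sectional OLS of yields (T x M) on the NS loadings.

    Returns a (T x 3) array of level, slope and curvature factors."""
    L = nelson_siegel_loadings(maturities, lam)
    beta, *_ = np.linalg.lstsq(L, yields.T, rcond=None)
    return beta.T
```

On a noiseless curve generated from known factors, the regression recovers them exactly, since the loading matrix has full column rank.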
By:  Markku Lanne; Jani Luoto 
Abstract:  We propose a noncausal autoregressive model with time-varying parameters, and apply it to U.S. postwar inflation. The model fits the data well, and the results suggest that inflation persistence follows from future expectations. Persistence declined in the early 1980s and slightly increased again in the late 1990s. Estimates of the new Keynesian Phillips curve indicate that current inflation also depends on past inflation, although future expectations dominate. The implied trend inflation estimate evolves smoothly and is well aligned with survey expectations. There is evidence that the variation in trend inflation follows from the underlying marginal cost that drives inflation. 
JEL:  C22 C51 C53 E31 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:diw:diwwpp:dp1285&r=ecm 