
on Econometrics 
By:  Tschernig, Rolf; Weber, Enzo; Weigand, Roland 
Abstract:  Fractionally integrated vector autoregressive models allow one to capture persistence in time series data in a very flexible way. Additional flexibility for the short memory properties of the model can be attained by using the fractional lag operator of Johansen (2008) in the vector autoregressive polynomial. However, this also makes maximum likelihood estimation more difficult. In this paper we first identify parameter settings for univariate and bivariate models that suffer from poor identification in finite samples and may therefore lead to estimation problems. Second, we propose to investigate the extent of poor identification by using expected log-likelihoods and variations thereof, which are faster to simulate than multivariate finite sample distributions of parameter estimates. Third, we provide a line of reasoning that explains the finding from several univariate and bivariate simulation examples that the two-step estimator suggested by Tschernig, Weber, and Weigand (2010) can be more robust with respect to estimating the deterministic components than the maximum likelihood estimator. 
Keywords:  fractional integration; long memory; maximum likelihood estimation; fractional lag operator 
JEL:  C32 C51 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:bay:rdwiwi:27269&r=ecm 
By:  Timothy B. Armstrong (Cowles Foundation, Yale University); Hock Peng Chan (National University of Singapore) 
Abstract:  This paper considers inference for conditional moment inequality models using a multiscale statistic. We derive the asymptotic distribution of this test statistic and use the result to propose feasible critical values that have a simple analytic formula. We also propose critical values based on a modified bootstrap procedure and prove their asymptotic validity. The asymptotic distribution is extreme value, and the proof uses new techniques to overcome several technical obstacles. We provide power results that show that our test detects local alternatives that approach the identified set at the best possible rate under a set of conditions that hold generically in the set identified case in a broad class of models, and that our test is adaptive to the smoothness properties of the data generating process. Our results also have implications for the use of moment selection procedures in this setting. We provide a Monte Carlo study and an empirical illustration of inference in a regression model with endogenously censored and missing data. 
Keywords:  Moment inequalities, Set inference, Adaptive inference 
JEL:  C01 C14 C34 
Date:  2013–01 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1885&r=ecm 
By:  Olivier Ledoit; Michael Wolf 
Abstract:  Covariance matrix estimation and principal component analysis (PCA) are two cornerstones of multivariate analysis. Classic textbook solutions perform poorly when the dimension of the data is of a magnitude similar to the sample size, or even larger. In such settings, there is a common remedy for both statistical problems: nonlinear shrinkage of the eigenvalues of the sample covariance matrix. The optimal nonlinear shrinkage formula depends on unknown population quantities and is thus not available. It is, however, possible to consistently estimate an oracle nonlinear shrinkage, which is motivated on asymptotic grounds. A key tool to this end is consistent estimation of the set of eigenvalues of the population covariance matrix (also known as spectrum), an interesting and challenging problem in its own right. Extensive Monte Carlo simulations demonstrate that our methods have desirable finite-sample properties and outperform previous proposals. 
Keywords:  Large-dimensional asymptotics, covariance matrix eigenvalues, nonlinear shrinkage, principal component analysis 
JEL:  C13 
Date:  2013–01 
URL:  http://d.repec.org/n?u=RePEc:zur:econwp:105&r=ecm 
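The eigenvalue-shrinkage idea behind the abstract above can be illustrated with the simpler linear shrinkage rule, in which a single fixed intensity pulls all sample eigenvalues toward their grand mean; the paper's nonlinear oracle estimator moves each eigenvalue individually and is not reproduced here. A minimal Python sketch, with the dimensions and the intensity `delta` chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 60, 40                      # sample size comparable to the dimension
X = rng.standard_normal((n, p))    # true covariance is the identity

S = np.cov(X, rowvar=False)        # sample covariance matrix (p x p)

# Shrink the eigenvalues of S toward their grand mean with one fixed
# intensity delta. Nonlinear shrinkage (the paper above) instead applies
# a data-driven transformation to each eigenvalue separately.
delta = 0.5
eigvals, eigvecs = np.linalg.eigh(S)
shrunk_vals = delta * eigvals.mean() + (1 - delta) * eigvals
Sigma_hat = eigvecs @ np.diag(shrunk_vals) @ eigvecs.T

# Shrinkage compresses the spread of the spectrum, which the sample
# covariance matrix exaggerates when p/n is not small.
spread_sample = eigvals.max() - eigvals.min()
spread_shrunk = shrunk_vals.max() - shrunk_vals.min()
```

Since the true spectrum here is flat (all population eigenvalues equal 1), any spread in the sample eigenvalues is pure estimation noise, which the shrinkage step reduces.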
By:  Marc Hallin; Marcelo J. Moreira; Alexei Onatski 
Abstract:  This paper considers a linear panel data model with reduced rank regressors and interactive fixed effects. The leading example is a factor model where some of the factors are observed while others are not. Invariance considerations yield a maximal invariant statistic whose density does not depend on incidental parameters. It is natural to consider a likelihood ratio test based on the maximal invariant statistic. Its density can be found by using as a prior the unique invariant distribution for the incidental parameters. That invariant distribution is least favorable and leads to minimax optimality properties. Combining the invariant distribution with a prior for the remaining parameters gives a class of admissible tests. A particular choice of distribution yields the spiked covariance model of Johnstone (2001). Numerical simulations suggest that the maximal invariant likelihood ratio test outperforms the standard likelihood ratio test. Tests which are not invariant to data transformations (i) are uniquely represented as randomized tests of the maximal invariant statistic and (ii) do not solve the incidental parameter problem. 
Keywords:  panel data models; factor model; incidental parameters; invariance; integrated likelihood; minimax; likelihood ratio test 
JEL:  C12 C44 
Date:  2013–01 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2013/137736&r=ecm 
By:  Giuseppe Cavaliere (Università di Bologna); Iliyan Georgiev (Universidade Nova de Lisboa) 
Abstract:  We consider estimation and testing in finite-order autoregressive models with a (near) unit root and infinite-variance innovations. We study the asymptotic properties of estimators obtained by dummying out "large" innovations, i.e., those exceeding a given threshold. These estimators reflect the common practice of dealing with large residuals by including impulse dummies in the estimated regression. Iterative versions of the dummy-variable estimator are also discussed. We provide conditions on the preliminary parameter estimator and on the threshold which ensure that (i) the dummy-based estimator is consistent at higher rates than the OLS estimator, (ii) an asymptotically normal test statistic for the unit root hypothesis can be derived, and (iii) order-of-magnitude gains in local power are obtained. 
Keywords:  Autoregressive processes; Infinite variance; Dummy variables 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:bot:quadip:118&r=ecm 
By:  Bai, Zhidong; Li, Hua; Wong, Wing-Keung 
Abstract:  The traditional (plug-in) return for the Markowitz mean-variance (MV) optimization has been demonstrated to seriously overestimate the theoretical optimal return, especially when the dimension-to-sample-size ratio $p/n$ is large. The newly developed bootstrap-corrected estimator corrects the overestimation, but it incurs the "under-prediction problem," does not estimate the corresponding allocation well, and carries higher risk. To circumvent these limitations and to improve the optimal return estimation further, this paper develops the theory of spectral-corrected estimation. We first establish a theorem to explain why the plug-in return greatly overestimates the theoretical optimal return. We prove that in some situations the plug-in return is $\sqrt{\gamma}$ times bigger than the theoretical optimal return, while in other situations it is bigger than, but not necessarily $\sqrt{\gamma}$ times larger than, its theoretical counterpart, where $\gamma = \frac{1}{1-y}$ with $y$ the limit of the ratio $p/n$. Thereafter, we develop the spectral-corrected estimation for the Markowitz MV model, which performs much better than both the plug-in estimation and the bootstrap-corrected estimation, not only in terms of the return but also in terms of the allocation and the risk. We further develop properties of our proposed estimation and conduct a simulation to examine its performance. Our simulation shows that our proposed estimation not only overcomes the problem of "over-prediction" but also circumvents the "under-prediction," "allocation estimation," and "risk" problems. Our simulation also shows that our proposed spectral-corrected estimation is stable for different values of the sample size $n$, the dimension $p$, and their ratio $p/n$. 
In addition, we relax the normality assumption so that our proposed spectral-corrected estimators can be obtained when the returns of the assets being studied follow any distribution with finite fourth moments. 
Keywords:  Markowitz meanvariance optimization; Optimal Return; Optimal Portfolio Allocation; Large Random Matrix; Bootstrap Method 
JEL:  G11 C3 
Date:  2013–01–10 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:43862&r=ecm 
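The over-estimation phenomenon that motivates the paper above is easy to reproduce in a toy simulation. The sketch below (all parameter values are illustrative and not taken from the paper) compares the plug-in optimal return $\sqrt{\hat{\mu}' S^{-1} \hat{\mu}}$ with its theoretical counterpart $\sqrt{\mu' \Sigma^{-1} \mu}$ when $p/n = 0.5$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 50                 # ratio y = p/n = 0.5, so gamma = 1/(1-y) = 2
mu = np.full(p, 0.1)           # true mean returns; true covariance = identity

R_true = np.sqrt(mu @ mu)      # theoretical optimal return sqrt(mu' Sigma^{-1} mu)

# Plug-in estimate: replace mu and Sigma by their sample counterparts.
reps = 200
R_plugin = np.empty(reps)
for r in range(reps):
    X = rng.standard_normal((n, p)) + mu
    mu_hat = X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))
    R_plugin[r] = np.sqrt(mu_hat @ S_inv @ mu_hat)

# Substantially above 1: the plug-in return overshoots its target.
overestimation = R_plugin.mean() / R_true
```

Both sources of bias named in the abstract are at work here: estimation noise in `mu_hat` inflates the quadratic form, and the inverse sample covariance matrix is biased upward when $p/n$ is large.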
By:  Herrera Gómez, Marcos; Ruiz Marín, Manuel; Mur Lacambra, Jesús 
Abstract:  Testing the assumption of independence between variables is a crucial aspect of spatial data analysis. However, the literature is limited and somewhat confusing. To our knowledge, we can mention only the bivariate generalization of Moran’s statistic. This test suffers from several restrictions: it is applicable only to pairs of variables; it requires a weighting matrix and the assumption of linearity; and the null hypothesis of the test is not entirely clear. Given these limitations, we develop a new nonparametric test, based on symbolic dynamics, with better properties. We show that the test can be extended to a multivariate framework, is robust to departures from linearity, does not need a weighting matrix, and can be adapted to different specifications of the null. The test is consistent, computationally simple, and has good size and power, as shown by a Monte Carlo experiment. An application to the productivity of the manufacturing sector in the Ebro Valley illustrates our approach. 
Keywords:  Nonparametric methods; Spatial bootstrapping; Spatial independence; Symbolic dynamics 
JEL:  C12 R12 C15 C21 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:43861&r=ecm 
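For context, the weighting-matrix-based benchmark that the abstract criticizes can be sketched as follows. This is the standard univariate Moran's I on a toy lattice, not the symbolic-dynamics test proposed in the paper; the grid size and patterns are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy lattice: 5x5 grid with rook contiguity, row-standardized weights W.
side = 5
N = side * side
W = np.zeros((N, N))
for i in range(side):
    for j in range(side):
        k = i * side + j
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < side and 0 <= nj < side:
                W[k, ni * side + nj] = 1.0
W = W / W.sum(axis=1, keepdims=True)

def morans_i(x, W):
    """Moran's I for row-standardized W: z'Wz / z'z with z = x - mean(x)."""
    z = x - x.mean()
    return (z @ W @ z) / (z @ z)

I_indep = morans_i(rng.standard_normal(N), W)   # near 0 for independent data

# Checkerboard pattern: every rook neighbor has the opposite sign, which
# gives perfect negative spatial autocorrelation, I = -1 exactly.
x_check = np.array([(-1.0) ** (i + j) for i in range(side) for j in range(side)])
I_check = morans_i(x_check, W)
```

Note that the statistic is only defined once a weighting matrix `W` has been chosen, which is exactly the dependence the proposed nonparametric test avoids.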
By:  J. F. Muzy; R. Baile; E. Bacry 
Abstract:  In this paper we propose a new model for volatility fluctuations in financial time series. This model relies on a non-stationary Gaussian process that exhibits aging behavior. It turns out that its properties, over any finite time interval, are very close to those of continuous cascade models. These latter models are well known to reproduce faithfully the main stylized facts of financial time series. However, they involve a large scale parameter (the so-called "integral scale" where the cascade is initiated) that is hard to interpret in finance. Moreover, the empirical value of the integral scale is in general strongly correlated with the overall length of the sample. This feature is precisely predicted by our model, which, as illustrated on various examples from daily stock index data, quantitatively reproduces the empirical observations. 
Date:  2013–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1301.4160&r=ecm 
By:  Christophe Ley; Thomas Verdebout 
Abstract:  One-sample and multi-sample tests on the concentration parameter of Fisher-von Mises-Langevin (FvML) distributions have been well studied in the literature. However, only very little is known about their behavior under local alternatives, which is due to complications inherent to the curved nature of the parameter space. The aim of the present paper therefore consists in filling that gap by having recourse to the Le Cam methodology, which has been adapted from the linear to the spherical setup in Ley et al. (2013a). We obtain explicit expressions of the powers for the most efficient one- and multi-sample tests; these tests are those considered in Watamori and Jupp (2005). As a nice by-product, we are also able to write down the powers (against local FvML alternatives) of the celebrated Rayleigh (1919) test of uniformity. A Monte Carlo simulation study confirms our theoretical findings and shows the finite-sample behavior of the above-mentioned procedures. 
Keywords:  concentration parameter; directional statistics; Fisher-von Mises-Langevin distributions; Le Cam's third lemma; uniform local asymptotic normality 
Date:  2013–01 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2013/138256&r=ecm 
By:  Laura Mørch Andersen (Department of Food and Resource Economics, University of Copenhagen) 
Abstract:  It is standard practice by researchers, and the default option in many statistical programs, to base test statistics for mixed models on simulations using asymmetric draws (e.g. Halton draws). This paper shows that when the estimated likelihood functions depend on standard deviations of mixed parameters, this practice is very likely to cause misleading test results for the numbers of draws usually used today. The paper shows that increasing the number of draws is a very inefficient solution strategy, requiring very large numbers of draws to guard against misleading test statistics. The paper shows that using one-dimensionally antithetic draws does not solve the problem, but that the problem can be solved completely by using fully antithetic draws. The paper also shows that even when fully antithetic draws are used, models testing away mixing dimensions must replicate the relevant dimensions of the quasi-random draws in the simulation of the restricted likelihood. Again, this is not standard in research or statistical programs. The paper therefore recommends using fully antithetic draws, replicating the relevant dimensions of the quasi-random draws in the simulation of the restricted likelihood, and argues that this should become the default option in statistical programs. 
Keywords:  Quasi-Monte Carlo integration; Antithetic draws; Likelihood ratio tests; Simulated likelihood; Panel mixed multinomial logit; Halton draws 
JEL:  C15 C25 
Date:  2013–01 
URL:  http://d.repec.org/n?u=RePEc:foi:wpaper:2013_1&r=ecm 
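The mechanics of fully antithetic draws, as recommended above, can be sketched in a few lines: every base draw is paired with its negation, so odd moments of the simulated distribution cancel exactly rather than only on average. The draw counts below are illustrative, and pseudo-random normal draws stand in for the Halton sequences discussed in the paper:

```python
import numpy as np
from math import erf

rng = np.random.default_rng(3)
R, K = 100, 3                          # R base draws, K mixing dimensions

base = rng.standard_normal((R, K))
draws = np.vstack([base, -base])       # fully antithetic: each draw paired with its negative

# Odd moments cancel exactly across pairs, reducing the simulation noise
# that would otherwise distort likelihood-based test statistics.
col_means = draws.mean(axis=0)         # zero in every dimension

# Toy simulated probability E[Phi(b)] for b ~ N(0,1): since
# Phi(z) + Phi(-z) = 1, antithetic pairs recover the true value 0.5 exactly.
Phi = np.vectorize(lambda x: 0.5 * (1.0 + erf(x / np.sqrt(2.0))))
sim_prob = Phi(draws[:, 0]).mean()
```

By contrast, using the `base` draws alone would leave `col_means` at the mercy of simulation noise, which is the source of the misleading tests the paper documents.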
By:  Gürtler, Marc; Rauh, Ronald 
Abstract:  In this paper we analyze a multivariate non-stationary regression model empirically. Given the knowledge about unconditional heteroscedasticity of financial returns, based on univariate studies and a congruent paradigm in Gürtler and Rauh (2009), we first test for a time-varying covariance structure. Based on these results, a central component of our non-stationary model is a kernel regression for pairwise covariances and the covariance matrix. Residual terms are fitted with an asymmetric Pearson type VII distribution. In an extensive study we estimate the linear dependence of a broad portfolio of equities and fixed income securities (including credit and currency risks) and fit the whole approach to provide distributional forecasts. Our evaluations verify a reasonable approximation and a satisfactory forecasting quality, with an outperformance against a traditional risk model. 
Keywords:  heteroscedasticity, non-stationarity, nonparametric regression, volatility, covariance matrix, innovation modeling, asymmetric heavy tails, multivariate distributional forecast, empirical studies 
JEL:  C14 C5 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:zbw:tbsifw:if43v1&r=ecm 
By:  Nikolaos Zirogiannis (Department of Resource Economics, University of Massachusetts Amherst); Yorghos Tripodis (Department of Biostatistics, Boston University School of Public Health) 
Abstract:  We develop a generalized dynamic factor model for panel data with the goal of estimating an unobserved index. While similar models have been developed in the literature on dynamic factor analysis, our contribution is threefold. First, contrary to simple dynamic factor analysis, where multiple attributes of the same subject are measured at each time period, our model also accounts for multiple subjects and is therefore suited to a panel data framework. Second, our model estimates a unique unobserved index for every subject in every time period, as opposed to previous work where a temporal index common to all subjects was used. Third, we develop a novel iterative estimation process, which we call the Two-Cycle Conditional Expectation-Maximization (2CCEM) algorithm, that is flexible enough to handle a variety of different types of datasets. The model is applied to a panel measuring attributes related to the operation of water and sanitation utilities. 
Keywords:  Dynamic Factor Models, EM algorithm, Panel Data, StateSpace models, IBNET 
JEL:  C32 C33 C51 Q25 
Date:  2013–01 
URL:  http://d.repec.org/n?u=RePEc:dre:wpaper:20131&r=ecm 
By:  Ricardo Mora; Iliana Reggio 
Abstract:  The core assumption to identify the treatment effect in difference-in-differences estimators is the so-called Parallel Paths assumption, namely that the average change in outcome for the treated in the absence of treatment equals the average change in outcome for the non-treated. We define a family of alternative Parallel assumptions and show, for a number of frequently used empirical specifications, which parameters of the model identify the treatment effect under the alternative Parallel assumptions. We further propose a fully flexible model which has two desirable features not present in the usual econometric specifications implemented in applied research. First, it allows for flexible dynamics and for testing restrictions on these dynamics. Second, it does not impose equivalence between alternative Parallel assumptions. We illustrate the usefulness of our approach by revisiting the results of several recent papers in which the difference-in-differences technique has been applied. 
Keywords:  Difference-in-differences, Parallel paths, Treatment effect 
Date:  2012–12 
URL:  http://d.repec.org/n?u=RePEc:cte:werepe:we1233&r=ecm 
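The Parallel Paths logic in the abstract above is easiest to see in the canonical two-group, two-period case: when the untreated trend is common to both groups, the difference of before/after differences isolates the treatment effect. A minimal simulation (all parameter values are illustrative, and this is the textbook estimator, not the flexible specification the paper proposes):

```python
import numpy as np

rng = np.random.default_rng(4)

# Two groups x two periods. Under Parallel Paths the trend is common to
# both groups, so it cancels in the double difference; tau is the effect.
n = 5000
trend, group_gap, tau = 1.0, 2.0, 0.5

treated = rng.integers(0, 2, n)        # group indicator
post = rng.integers(0, 2, n)           # period indicator
y = (group_gap * treated + trend * post
     + tau * treated * post + 0.1 * rng.standard_normal(n))

def cell_mean(g, t):
    return y[(treated == g) & (post == t)].mean()

# DiD estimator: difference of the before/after differences across groups.
did = (cell_mean(1, 1) - cell_mean(1, 0)) - (cell_mean(0, 1) - cell_mean(0, 0))
```

Both the permanent group gap and the common time trend drop out of `did`, which is why the identification burden falls entirely on the Parallel Paths assumption the paper generalizes.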
By:  Nalan Basturk (Erasmus University Rotterdam); Cem Cakmakli (University of Amsterdam); Pinar Ceyhan (Erasmus University Rotterdam); Herman K. van Dijk (Erasmus University Rotterdam, and VU University Amsterdam) 
Abstract:  Changing time series properties of US inflation and economic activity are analyzed within a class of extended Phillips Curve (PC) models. First, the misspecification effects of mechanical removal of low frequency movements of these series on posterior inference of a basic PC model are analyzed using a Bayesian simulation-based approach. Next, structural time series models that describe changing patterns in low and high frequencies, and backward as well as forward inflation expectation mechanisms, are incorporated in the class of extended PC models. Empirical results indicate that the proposed models compare favorably with existing Bayesian Vector Autoregressive and Stochastic Volatility models in terms of fit and predictive performance. Weak identification and dynamic persistence appear less important when time-varying dynamics of high and low frequencies are carefully modeled. Modeling inflation expectations using survey data and adding level shifts and stochastic volatility substantially improves in-sample fit and out-of-sample predictions. No evidence is found of a long run stable cointegration relation between US inflation and marginal costs. Tails of the complete predictive distributions indicate an increase in the probability of disinflation in recent years. 
Keywords:  New Keynesian Phillips curve; unobserved components; level shifts; inflation expectations 
JEL:  C11 C32 E31 E37 
Date:  2013–01–10 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20130011&r=ecm 
By:  Mikko S. Pakkanen (Aarhus University and CREATES) 
Abstract:  We study the asymptotic behavior of lattice power variations of two-parameter ambit fields that are driven by white noise. Our first result is a law of large numbers for such power variations. Under a constraint on the memory of the ambit field, normalized power variations are shown to converge to certain integral functionals of the volatility field associated with the ambit field, when the lattice spacing tends to zero. This law of large numbers holds also for thinned power variations that are computed by only including increments that are separated by gaps with a particular asymptotic behavior. Our second result is a related stable central limit theorem for thinned power variations. Additionally, we provide concrete examples of ambit fields that satisfy the assumptions of our limit theorems. 
Keywords:  ambit field, power variation, law of large numbers, central limit theorem, chaos decomposition 
JEL:  C10 C14 
Date:  2013–10–01 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201301&r=ecm 
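The law-of-large-numbers statement above has a familiar one-parameter special case: for a scaled Brownian motion (rather than a general ambit field), the power variation with exponent p = 2 is the sum of squared increments, which converges to the integrated variance as the lattice spacing shrinks. A sketch with illustrative values:

```python
import numpy as np

rng = np.random.default_rng(5)

# Realized power variation of sigma * W on [0, 1] with p = 2: the sum of
# squared increments converges to the integrated variance, here sigma**2,
# as the number of lattice points n grows.
sigma, n = 0.8, 200_000
increments = sigma * np.sqrt(1.0 / n) * rng.standard_normal(n)

realized_qv = np.sum(increments ** 2)   # close to sigma**2 = 0.64
```

The paper's results extend this convergence to two-parameter fields with stochastic volatility, where the limit becomes an integral functional of the volatility field rather than a constant.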
By:  René Garcia; Daniel MantillaGarcia; Lionel Martellini 
Abstract:  In this paper, we formally show that the cross-sectional variance of stock returns is a consistent and asymptotically efficient estimator of aggregate idiosyncratic volatility. This measure has two key advantages: it is model-free and observable at any frequency. Previous approaches have used monthly model-based measures constructed from time series of daily returns. The newly proposed cross-sectional volatility measure is a strong predictor of future returns on the aggregate stock market at the daily frequency. Using the cross-section of size and book-to-market portfolios, we show that the portfolios’ exposures to aggregate idiosyncratic volatility risk predict the cross-section of expected returns. 
Keywords:  Aggregate idiosyncratic volatility, cross-sectional dispersion, prediction of market returns 
Date:  2013–01–01 
URL:  http://d.repec.org/n?u=RePEc:cir:cirwor:2013s01&r=ecm 
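The estimator's appeal is visible in a toy factor model: because the common market component is the same for every stock on a given day (with unit betas, an assumption made here for simplicity), it is differenced out cross-sectionally, leaving the dispersion of the idiosyncratic shocks. A sketch with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(6)

# N stocks, T days: returns = common market factor + idiosyncratic noise
# (unit betas assumed for simplicity).
N, T = 500, 250
market = 0.01 * rng.standard_normal(T)            # common component
idio_sd = 0.02
returns = market[:, None] + idio_sd * rng.standard_normal((T, N))

# Cross-sectional variance each day: model-free and available at any
# frequency, since it needs only one cross-section of returns.
cs_var = returns.var(axis=1, ddof=1)              # one number per day

# Its average estimates the aggregate idiosyncratic variance, 0.02**2,
# because the common factor drops out of the cross-sectional dispersion.
avg_idio_var = cs_var.mean()
```

A time-series estimator would instead need a window of daily returns and a factor model to strip out the market component; the cross-sectional measure requires neither.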
By:  Audrone Virbickaite; M. Concepción Ausín; Pedro Galeano 
Abstract:  A Bayesian nonparametric approach for efficient risk management is proposed. A dynamic model is considered where optimal portfolio weights and hedging ratios are adjusted at each period. The covariance matrix of the returns is described using an asymmetric MGARCH model. Restrictive parametric assumptions for the errors are avoided by relying on Bayesian nonparametric methods, which allow for a better evaluation of the uncertainty in financial decisions. Illustrative risk management problems using real data are solved. Significant differences arise in the posterior distributions of the optimal weights and ratios under the different assumptions for the errors in the time series model. 
Date:  2013–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1301.5129&r=ecm 
By:  David.E. Allen (Edith Cowan University, Australia); Mohammad.A. Ashraf (Indian Institute of Technology, Kharagpur, India); Michael. McAleer (Erasmus University Rotterdam, Complutense University of Madrid, Spain, and Kyoto University, Japan); Robert.J. Powell (Edith Cowan University, Australia); Abhay K. Singh (Edith Cowan University, Australia) 
Abstract:  This paper features the application of a novel and recently developed method of statistical and mathematical analysis to the assessment of financial risk: namely, Regular Vine copulas. Dependence modelling using copulas is a popular tool in financial applications, but is usually applied to pairs of securities. Vine copulas offer greater flexibility and permit the modelling of complex dependency patterns using the rich variety of bivariate copulas, which can be arranged and analysed in a tree structure to facilitate the analysis of multiple dependencies. We apply Regular Vine copula analysis to a sample of stocks comprising the Dow Jones Index to assess their interdependencies and how their correlations change in different economic circumstances, using three sample periods: pre-GFC (Jan 2005 – July 2007), GFC (July 2007 – Sep 2009), and post-GFC (Sep 2009 – Dec 2011). The empirical results suggest that the dependencies change in a complex manner, and there is evidence of greater reliance on the Student t copula in the copula choices within the tree structures for the GFC period, which is consistent with the existence of heavier tails in the distributions of returns for this period. One of the attractions of this approach to risk modelling is the flexibility in the choice of distributions used to model co-dependencies. 
Keywords:  Regular Vine copulas; Tree structures; Co-dependence modelling 
JEL:  G11 C02 
Date:  2013–01–22 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20130022&r=ecm 