
By:  Tue Gørgens (Australian National University); Allan Würtz (CREATES and School of Economics and Management, University of Aarhus) 
Abstract:  This paper develops a specification test for functional form for models identified by moment restrictions, including IV and GMM settings. The general framework is one where the moment restrictions are specified as functions of data, a finite-dimensional parameter vector, and a nonparametric real function (an infinite-dimensional parameter vector). The null hypothesis is that the real function is parametric. The test is relatively easy to implement and its asymptotic distribution is known. The test performs well in simulation experiments. 
Keywords:  Generalized method of moments, specification test, nonparametric alternative, LM statistic, generalized arcsine distribution 
JEL:  C12 C14 C52 
Date:  2009–01–11 
URL:  http://d.repec.org/n?u=RePEc:aah:create:200954&r=ecm 
By:  Michael Jansson (UC Berkeley and CREATES); Morten Ørregaard Nielsen (Queen's University and CREATES) 
Abstract:  In an important generalization of zero-frequency autoregressive unit root tests, Hylleberg, Engle, Granger, and Yoo (1990) developed regression-based tests for unit roots at the seasonal frequencies in quarterly time series. We develop likelihood ratio tests for seasonal unit roots and show that these tests are "nearly efficient" in the sense of Elliott, Rothenberg, and Stock (1996), i.e., their local asymptotic power functions are indistinguishable from the Gaussian power envelope. Currently available nearly efficient testing procedures for seasonal unit roots are regression-based and require the choice of a GLS detrending parameter, which our likelihood ratio tests do not. 
Keywords:  Likelihood Ratio Test, Seasonal Unit Root Hypothesis 
JEL:  C12 C22 
Date:  2009–11–24 
URL:  http://d.repec.org/n?u=RePEc:aah:create:200955&r=ecm 
By:  Manfred Gilli; Enrico Schumann 
Abstract:  Linear regression is widely used in finance. While the standard method for obtaining parameter estimates, Least Squares, has very appealing theoretical and numerical properties, the resulting estimates are often unstable in the presence of extreme observations, which are rather common in financial time series. One approach to dealing with such extreme observations is the application of robust or resistant estimators, like Least Quantile of Squares estimators. Unfortunately, for many such alternative approaches, estimation is much more difficult than in the Least Squares case, as the objective function is not convex and often has many local optima. We apply different heuristic methods like Differential Evolution, Particle Swarm and Threshold Accepting to obtain parameter estimates. Particular emphasis is put on the convergence properties of these techniques for fixed computational resources, and on their sensitivity to different parameter settings. 
Keywords:  Optimisation heuristics, Robust Regression, Least Median of Squares 
Date:  2009–07–08 
URL:  http://d.repec.org/n?u=RePEc:com:wpaper:011&r=ecm 
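The Least Median of Squares objective is non-convex with many local optima, which is why heuristics are needed. As a rough illustration only (not the paper's implementation; the synthetic data, step size, and threshold schedule below are invented), a minimal Threshold Accepting run on contaminated data might look like:

```python
import numpy as np

def lms_objective(beta, X, y):
    """Least Median of Squares: median of the squared residuals."""
    r = y - X @ beta
    return np.median(r ** 2)

def threshold_accepting(X, y, n_iter=5000, seed=0):
    """Minimal Threshold Accepting sketch: accept a random neighbour
    unless it is worse than the current solution by more than the
    (shrinking) threshold tau."""
    rng = np.random.default_rng(seed)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # start from OLS
    obj = lms_objective(beta, X, y)
    best, best_obj = beta.copy(), obj
    thresholds = np.linspace(0.1 * obj, 0.0, n_iter)  # ad hoc schedule
    for tau in thresholds:
        cand = beta + rng.normal(scale=0.05, size=beta.shape)
        cand_obj = lms_objective(cand, X, y)
        if cand_obj - obj <= tau:  # accept small deteriorations early on
            beta, obj = cand, cand_obj
            if obj < best_obj:
                best, best_obj = beta.copy(), obj
    return best, best_obj

# Synthetic data: y = 1 + 2x + noise, with 10% gross outliers
rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 1.0 + 2.0 * x + 0.1 * rng.normal(size=200)
y[:20] += 10.0  # contaminate one tenth of the sample
X = np.column_stack([np.ones_like(x), x])
beta_lms, obj_lms = threshold_accepting(X, y)
```

Unlike OLS, whose intercept is pulled up by the outliers, the LMS solution essentially ignores them because the median is unaffected by a minority of huge squared residuals.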
By:  Francesco Battaglia; Mattheos Protopapas 
Abstract:  Many time series exhibit both nonlinearity and nonstationarity. Though both features have often been taken into account separately, few attempts have been made to model them simultaneously. We consider threshold models, and present a general model allowing for different regimes both in time and in levels, where regime transitions may happen according to self-exciting, or smoothly varying, or piecewise linear threshold modeling. Since fitting such a model involves the choice of a large number of structural parameters, we propose a procedure based on genetic algorithms, evaluating models by means of a generalized identification criterion. The performance of the proposed procedure is illustrated with a simulation study and applications to some real data. 
Keywords:  Nonlinear time series; Nonstationary time series; Threshold model 
Date:  2009–02–20 
URL:  http://d.repec.org/n?u=RePEc:com:wpaper:009&r=ecm 
By:  Zhongfang He; John M. Maheu 
Abstract:  A sequential Monte Carlo method for estimating GARCH models subject to an unknown number of structural breaks is proposed. Particle filtering techniques allow for fast and efficient updates of posterior quantities and forecasts in real time. The method conveniently deals with the path dependence problem that arises in this type of model. The method is shown to work well using simulated data. Applied to daily NASDAQ returns, the evidence favors a partial structural break specification, in which only the intercept of the conditional variance equation has breaks, over the full structural break specification, in which all parameters are subject to change. The empirical application underscores the importance of model assumptions when investigating breaks. A model with normal return innovations results in strong evidence of breaks, while more flexible return distributions such as t-innovations or a GARCH-jump mixture model still favor breaks but indicate much more uncertainty regarding their timing and impact. 
Keywords:  Econometric and statistical methods; Financial markets 
JEL:  C11 C15 C22 C53 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:bca:bocawp:0931&r=ecm 
By:  Florian Heinen (Leibniz University of Hannover); Philipp Sibbertsen (Leibniz University of Hannover); Robinson Kruse (Aarhus University and CREATES) 
Abstract:  We consider the problem of forecasting time series with long memory when the memory parameter is subject to a structural break. By means of a large-scale Monte Carlo study we show that ignoring such a change in persistence leads to substantially reduced forecasting precision. The strength of this effect depends on whether the memory parameter is increasing or decreasing over time. A comparison of six forecasting strategies allows us to conclude that pre-testing for a change in persistence is highly advisable in our setting. In addition, we provide an empirical example which underlines the importance of our findings. 
Keywords:  Long memory time series, Break in persistence, Structural change, Simulation, Forecasting competition 
JEL:  C15 C22 C53 
Date:  2009–11–17 
URL:  http://d.repec.org/n?u=RePEc:aah:create:200953&r=ecm 
By:  Torben G. Andersen (Northwestern Univ., NBER, CREATES); Dobrislav Dobrev (Federal Reserve Board of Governors); Ernst Schaumburg (Federal Reserve Bank of New York) 
Abstract:  We propose two new jump-robust estimators of integrated variance based on high-frequency return observations. These MinRV and MedRV estimators provide an attractive alternative to the prevailing bipower and multipower variation measures. Specifically, the MedRV estimator has better theoretical efficiency properties than the tripower variation measure and displays better finite-sample robustness to both jumps and the occurrence of “zero” returns in the sample. Unlike the bipower variation measure, the new estimators allow for the development of an asymptotic limit theory in the presence of jumps. Finally, they retain the local nature associated with the low-order multipower variation measures. This proves essential for alleviating the finite-sample biases, arising from the pronounced intraday volatility pattern, that afflict alternative jump-robust estimators based on longer blocks of returns. An empirical investigation of the Dow Jones 30 stocks and an extensive simulation study corroborate the robustness and efficiency properties of the new estimators. 
Keywords:  High-frequency data, Integrated variance, Finite activity jumps, Realized volatility, Jump robustness, Nearest neighbor truncation 
JEL:  C14 C15 C22 C80 G10 
Date:  2009–10–31 
URL:  http://d.repec.org/n?u=RePEc:aah:create:200952&r=ecm 
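The nearest-neighbor-truncation idea is simple enough to sketch: MinRV sums squared minima of adjacent absolute returns, MedRV sums squared medians of three adjacent absolute returns, each scaled to be unbiased for integrated variance under Gaussian diffusive returns. The scaling constants below are the published ones; the one-day simulation is invented for illustration and is not from the paper.

```python
import numpy as np

def min_rv(r):
    """MinRV: jump-robust integrated variance estimator using the
    minimum of each pair of adjacent absolute returns."""
    r = np.abs(np.asarray(r, dtype=float))
    n = r.size
    pairs = np.minimum(r[:-1], r[1:]) ** 2
    return np.pi / (np.pi - 2.0) * n / (n - 1.0) * pairs.sum()

def med_rv(r):
    """MedRV: jump-robust estimator using the median of each triple
    of adjacent absolute returns."""
    r = np.abs(np.asarray(r, dtype=float))
    n = r.size
    triples = np.stack([r[:-2], r[1:-1], r[2:]])
    med = np.median(triples, axis=0) ** 2
    return np.pi / (6.0 - 4.0 * np.sqrt(3.0) + np.pi) * n / (n - 2.0) * med.sum()

# Simulated trading day: constant volatility plus one large jump
rng = np.random.default_rng(0)
n, sigma = 23400, 0.01            # one-second returns, daily IV = sigma**2
r = sigma / np.sqrt(n) * rng.normal(size=n)
rv_clean = np.sum(r ** 2)         # realized variance without the jump
r[n // 2] += 0.05                 # inject a single jump
rv_jumpy = np.sum(r ** 2)         # realized variance is inflated by the jump
```

A jump return enters only two MinRV pairs and three MedRV triples, and in each it is discarded by the min or median, so both estimators stay close to the diffusive variance while plain realized variance absorbs the full squared jump.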
By:  Karsten R. Gerdrup (Norges Bank (Central Bank of Norway)); Anne Sofie Jore (Norges Bank (Central Bank of Norway)); Christie Smith (Reserve Bank of New Zealand); Leif Anders Thorsrud (Norges Bank (Central Bank of Norway)) 
Abstract:  Forecast combination has become popular in central banks as a means to improve forecasts and to alleviate the risk of selecting poor models. However, if a model suite is populated with many similar models, then the weight attached to other independent models may be lower than warranted by their performance. One way to mitigate this problem is to group similar models into distinct 'ensembles'. Using the original suite of models in Norges Bank's system for averaging models (SAM), we evaluate whether forecast performance can be improved by combining ensemble densities, rather than combining individual model densities directly. We evaluate performance both in terms of point forecasts and density forecasts, and test whether the densities are well-calibrated. We find encouraging results for combining ensembles. 
Keywords:  Forecasting; density combination; model combination; clustering; ensemble density; probability integral transforms (PITs) 
JEL:  C52 C53 E52 
Date:  2009–11–11 
URL:  http://d.repec.org/n?u=RePEc:bno:worpap:2009_19&r=ecm 
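The weight-dilution problem described in the abstract can be seen in a toy linear opinion pool: three near-duplicate models crowd out a single independent model unless they are first grouped into an ensemble. The density values below are invented purely for illustration and do not come from the paper.

```python
import numpy as np

def linear_pool(densities, weights=None):
    """Linear opinion pool: weighted average of predictive density
    values evaluated at the same realized outcome."""
    densities = np.asarray(densities, dtype=float)
    if weights is None:  # default to equal weights
        weights = np.full(densities.shape[0], 1.0 / densities.shape[0])
    return weights @ densities

# Three similar models form ensemble A; model B is independent.
# Predictive density values at the realized outcome (hypothetical):
p_a1, p_a2, p_a3, p_b = 0.30, 0.32, 0.28, 0.10

# Pooling all four directly gives the lone model B weight 1/4 ...
direct = linear_pool([p_a1, p_a2, p_a3, p_b])
# ... while grouping A first restores B's weight to 1/2.
ensemble_a = linear_pool([p_a1, p_a2, p_a3])
grouped = linear_pool([ensemble_a, p_b])
```

Here `direct` is 0.25 while `grouped` is 0.20: the two-stage combination lets the independent model's (poorer) density pull the pool down with its warranted weight instead of being swamped by the three near-duplicates.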
By:  Valeri Voev (Aarhus University and CREATES) 
Abstract:  We analyze the applicability of economic criteria for volatility forecast evaluation based on unconditional measures of portfolio performance. The main theoretical finding is that such unconditional measures generally fail to rank conditional forecasts correctly due to the presence of a bias term driven by the variability of the conditional mean and portfolio weights. Simulations and a small empirical study suggest that the bias can be empirically substantial and lead to distortions in forecast evaluation. An important implication is that forecasting superiority of models using high frequency data is likely to be understated if unconditional criteria are used. 
Keywords:  Forecast evaluation, Volatility forecasting, Portfolio optimization, Mean-variance analysis 
JEL:  C32 C53 G11 
Date:  2009–11–24 
URL:  http://d.repec.org/n?u=RePEc:aah:create:200956&r=ecm 
By:  Marmer, Vadim; Shneyerov, Artyom 
Abstract:  This paper contains supplemental materials for Marmer and Shneyerov (2009) "Quantile-Based Nonparametric Inference for First-Price Auctions." 
Keywords:  First-price auctions, independent private values, nonparametric estimation, kernel estimation, quantiles, optimal reserve price, bootstrap 
Date:  2009–11–24 
URL:  http://d.repec.org/n?u=RePEc:ubc:pmicro:vadim_marmer200961&r=ecm 
By:  Alberto Padilla 
Abstract:  Systematic sampling is a commonly used technique due to its simplicity and ease of implementation. The drawback of this simplicity is that it is not possible to estimate the design variance without bias. There are several ways to circumvent this problem. One method is to suppose that the variable of interest has a random order in the population, so that the sample variance of simple random sampling without replacement can be used. By means of a mixed random-systematic sample, an unbiased estimator of the population variance for a simple random sample is proposed without model assumptions. Some examples are given. 
Keywords:  Variance estimator, systematic sampling, simple random sampling, random order. 
JEL:  C80 
Date:  2009–11 
URL:  http://d.repec.org/n?u=RePEc:bdm:wpaper:200913&r=ecm 
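For context, a 1-in-k systematic draw and the common workaround mentioned in the abstract (treating the systematic sample as a simple random sample without replacement for variance estimation) can be sketched as follows. This is an illustrative sketch, not the paper's mixed random-systematic design.

```python
import numpy as np

def systematic_sample(y, n, rng):
    """1-in-k systematic sample: pick a random start in [0, k),
    then take every k-th unit."""
    y = np.asarray(y)
    k = len(y) // n
    start = rng.integers(k)
    return y[start::k][:n]

def srs_variance_of_mean(sample, N):
    """Workaround variance estimator: treat the sample as SRS
    without replacement (valid only if the population is in
    random order with respect to the study variable)."""
    n = sample.size
    s2 = sample.var(ddof=1)                 # sample variance
    return (1.0 - n / N) * s2 / n           # finite population correction

rng = np.random.default_rng(0)
population = rng.normal(size=1000)
s = systematic_sample(population, 100, rng)
v_hat = srs_variance_of_mean(s, 1000)
```

A single systematic sample is effectively one draw of a cluster, which is why no unbiased design-variance estimator exists without extra assumptions or, as in the paper, a mixed design.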
By:  Sébastien Laurent; Jeroen Rombouts; Francesco Violante 
Abstract:  A large number of parameterizations have been proposed to model conditional variance dynamics in a multivariate framework. However, little is known about the ranking of multivariate volatility models in terms of their forecasting ability. The ranking of multivariate volatility models is inherently problematic because it requires the use of a proxy for the unobservable volatility matrix, and this substitution may severely affect the ranking. We address this issue by investigating the properties of the ranking with respect to alternative statistical loss functions used to evaluate model performance. We provide conditions on the functional form of the loss function that ensure the proxy-based ranking to be consistent for the true one, i.e., the ranking that would be obtained if the true variance matrix were observable. We identify a large set of loss functions that yield a consistent ranking. In a simulation study, we sample data from a continuous-time multivariate diffusion process and compare the ordering delivered by both consistent and inconsistent loss functions. We further discuss the sensitivity of the ranking to the quality of the proxy and the degree of similarity between models. An application to three foreign exchange rates, where we compare the forecasting performance of 16 multivariate GARCH specifications, is provided. 
Keywords:  Volatility, multivariate GARCH, matrix norm, loss function, model confidence set 
Date:  2009–11–01 
URL:  http://d.repec.org/n?u=RePEc:cir:cirwor:2009s45&r=ecm 
By:  Sébastien TERRA (INSEE Auvergne) 
Abstract:  In this paper, we provide a new framework to assess the validity of Zipf's Law for cities. Zipf's Law states that, within a country, the distribution of city sizes follows a Pareto distribution with a Pareto index equal to 1. We adopt a two-step approach in which we formally test whether the distribution of city sizes is a Pareto distribution and then estimate the Pareto index. Through Monte Carlo experiments, we investigate the finite-sample performance of this testing procedure and compare the small-sample properties of a new estimator (the minimum variance unbiased estimator) to those of commonly used estimators. The minimum variance unbiased estimator turns out to be unbiased and more efficient. We use this two-step approach to examine empirically the validity of Zipf's Law on a sample of 115 countries. Zipf's Law is not rejected in most countries (62 out of 115, or 53.9%). 
Keywords:  Minimum variance unbiased estimator, Monte Carlo study, Pareto distribution, Zipf's Law, developing countries 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:cdi:wpaper:1102&r=ecm 
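The paper's exact estimator is not reproduced here, but in the textbook case of a Pareto sample with known lower bound x_min, the ML (Hill) estimator and its standard unbiased correction, which is the minimum variance unbiased estimator in that simple setting, can be sketched as follows; the simulated city sizes are illustrative only.

```python
import numpy as np

def pareto_index_ml(x, x_min):
    """ML (Hill) estimator of the Pareto index: n / sum(log(x / x_min)).
    Biased upward in finite samples."""
    x = np.asarray(x, dtype=float)
    return x.size / np.log(x / x_min).sum()

def pareto_index_unbiased(x, x_min):
    """Bias-corrected version (n - 1) / sum(log(x / x_min)), unbiased
    when x_min is known."""
    x = np.asarray(x, dtype=float)
    return (x.size - 1) / np.log(x / x_min).sum()

# Zipf's Law corresponds to alpha = 1: simulate Pareto(1) city sizes
# by inverting the CDF F(x) = 1 - (x_min / x) ** alpha
rng = np.random.default_rng(0)
x_min, alpha = 1.0, 1.0
sizes = x_min * (1.0 - rng.uniform(size=5000)) ** (-1.0 / alpha)

alpha_ml = pareto_index_ml(sizes, x_min)
alpha_unbiased = pareto_index_unbiased(sizes, x_min)
```

The correction shrinks the ML estimate by the factor (n - 1)/n, so the two differ noticeably only in small samples, which is exactly the regime the paper's Monte Carlo study targets.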
By:  W. Robert Reed (University of Canterbury); Rachel S. Webb 
Abstract:  Non-spherical errors, namely heteroscedasticity, serial correlation and cross-sectional correlation, are commonly present within panel data sets. These can cause significant problems for econometric analyses. The FGLS(Parks) estimator has been demonstrated to produce considerable efficiency gains in these settings. However, it suffers from underestimation of coefficient standard errors, oftentimes severe. Potentially, jackknifing the FGLS(Parks) estimator could allow one to maintain the efficiency advantages of FGLS(Parks) while producing more reliable estimates of coefficient standard errors. Accordingly, this study investigates the performance of the jackknife estimator of FGLS(Parks) using Monte Carlo experimentation. We find that jackknifing can, in narrowly defined situations, substantially improve the estimation of coefficient standard errors. However, its overall performance is not sufficient to make it a viable alternative to other panel data estimators. 
Keywords:  Panel data estimation; Parks model; cross-sectional correlation; jackknife; Monte Carlo 
JEL:  C23 C15 
Date:  2009–11–15 
URL:  http://d.repec.org/n?u=RePEc:cbt:econwp:09/18&r=ecm 
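The jackknife idea the authors apply to FGLS(Parks) can be illustrated with plain OLS: refit the model with each observation deleted in turn and use the spread of the delete-one estimates as a variance estimate. This is a generic delete-one sketch on synthetic data; the Parks estimator itself is not implemented here.

```python
import numpy as np

def ols(X, y):
    """Least-squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def jackknife_se(X, y):
    """Delete-one jackknife standard errors for OLS coefficients:
    var = (n - 1)/n * sum over i of (beta_(i) - mean)^2."""
    n = X.shape[0]
    full = ols(X, y)
    reps = np.empty((n, X.shape[1]))
    for i in range(n):
        mask = np.arange(n) != i          # drop observation i
        reps[i] = ols(X[mask], y[mask])
    center = reps.mean(axis=0)
    var = (n - 1) / n * ((reps - center) ** 2).sum(axis=0)
    return full, np.sqrt(var)

# Synthetic data: y = 1 + 2x + N(0, 1) noise
rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
beta, se = jackknife_se(X, y)
```

For FGLS(Parks) the same loop would refit the full two-stage estimator on each delete-one sample, which is what makes the procedure computationally heavy but frees the standard errors from the first-stage estimation error that causes the underestimation.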
By:  Sergey S. Stepanov 
Abstract:  The problem of nonstationarity in financial markets is discussed and related to the dynamic nature of price volatility. A new measure is proposed for estimating current asset volatility. A simple and illustrative explanation is suggested for the emergence of significant serial autocorrelations in volatility and squared returns. It is shown that when nonstationarity is eliminated, the autocorrelations reduce substantially and become statistically insignificant. The causes of the non-Gaussian nature of the probability distribution of returns are considered. For both stock and currency market data samples, it is shown that removing the nonstationary component substantially reduces the kurtosis of the distribution, bringing it closer to the Gaussian one. A statistical criterion is proposed for controlling the degree of smoothing of the empirical values of volatility. The hypothesis of a smooth, nonstochastic nature of volatility is put forward, and possible causes of volatility shifts are discussed. 
Date:  2009–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:0911.5048&r=ecm 
By:  Bago d'Uva, T; Lindeboom, M; O'Donnell, O; van Doorslaer, E 
Abstract:  Anchoring vignettes are increasingly used to identify and correct heterogeneity in the reporting of health, work disability, life satisfaction, political efficacy, etc., with the aim of improving the interpersonal comparability of subjective indicators of these constructs. The method relies on two assumptions: vignette equivalence, meaning the vignette description is perceived by all to correspond to the same state; and response consistency, meaning individuals use the same response scales to rate the vignettes and their own situation. We propose tests of these assumptions. For vignette equivalence, we test a necessary condition of no systematic variation with observed characteristics in the perceived difference in states corresponding to any two vignettes. To test response consistency we rely on the assumption that objective indicators fully capture the covariation between the construct of interest and observed individual characteristics, and so offer an alternative way to identify response scales, which can then be compared with those identified from the vignettes. We also introduce a weaker test that is valid under a less stringent assumption. We apply these tests to cognitive functioning and mobility-related health problems using data from the English Longitudinal Study of Ageing. Response consistency is rejected for both health domains according to the first test, but the weaker test does not reject for cognitive functioning. The necessary condition for vignette equivalence is rejected for both health domains. These results cast some doubt on the validity of the vignettes approach, at least as applied to these health domains. 
Keywords:  Reporting heterogeneity; Survey methods; Vignettes; Health; Cognition 
JEL:  C35 C42 I12 
Date:  2009–10 
URL:  http://d.repec.org/n?u=RePEc:yor:hectdg:09/30&r=ecm 