
NEP: New Economics Papers on Econometrics 
By:  Rasmus Tangsgaard Varneskov (Aarhus University and CREATES) 
Abstract:  This paper introduces a new class of generalized flat-top realized kernels for estimation of quadratic variation in the presence of market microstructure noise that is allowed to exhibit a nontrivial dependence structure and to be correlated with the efficient price process. The estimators in this class are shown to be consistent, asymptotically unbiased, and mixed Gaussian with an optimal n^(1/4) convergence rate. In addition, an efficient and asymptotically normal estimator of the long run variance of the market microstructure noise is provided along with novel and consistent estimators of the asymptotic variance of the flat-top realized kernels and of the integrated quarticity, respectively, creating a powerful, unified framework for analyzing quadratic variation. A finite sample correction ensures nonnegativity of the flat-top realized kernels without affecting asymptotic properties. Lastly, in an extensive simulation study, important practical issues such as the choice of kernel function and tuning parameters are addressed, the adequacy of the asymptotic distribution in finite samples is assessed, and it is shown that estimators in this class exhibit a superior bias and root mean squared error tradeoff relative to competing estimators. The impact of using various realized estimators is illustrated in a small empirical application to noisy high frequency stock market data. 
Keywords:  Bias Reduction, Nonparametric Estimation, Market Microstructure Noise, Quadratic Variation. 
JEL:  C14 C15 C50 
Date:  2011–09–01 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201131&r=ecm 
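The flat-top weighting idea can be conveyed with a toy realized-kernel computation: autocovariances of high-frequency returns are weighted by a kernel equal to one near the origin (the "flat top", which kills noise-induced bias in the leading lags) and decaying thereafter. This is only a sketch, not the paper's estimator; the linear decay, the bandwidth `H`, and the flat region `c` are illustrative choices.

```python
import numpy as np

def flat_top_realized_kernel(returns, H=10, c=1):
    """Toy realized-kernel estimate of quadratic variation: the h-th
    return autocovariance gets weight 1 for lags up to c (the flat top)
    and a linearly decaying weight thereafter."""
    r = np.asarray(returns, float)
    n = len(r)

    def gamma(h):
        # h-th realized autocovariance of the returns
        return np.dot(r[h:], r[:n - h])

    rk = gamma(0)
    for h in range(1, H + 1):
        w = 1.0 if h <= c else max(0.0, 1.0 - (h - c) / (H - c + 1.0))
        rk += 2.0 * w * gamma(h)  # symmetric lags enter twice
    return rk
```

With H = 0 the estimator collapses to plain realized variance (the sum of squared returns); the extra weighted autocovariance terms are what absorb serially dependent noise.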
By:  Christophe Ley; Yves-Caoimhin Swan; Baba Thiam; Thomas Verdebout 
Abstract:  In this paper, we provide R-estimators of the location of a rotationally symmetric distribution on the unit sphere of R^k. In order to do so we first prove the local asymptotic normality property of a sequence of rotationally symmetric models; this is a nonstandard result due to the curved nature of the unit sphere. We then construct our estimators by adapting the Le Cam one-step methodology to spherical statistics and ranks. We show that they are asymptotically normal under any rotationally symmetric distribution and achieve the efficiency bound under a specific density. Their small sample behavior is studied via a Monte Carlo simulation and our methodology is illustrated on geological data. 
Keywords:  local asymptotic normality; rank-based methods; R-estimation; spherical statistics 
Date:  2011–09 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2013/96823&r=ecm 
By:  Firmin Doko Tchatoka (School of Economics and Finance, University of Tasmania) 
Abstract:  This paper investigates the asymptotic size properties of robust subset tests when instruments are left out of the analysis. Recently, robust subset procedures have been developed for testing hypotheses which are specified on subsets of the structural parameters or on the parameters associated with the included exogenous variables. It has been shown that they never overreject the true parameter values even when nuisance parameters are not identified. However, their robustness to instrument exclusion has not been investigated. Instrument exclusion is an important problem in econometrics and there are at least two reasons to be concerned. Firstly, it is difficult in practice to assess whether an instrument has been omitted. For example, some components of the “identifying” instruments that are excluded from the structural equation may be quite uncertain or “left out” of the analysis. Secondly, in many instrumental variable (IV) applications, an infinite number of instruments are available for use in large sample estimation. This is particularly the case with most time series models. If a given variable, say Xt, is a legitimate instrument, so too are its lags Xt-1, Xt-2, and so on. Hence, instrument exclusion seems highly likely in most practical situations. After formulating a general asymptotic framework which allows one to study this issue in a convenient way, I consider two main setups: (1) the missing instruments are (possibly) relevant, and (2) they are asymptotically weak. In both setups, I show that all subset procedures studied are in general consistent against instrument exclusion (hence asymptotically invalid for the subset hypothesis of interest). I characterize cases where consistency may not hold; even there, the asymptotic distribution is modified in a way that would lead to size distortions in large samples. I propose a “rule of thumb” which allows practitioners to assess whether a missing instrument is detrimental to subset procedures. 
I present a Monte Carlo experiment confirming that the subset procedures are unreliable when instruments are missing. 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:tas:wpaper:10668&r=ecm 
By:  Yushu Li (Centre for Labour Market Policy Research (CAFO), Department of Economic and Statistics) 
Abstract:  Detecting turning points in unimodal processes has various applications to time series with cyclic periods. Related techniques are widely explored in the field of statistical surveillance, that is, online turning point detection procedures. This paper first presents a power-controlled turning point detection method based on the theory of the likelihood ratio test in statistical surveillance. Next we show how outliers influence the performance of this methodology. Due to the sensitivity of the surveillance system to outliers, we finally present a wavelet multiresolution analysis (MRA) based outlier elimination approach, which can be combined with the online turning point detection process to alleviate the false alarm problem introduced by the outliers. 
Keywords:  Unimodal, Turning point, Statistical surveillance, Outlier, Wavelet multiresolution, Threshold. 
JEL:  C12 C15 C22 
Date:  2011–07–15 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201129&r=ecm 
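The wavelet-based outlier elimination step can be sketched with a one-level Haar decomposition in which unusually large detail coefficients, which capture localized spikes, are thresholded before reconstruction. This is a minimal illustration, not the authors' procedure; the robust MAD-based threshold rule and the single decomposition level are assumptions.

```python
import numpy as np

def haar_outlier_clean(x, k=3.0):
    """One-level Haar MRA: zero out unusually large detail coefficients
    (localized spikes) and reconstruct the series. Expects even length."""
    x = np.asarray(x, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail coefficients
    # hard-threshold details beyond k robust standard deviations (MAD scale)
    scale = 1.4826 * np.median(np.abs(d - np.median(d)))
    d = np.where(np.abs(d) > k * scale + 1e-12, 0.0, d)
    # inverse Haar transform
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y
```

In a surveillance pipeline this cleaning step would run before the likelihood-ratio detection statistic is updated, so isolated spikes do not trigger false turning-point alarms.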
By:  Alan I. Barreca; Jason M. Lindo; Glen R. Waddell 
Abstract:  This study uses Monte Carlo simulations to demonstrate that regression-discontinuity designs arrive at biased estimates when attributes related to outcomes predict heaping in the running variable. After showing that our usual diagnostics are poorly suited to identifying this type of problem, we provide alternatives. We also demonstrate how the magnitude and direction of the bias varies with bandwidth choice and the location of the data heaps relative to the treatment threshold. Finally, we discuss approaches to correcting for this type of problem before considering these issues in several nonsimulated environments. 
JEL:  C14 C21 I12 
Date:  2011–09 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:17408&r=ecm 
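The core point, that non-random heaping in the running variable biases regression-discontinuity estimates, can be reproduced in a few lines of simulation. The data-generating process, bandwidth, and the drop-the-heaped-observations (donut-type) comparison below are illustrative choices, not the authors' exact designs.

```python
import numpy as np

rng = np.random.default_rng(0)

def rd_estimate(x, y, cutoff=0.0, bandwidth=5.0):
    """Local linear RD: separate linear fits on each side of the cutoff
    within the bandwidth; the estimate is the jump at the cutoff."""
    left = (x >= cutoff - bandwidth) & (x < cutoff)
    right = (x >= cutoff) & (x <= cutoff + bandwidth)
    fit_l = np.polyfit(x[left], y[left], 1)
    fit_r = np.polyfit(x[right], y[right], 1)
    return np.polyval(fit_r, cutoff) - np.polyval(fit_l, cutoff)

# Running variable with non-random heaping at multiples of 5:
# "heaped" units (think rounded self-reports) also have higher outcomes.
n = 2000
x = rng.uniform(-10, 10, n)
heaped = rng.random(n) < 0.3
x[heaped] = np.round(x[heaped] / 5.0) * 5.0
true_effect = 1.0
y = true_effect * (x >= 0) + 0.1 * x + 2.0 * heaped + rng.normal(0.0, 0.5, n)

naive = rd_estimate(x, y)                    # contaminated by the heap at the cutoff
donut = rd_estimate(x[~heaped], y[~heaped])  # drop heaped observations
```

Because the heap at zero lands entirely on the treated side of the cutoff, the naive estimate absorbs the heaped units' outcome shift, while dropping them recovers an estimate near the true effect.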
By:  Drichoutis, Andreas 
Abstract:  Interaction terms are often misinterpreted in the empirical economics literature by assuming that the coefficient of interest represents unconditional marginal changes. I present the correct way to estimate conditional marginal changes in a series of nonlinear models including (ordered) logit/probit regressions, censored and truncated regressions. The linear regression model is used as the benchmark case. 
Keywords:  interaction terms; ordered probit; ordered logit; truncated regression; censored regression; nonlinear models 
JEL:  C51 C12 C24 C25 
Date:  2011–07 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:33251&r=ecm 
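For the logit case, the conditional interaction effect is the cross-partial derivative of the response probability, which is not the raw interaction coefficient. A minimal sketch (the coefficient and covariate values below are arbitrary):

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def logit_interaction_effect(b0, b1, b2, b12, x1, x2):
    """Cross-partial derivative of P(y=1 | x1, x2) = logistic(b0 + b1*x1
    + b2*x2 + b12*x1*x2) with respect to x1 and x2 -- the conditional
    interaction effect, which differs from the coefficient b12 itself."""
    z = b0 + b1 * x1 + b2 * x2 + b12 * x1 * x2
    p = logistic(z)
    d1 = p * (1.0 - p)           # first derivative of the logistic CDF
    d2 = d1 * (1.0 - 2.0 * p)    # second derivative
    return b12 * d1 + (b1 + b12 * x2) * (b2 + b12 * x1) * d2
```

Note that the cross-partial is generally nonzero even when b12 = 0, because it also involves the product of the two main-effect terms and the curvature of the link, one reason raw coefficients mislead in nonlinear models.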
By:  Anders Bredahl Kock (Aarhus University and CREATES); Timo Teräsvirta (Aarhus University and CREATES) 
Abstract:  In this paper we consider the forecasting performance of a well-defined class of flexible models, the so-called single hidden-layer feedforward neural network models. A major aim of our study is to find out whether they, due to their flexibility, are as useful tools in economic forecasting as some previous studies have indicated. When forecasting with neural network models one faces several problems, all of which influence the accuracy of the forecasts. First, neural networks are often hard to estimate due to their highly nonlinear structure. In fact, their parameters are not even globally identified. Recently, White (2006) presented a solution that amounts to converting the specification and nonlinear estimation problem into a linear model selection and estimation problem. He called this procedure QuickNet, and we shall compare its performance to two other procedures which are built on the linearisation idea: the Marginal Bridge Estimator and Autometrics. Second, one must decide whether forecasting should be carried out recursively or directly. Comparisons of these two methods exist for linear models, and here these comparisons are extended to neural networks. Finally, a nonlinear model such as the neural network model is not appropriate if the data are generated by a linear mechanism. Hence, it might be appropriate to test the null of linearity prior to building a nonlinear model. We investigate whether this kind of pretesting improves the forecast accuracy compared to the case where this is not done. 
Keywords:  artificial neural network, forecast comparison, model selection, nonlinear autoregressive model, nonlinear time series, root mean square forecast error, Wilcoxon’s signed-rank test 
JEL:  C22 C45 C52 C53 
Date:  2011–08–26 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201127&r=ecm 
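The recursive-versus-direct distinction is easiest to see in a linear AR(1) toy example: the recursive forecast iterates a one-step model h times, while the direct forecast fits a separate regression of y_t on y_{t-h}. This sketch uses OLS and plain autoregressions, not the neural network models of the paper:

```python
import numpy as np

def fit_ar1(y):
    """OLS intercept and slope of y_t on y_{t-1}."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return beta  # [intercept, slope]

def recursive_forecast(y, h):
    """Fit a one-step model once, then iterate it h steps ahead."""
    c, phi = fit_ar1(y)
    f = y[-1]
    for _ in range(h):
        f = c + phi * f
    return f

def direct_forecast(y, h):
    """Fit a separate h-step model: regress y_t directly on y_{t-h}."""
    X = np.column_stack([np.ones(len(y) - h), y[:-h]])
    beta, *_ = np.linalg.lstsq(X, y[h:], rcond=None)
    return beta[0] + beta[1] * y[-1]
```

For a correctly specified linear model the two approaches coincide; they can diverge under misspecification or nonlinearity, which is what makes the comparison interesting for neural networks.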
By:  Stefano Grassi (Aarhus University and CREATES); Tommaso Proietti (University of Sydney) 
Abstract:  An important issue in modelling economic time series is whether key unobserved components representing trends, seasonality and calendar components are deterministic or evolutive. We address this question by applying a recently proposed Bayesian variable selection methodology to an encompassing linear mixed model that features, along with deterministic effects, additional random explanatory variables that account for the evolution of the underlying level, slope, seasonality and trading days. Variable selection is performed by estimating the posterior model probabilities using a suitable Gibbs sampling scheme. The paper conducts an extensive empirical application on a large and representative set of monthly time series concerning industrial production and retail turnover. We find strong support for the presence of stochastic trends in the series, either in the form of a time-varying level, or, less frequently, of a stochastic slope, or both. Seasonality is a more stable component: in only 70% of the cases were we able to select at least one stochastic trigonometric cycle out of the six possible cycles. Most frequently the time variation is found in correspondence with the fundamental and the first harmonic cycles. An interesting and intuitively plausible finding is that the probability of estimating time-varying components increases with the sample size available. However, even for very large sample sizes we were unable to find stochastically varying calendar effects. 
Keywords:  Bayesian model selection, stationarity, unit roots, stochastic trends, variable selection. 
JEL:  E32 E37 C53 
Date:  2011–09–02 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201130&r=ecm 
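The flavor of Gibbs-based Bayesian variable selection can be conveyed with a minimal spike-and-slab sampler for an ordinary linear regression; the paper's encompassing linear mixed model for trends and seasonals is far richer. The prior settings, the fixed noise variance, and the seed below are illustrative assumptions.

```python
import numpy as np

def spike_slab_gibbs(y, X, n_iter=2000, burn=500, tau2=10.0, sigma2=1.0, p=0.5):
    """Minimal Gibbs sampler for spike-and-slab variable selection in
    y = X beta + eps, eps ~ N(0, sigma2 I). Each coefficient is either
    exactly zero (spike) or drawn from a N(0, tau2) slab; returns the
    posterior inclusion probability of each predictor. The noise
    variance is treated as known for brevity."""
    rng = np.random.default_rng(42)
    n, k = X.shape
    beta = np.zeros(k)
    gamma = np.zeros(k, dtype=bool)
    incl = np.zeros(k)
    for it in range(n_iter):
        for j in range(k):
            # residual with predictor j's contribution removed
            r = y - X @ beta + X[:, j] * beta[j]
            s = X[:, j] @ X[:, j]
            prec = s / sigma2 + 1.0 / tau2      # posterior precision of beta_j
            mu = (X[:, j] @ r / sigma2) / prec  # posterior mean of beta_j
            # log Bayes factor of slab vs spike, then inclusion probability
            log_bf = -0.5 * np.log(tau2 * prec) + 0.5 * mu ** 2 * prec
            prob = p / (p + (1.0 - p) * np.exp(-log_bf))
            gamma[j] = rng.random() < prob
            beta[j] = rng.normal(mu, 1.0 / np.sqrt(prec)) if gamma[j] else 0.0
        if it >= burn:
            incl += gamma
    return incl / (n_iter - burn)
```

In the paper's setting the binary indicators play the same role, switching individual stochastic level, slope, and seasonal components in or out, and posterior model probabilities are read off the sampled indicator draws.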
By:  Anders Bredahl Kock (Aarhus University and CREATES); Timo Teräsvirta (Aarhus University and CREATES) 
Abstract:  In this work we consider forecasting macroeconomic variables dur ing an economic crisis. The focus is on a specific class of models, the socalled single hiddenlayer feedforward autoregressive neural net work models. What makes these models interesting in the present context is that they form a class of universal approximators and may be expected to work well during exceptional periods such as major economic crises. These models are often difficult to estimate, and we follow the idea of White (2006) to transform the speci?fication and non linear estimation problem into a linear model selection and estimation problem. To this end we employ three automatic modelling devices. One of them is White's QuickNet, but we also consider Autometrics, well known to time series econometricians, and the Marginal Bridge Estimator, better known to statisticians and microeconometricians. The performance of these three model selectors is compared by look ing at the accuracy of the forecasts of the estimated neural network models. We apply the neural network model and the three modelling techniques to monthly industrial production and unemployment se ries of the G7 countries and the four Scandinavian ones, and focus on forecasting during the economic crisis 20072009. Forecast accuracy is measured by the root mean square forecast error. Hypothesis testing is also used to compare the performance of the different techniques with each other. 
Keywords:  Autometrics, economic forecasting, Marginal Bridge estimator, neural network, nonlinear time series model, Wilcoxon's signedrank test 
JEL:  C22 C45 C52 C53 
Date:  2011–08–26 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201128&r=ecm 
By:  Andreea Halunga; Chris D. Orme; Takashi Yamagata 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:man:sespap:1118&r=ecm 
By:  Oleg Sokolinskiy (Erasmus University Rotterdam); Dick van Dijk (Erasmus University Rotterdam) 
Abstract:  This paper develops a novel approach to modeling and forecasting realized volatility (RV) measures based on copula functions. Copula-based time series models can capture relevant characteristics of volatility such as nonlinear dynamics and long-memory-type behavior in a flexible yet parsimonious way. In an empirical application to daily volatility for S&P500 index futures, we find that the copula-based RV (C-RV) model outperforms conventional forecasting approaches for one-day-ahead volatility forecasts in terms of accuracy and efficiency. Among the copula specifications considered, the Gumbel C-RV model achieves the best forecast performance, which highlights the importance of asymmetry and upper tail dependence for modeling volatility dynamics. Although we find substantial variation in the copula parameter estimates over time, conditional copulas do not improve the accuracy of volatility forecasts. 
Keywords:  Nonlinear dependence; long memory; copulas; volatility forecasting 
JEL:  C22 C53 
Date:  2011–09–05 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20110125&r=ecm 
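The upper tail dependence that distinguishes the Gumbel copula has a closed form, lambda_U = 2 - 2^(1/theta), which is zero at theta = 1 (independence) and grows with theta. A small sketch (the parameter values are arbitrary):

```python
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel copula CDF; theta >= 1, with theta = 1 giving independence."""
    return np.exp(-((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta))

def gumbel_upper_tail_dependence(theta):
    """Upper tail dependence coefficient: lambda_U = 2 - 2**(1/theta)."""
    return 2.0 - 2.0 ** (1.0 / theta)
```

Positive upper tail dependence means joint extremes of consecutive volatilities are more likely than under a Gaussian dependence structure, which is the feature the abstract credits for the Gumbel specification's forecast performance.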
By:  Peter Fuleky (UHERO, and Department of Economics, University of Hawaii at Manoa) 
Abstract:  When estimating the parameters of a process, researchers can choose the reference unit of time (unit period) for their study. Frequently, they set the unit period equal to the observation interval. However, I show that decoupling the unit period from the observation interval facilitates the comparison of parameter estimates across studies with different data sampling frequencies. If the unit period is standardized (for example annualized) across these studies, then the parameters will represent the same attributes of the underlying process, and their interpretation will be independent of the sampling frequency. 
Keywords:  Unit Period, Sampling Frequency, Bias, Time Series. 
JEL:  C13 C22 C51 C82 
Date:  2011–08 
URL:  http://d.repec.org/n?u=RePEc:hae:wpaper:20114&r=ecm 
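The point can be illustrated with a first-order autoregression: if the process is viewed as a discretely sampled mean-reverting (Ornstein-Uhlenbeck-type) process, the frequency-dependent AR(1) coefficient maps into an annualized mean-reversion rate that is the same whatever the observation interval. This mapping is one concrete example of standardizing the unit period, not the paper's general treatment.

```python
import numpy as np

def annualize_ar1(phi, periods_per_year):
    """Convert a discrete-time AR(1) coefficient estimated at some
    sampling frequency into the mean-reversion rate kappa of the
    underlying continuous-time process (unit period = one year),
    assuming exact discretization: phi = exp(-kappa * delta) with
    delta = 1 / periods_per_year."""
    return -np.log(phi) * periods_per_year

# the same process sampled monthly and quarterly implies the same kappa
kappa = 0.5
phi_monthly = np.exp(-kappa / 12)
phi_quarterly = np.exp(-kappa / 4)
```

Comparing phi_monthly with phi_quarterly directly would wrongly suggest different persistence; comparing the annualized kappa values makes the two studies' estimates commensurable.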
By:  Jaqueson K. Galimberti; Marcelo L. Moura 
Abstract:  Incorporating survey forecasts into a forecast-augmented Hodrick-Prescott filter, we document a considerable improvement in the reliability of real-time US output-gap estimation. The odds of extracting the wrong sign of output-gap estimates are found to fall by almost half, and the magnitude of revisions to these estimates amounts to only three-fifths of the output gap's average size, against the usual one-to-one ratio. We further analyze how this end-of-sample uncertainty evolves as time goes on and observations accumulate, showing that a 90% rate of correct assessments of the output-gap sign can be attained with a delay of five quarters using survey forecasts. 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:man:cgbcrp:159&r=ecm 
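For reference, the Hodrick-Prescott trend itself has a simple closed form: the penalized least-squares problem is solved by one linear system. This sketch implements the plain (not forecast-augmented) filter; the smoothing parameter 1600 is the conventional quarterly choice.

```python
import numpy as np

def hp_trend(y, lam=1600.0):
    """Hodrick-Prescott trend: minimize sum (y - tau)^2 plus
    lam * sum (second differences of tau)^2, which has the closed-form
    solution (I + lam * D'D) tau = y, with D the second-difference
    operator. Forecast augmentation would append forecasts to y before
    filtering to mitigate the end-of-sample problem."""
    y = np.asarray(y, float)
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
```

The filter reproduces a linear trend exactly (the penalty term vanishes), which also hints at the end-of-sample problem: the last few trend points lean heavily on the most recent observations, which is precisely where appended survey forecasts help.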
By:  Luis E. Rojas 
Abstract:  This paper derives a link between the forecasts of professional forecasters and a DSGE model. I show that the forecasts of a professional forecaster can be incorporated into the state space representation of the model by allowing the measurement error of the forecast and the structural shocks to be correlated. The parameters capturing this correlation are reduced form parameters that allow one to address two issues: (i) how the forecasts of the professional forecaster can be exploited as a source of information for the estimation of the model, and (ii) how to characterize the deviations of the professional forecaster from an ideal complete-information forecaster in terms of the shocks and the structure of the economy. 
Date:  2011–08–15 
URL:  http://d.repec.org/n?u=RePEc:col:000094:008945&r=ecm 
By:  Gaurab Aryal; Isabelle Perrigne; Quang Vuong 
Abstract:  In contrast to Aryal, Perrigne and Vuong (2009), this note shows that, in an insurance model with multidimensional screening, the joint distribution of risk and risk aversion is not identified when the only information available is whether the insuree has been involved in some accident. 
JEL:  C14 L62 D82 D86 
Date:  2011–08 
URL:  http://d.repec.org/n?u=RePEc:acb:cbeeco:2011552&r=ecm 