
on Econometrics 
By:  Morten Ørregaard Nielsen (School of Economics and Management, University of Aarhus, Denmark) 
Abstract:  This paper presents a family of simple nonparametric unit root tests indexed by one parameter, d, and containing Breitung's (2002) test as the special case d = 1. It is shown that (i) each member of the family with d > 0 is consistent, (ii) the asymptotic distribution depends on d, and thus reflects the parameter chosen to implement the test, and (iii) since the asymptotic distribution depends on d and the test remains consistent for all d > 0, it is possible to analyze the power of the test for different values of d. The usual Phillips-Perron or Dickey-Fuller type tests are indexed by bandwidth, lag length, etc., but have none of these three properties. It is shown that members of the family with d < 1 have higher asymptotic local power than the Breitung (2002) test, and when d is small the asymptotic local power of the proposed nonparametric test is relatively close to the parametric power envelope, particularly in the case with a linear time trend. Furthermore, GLS detrending is shown to improve power when d is small, which is not the case for Breitung's (2002) test. Simulations demonstrate that when applying a sieve bootstrap procedure, the proposed variance ratio test has very good size properties, with finite sample power that is higher than that of Breitung's (2002) test and even rivals the (nearly) optimal parametric GLS detrended augmented Dickey-Fuller test with lag length chosen by an information criterion. 
Keywords:  Augmented Dickey-Fuller test, fractional integration, GLS detrending, nonparametric, nuisance parameter, tuning parameter, power envelope, unit root test, variance ratio 
JEL:  C22 
Date:  2008–06–30 
URL:  http://d.repec.org/n?u=RePEc:aah:create:200836&r=ecm 
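The variance ratio idea behind this family of tests can be sketched numerically. The snippet below is a schematic illustration, not Nielsen's exact statistic or normalization: it applies the standard fractional cumulation operator (1 - L)^(-d), whose weights follow the usual binomial recursion, and compares the scaled variance of the cumulated series with that of the original series. For d = 1 the cumulation reduces to an ordinary partial sum, the Breitung (2002) case.

```python
import numpy as np

def frac_cumsum(y, d):
    """Apply (1 - L)^(-d) to y (fractional cumulation of order d).

    Weights pi_j = Gamma(j + d) / (Gamma(d) Gamma(j + 1)) satisfy the
    recursion pi_0 = 1, pi_j = pi_{j-1} * (j - 1 + d) / j. For d = 1
    every weight equals 1, so this is the ordinary partial sum.
    """
    y = np.asarray(y, dtype=float)
    T = len(y)
    pi = np.ones(T)
    for j in range(1, T):
        pi[j] = pi[j - 1] * (j - 1 + d) / j
    # x_t = sum_{j=0}^{t-1} pi_j * y_{t-j}: a truncated convolution
    return np.convolve(pi, y)[:T]

def variance_ratio(y, d=1.0):
    """Schematic Breitung-type variance ratio: scaled variance of the
    order-d fractional cumulation over the variance of the series.
    Small values are evidence against a unit root."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    x = frac_cumsum(y, d)
    return (x @ x) / (T ** (2 * d) * (y @ y))
```

With d = 0 the cumulation is the identity, so the ratio equals 1; varying d between 0 and 1 traces out the family of statistics the abstract describes.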
By:  Karl Schlag 
Abstract:  We present an exact test for whether two random variables that have known bounds on their support are negatively correlated. The alternative hypothesis is that they are not negatively correlated. No assumptions are made on the underlying distributions. We show by example that the Spearman rank correlation test, as the competing exact test of correlation in nonparametric settings, rests on an additional assumption on the data generating process without which it is not valid as a test for correlation. We then show how to test for the significance of the slope in a linear regression analysis that involves a single independent variable and where outcomes of the dependent variable belong to a known bounded set. 
Keywords:  Correlation test, exact hypothesis testing, distribution-free, nonparametric, simple linear regression 
JEL:  C12 C14 C01 
Date:  2008–06 
URL:  http://d.repec.org/n?u=RePEc:upf:upfgen:1097&r=ecm 
By:  Masayuki Hirukawa (Department of Economics, Northern Illinois University); Nikolay Gospodinov (Department of Economics, Concordia University and CIREQ) 
Abstract:  This paper considers a nonstandard kernel regression for strongly mixing processes when the regressor is nonnegative. The nonparametric regression is implemented using asymmetric kernels [Gamma (Chen, 2000b), Inverse Gaussian and Reciprocal Inverse Gaussian (Scaillet, 2004) kernels] that possess some appealing properties such as lack of boundary bias and adaptability in the amount of smoothing. The paper investigates the asymptotic and finite-sample properties of the asymmetric kernel Nadaraya-Watson, local linear, and reweighted Nadaraya-Watson estimators. Pointwise weak consistency, rates of convergence and asymptotic normality are established for each of these estimators. As an important economic application of asymmetric kernel regression estimators, we reexamine the problem of estimating scalar diffusion processes. 
Date:  2008–06 
URL:  http://d.repec.org/n?u=RePEc:tky:fseres:2008cf573&r=ecm 
By:  Pierre-Eric Treyens (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - Ecole des Hautes Etudes en Sciences Sociales - CNRS : UMR6579) 
Abstract:  We consider linear regression models in which the disturbances are either Gaussian or non-Gaussian. Using Edgeworth expansions, we compute the exact errors in rejection probability (ERPs) for all one-restriction tests (asymptotic and bootstrap) that can occur in these linear models. More precisely, we show that the ERP is the same for the asymptotic test as for the classical parametric bootstrap test it is based on as soon as the third cumulant is non-null. On the other hand, the nonparametric bootstrap almost always performs better than the parametric bootstrap, with two exceptions. The first occurs when the third and fourth cumulants are null: in this case the parametric and nonparametric bootstraps provide exactly the same ERPs. The second occurs when we perform a t-test, or its associated bootstrap test (parametric or not), in the models y = μ + u and y = ax + u, where the disturbances have a non-null kurtosis coefficient and a skewness coefficient equal to zero. In that case, the ERPs of any test (asymptotic or bootstrap) we perform are of the same order. Finally, we provide a new parametric bootstrap using the first four moments of the distribution of the residuals, which is as accurate as a nonparametric bootstrap that uses these first four moments implicitly. We introduce it as the parametric bootstrap considering higher moments (CHM), and thus speak of the CHM parametric bootstrap. 
Keywords:  Nonparametric bootstrap, parametric bootstrap, cumulants, skewness, kurtosis 
Date:  2008–06–20 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:halshs00289456_v1&r=ecm 
By:  Erik Meijer; Susann Rohwedder; Tom Wansbeek 
Abstract:  The authors study the prediction of latent variables in a finite mixture of linear structural equation models. The latent variables can be viewed as welldefined variables measured with error or as theoretical constructs that cannot be measured objectively, but for which proxies are observed. The finite mixture component may serve different purposes: it can denote an unobserved segmentation in subpopulations such as market segments, or it can be used as a nonparametric way to estimate an unknown distribution. In the first interpretation, it forms an additional discrete latent variable in an otherwise continuous latent variable model. Different criteria can be employed to derive "optimal" predictors of the latent variables, leading to a taxonomy of possible predictors. The authors derive the theoretical properties of these predictors. Special attention is given to a mixture that includes components with degenerate distributions. They then apply the theory to the optimal estimation of individual earnings when two independent observations are available: one from survey data and one from register data. The discrete components of the model represent observations with or without measurement error, and with either a correct match or a mismatch between the two data sources. 
Keywords:  factor scores, measurement error, finite mixture, validation study 
JEL:  J39 C39 C81 
Date:  2008–06 
URL:  http://d.repec.org/n?u=RePEc:ran:wpaper:584&r=ecm 
By:  Joseph G. Altonji (Department of Economics, Yale University and NBER); Hidehiko Ichimura (Faculty of Economics, University of Tokyo); Taisuke Otsu (Faculty of Economics, Yale University) 
Abstract:  We present a simple way to estimate the effects of changes in a vector of observable variables X on a limited dependent variable Y when Y is a general nonseparable function of X and unobservables. We treat models in which Y is censored from above or below or potentially from both. The basic idea is to first estimate the derivative of the conditional mean of Y given X at x with respect to x on the uncensored sample without correcting for the effect of changes in x induced on the censored population. We then correct the derivative for the effects of the selection bias. We propose nonparametric and semiparametric estimators for the derivative. As extensions, we discuss the cases of discrete regressors, measurement error in dependent variables, and endogenous regressors in a cross section and panel data context. 
Date:  2008–07 
URL:  http://d.repec.org/n?u=RePEc:tky:fseres:2008cf574&r=ecm 
By:  Wolfgang Härdle; Nikolaus Hautsch; Uta Pigorsch 
Abstract:  Measuring and modeling financial volatility is the key to derivative pricing, asset allocation and risk management. The recent availability of high-frequency data allows for refined methods in this field. In particular, more precise measures of daily or lower-frequency volatility can be obtained by summing over squared high-frequency returns. In turn, this so-called realized volatility can be used for more accurate model evaluation and description of the dynamic and distributional structure of volatility. Moreover, nonparametric measures of systematic risk are attainable, which can straightforwardly be used to model the commonly observed time variation in the betas. The discussion of these new measures and methods is accompanied by an empirical illustration using high-frequency data on IBM stock and the DJIA index. 
Keywords:  Realized Volatility, Realized Betas, Volatility Modeling 
JEL:  C13 C14 C22 C52 C53 
Date:  2008–06 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2008045&r=ecm 
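The two measures the abstract refers to are simple to compute. The sketch below is a minimal illustration of the basic definitions, realized variance as the sum of squared intraday log returns and a realized beta as a realized-covariance-to-realized-variance ratio; it omits the microstructure-noise and sampling-frequency refinements the paper discusses.

```python
import numpy as np

def realized_variance(prices):
    # Realized variance for one day: sum of squared intraday log returns.
    r = np.diff(np.log(np.asarray(prices, dtype=float)))
    return np.sum(r ** 2)

def realized_beta(asset_prices, market_prices):
    # Nonparametric systematic-risk measure: realized covariance of the
    # asset and market returns over the realized variance of the market.
    ra = np.diff(np.log(np.asarray(asset_prices, dtype=float)))
    rm = np.diff(np.log(np.asarray(market_prices, dtype=float)))
    return np.sum(ra * rm) / np.sum(rm ** 2)
```

Computing these day by day yields the daily realized-volatility and realized-beta series whose dynamics the paper models.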
By:  David Ardia (University of Fribourg); Lennart F. Hoogerheide (Erasmus University Rotterdam); Herman K. van Dijk (Erasmus University Rotterdam) 
Abstract:  This paper presents the R package AdMit, which provides functions to approximate and sample from a target distribution given only a kernel of the target density function. The core algorithm is the function AdMit, which fits an adaptive mixture of Student-t distributions to the density of interest via its kernel function. Importance sampling or the independence chain Metropolis-Hastings algorithm is then used to obtain quantities of interest for the target density, using the fitted mixture as the importance or candidate density. The estimation procedure is fully automatic and thus avoids the time-consuming and difficult task of tuning a sampling algorithm. The relevance of the package is shown in two examples. The first illustrates in detail the use of the functions provided by the package on a bivariate bimodal distribution. The second shows the relevance of the adaptive mixture procedure through the Bayesian estimation of a mixture of ARCH model fitted to foreign exchange log-returns data. The methodology is compared to standard cases of importance sampling and the Metropolis-Hastings algorithm using a naive candidate, and to the Griddy-Gibbs approach. 
Keywords:  adaptive mixture; Student-t distributions; importance sampling; independence chain Metropolis-Hastings algorithm; Bayesian; R software 
JEL:  C11 C15 
Date:  2008–06–18 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20080062&r=ecm 
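The importance sampling step that AdMit automates can be illustrated generically. The sketch below is not the AdMit API and uses a single fixed Student-t candidate rather than an adaptively fitted mixture; it only shows the underlying idea that a kernel (unnormalised log density) of the target suffices for self-normalised importance sampling.

```python
import math
import numpy as np

def t_logpdf(x, df, loc=0.0, scale=1.0):
    # Log density of a location-scale Student-t candidate.
    z = (x - loc) / scale
    return (math.lgamma((df + 1) / 2) - math.lgamma(df / 2)
            - 0.5 * math.log(df * math.pi) - math.log(scale)
            - (df + 1) / 2 * np.log1p(z ** 2 / df))

def snis_mean(log_kernel, df=5, loc=0.0, scale=1.5, n=200_000, seed=0):
    """Self-normalised importance sampling of the target mean: draw from
    the Student-t candidate, weight by target kernel over candidate
    density. Only an unnormalised log density of the target is needed."""
    rng = np.random.default_rng(seed)
    x = loc + scale * rng.standard_t(df, size=n)
    logw = log_kernel(x) - t_logpdf(x, df, loc, scale)
    w = np.exp(logw - logw.max())        # stabilised weights
    return np.sum(w * x) / np.sum(w)

# Target known only up to a constant: standard normal kernel exp(-x^2/2).
est = snis_mean(lambda x: -0.5 * x ** 2)
```

A heavy-tailed candidate (here a scaled t with 5 degrees of freedom) keeps the importance weights bounded for a normal target; AdMit's contribution is fitting the mixture candidate to an arbitrary, possibly multimodal, kernel automatically.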
By:  Rangan Gupta (Department of Economics, University of Pretoria); Alain Kabundi (Department of Economics and Econometrics, University of Johannesburg) 
Abstract:  This paper uses two types of large-scale models, namely the Dynamic Factor Model (DFM) and Bayesian Vector Autoregressive (BVAR) models based on alternative hyperparameters specifying the prior, which accommodate 267 macroeconomic time series, to forecast key macroeconomic variables of a small open economy. Using South Africa as a case study and the per capita growth rate, the inflation rate, and the short-term nominal interest rate as our variables of interest, we estimate the two types of models over the period 1980Q1 to 2006Q4, and forecast one to four quarters ahead over the 24-quarter out-of-sample horizon of 2001Q1 to 2006Q4. The forecast performances of the two large-scale models are compared with each other, and also with an unrestricted three-variable Vector Autoregressive (VAR) model and BVAR models with identical hyperparameter values as the large-scale BVARs. The results, based on the average Root Mean Squared Errors (RMSEs), indicate that the large-scale models are better suited for forecasting the three macroeconomic variables of our choice, and that, between the two types of large-scale models, the DFM holds the edge. 
Keywords:  Dynamic Factor Model, BVAR, Forecast Accuracy 
JEL:  C11 C13 C33 C53 
Date:  2008–06 
URL:  http://d.repec.org/n?u=RePEc:pre:wpaper:200816&r=ecm 
By:  D.S.G. Pollock 
Abstract:  An account is given of various filtering procedures that have been implemented in a computer program, which can be used in analysing econometric time series. The program provides some new filtering procedures that operate primarily in the frequency domain. Their advantage is that they are able to achieve clear separations of components of the data that reside in adjacent frequency bands in a way that the conventional time-domain methods cannot. Several procedures that operate exclusively within the time domain have also been implemented in the program. Amongst these are the bandpass filters of Baxter and King and of Christiano and Fitzgerald, which have been used in estimating business cycles. The Henderson filter, the Butterworth filter and the Leser or Hodrick-Prescott filter are also implemented. These are also described in this paper. Econometric filtering procedures must be able to cope with the trends that are typical of economic time series. If a trended data sequence has been reduced to stationarity by differencing prior to its filtering, then the filtered sequence will need to be reinflated. This can be achieved within the time domain via the summation operator, which is the inverse of the difference operator. The effects of the differencing can also be reversed within the frequency domain by recourse to the frequency-response function of the summation operator. 
Date:  2008–06 
URL:  http://d.repec.org/n?u=RePEc:lec:leecon:08/21&r=ecm 
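The time-domain reinflation step described in the abstract is just the cumulative sum undoing the difference operator. A minimal sketch, using an illustrative trend-plus-cycle series (not data from the paper):

```python
import numpy as np

# A trended series reduced to stationarity by differencing can be
# reinflated with the summation operator (the inverse of the difference
# operator), provided the initial value is retained.
t = np.arange(50, dtype=float)
y = 0.5 * t + np.sin(t / 3.0)                       # trend plus cycle
dy = np.diff(y)                                     # difference operator (1 - L)
y_restored = y[0] + np.concatenate(([0.0], np.cumsum(dy)))  # summation operator
```

Filtering would be applied to `dy` before summing back; the same inversion can be done in the frequency domain through the frequency-response function of the summation operator, as the abstract notes.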
By:  K. Barhoumi; S. Benk; R. Cristadoro; A. Den Reijer; A. Jakaitiene; P. Jelonek; A. Rua; K. Ruth; C. Van Nieuwenhuyze; G. Rünstler (ECB, DG Research) 
Abstract:  This paper evaluates different models for the short-term forecasting of real GDP growth in ten selected European countries and the euro area as a whole. Purely quarterly models are compared with models designed to exploit early releases of monthly indicators for the nowcast and forecast of quarterly GDP growth. Amongst the latter, we consider small bridge equations and forecast equations in which the bridging between monthly and quarterly data is achieved through a regression on factors extracted from large monthly datasets. The forecasting exercise is performed in a simulated real-time context, which takes account of publication lags in the individual series. In general, we find that models that exploit monthly information outperform models that use purely quarterly data and, amongst the former, factor models perform best. 
Keywords:  Bridge models, Dynamic factor models, real-time data flow 
JEL:  E37 C53 
Date:  2008–06 
URL:  http://d.repec.org/n?u=RePEc:nbb:reswpp:20080617&r=ecm 
By:  Sonali Das (LQM, CSIR, Pretoria); Rangan Gupta (Department of Economics, University of Pretoria); Alain Kabundi (Department of Economics and Econometrics, University of Johannesburg) 
Abstract:  This paper uses the Dynamic Factor Model (DFM) framework, which accommodates a large cross-section of macroeconomic time series, for forecasting regional house price inflation. As a case study, we use data on house price inflation for five metropolitan areas of South Africa. The DFM used in this study contains 282 quarterly series observed over the period 1980Q1 to 2006Q4. The results, based on the Mean Absolute Errors of one- to four-quarters-ahead out-of-sample forecasts over the period 2001Q1 to 2006Q4, indicate that, in the majority of cases, the DFM outperforms the VARs, both classical and Bayesian, with the latter incorporating both spatial and non-spatial models. Our results thus indicate the blessing of dimensionality. 
Keywords:  Dynamic Factor Model, VAR, BVAR, Forecast Accuracy 
JEL:  C11 C13 C33 C53 
Date:  2008–06 
URL:  http://d.repec.org/n?u=RePEc:pre:wpaper:200814&r=ecm 
By:  James D. Hamilton 
Abstract:  Although ARCH-related models have proven quite popular in finance, they are less frequently used in macroeconomic applications. In part this may be because macroeconomists are usually more concerned about characterizing the conditional mean rather than the conditional variance of a time series. This paper argues that even if one's interest is in the conditional mean, correctly modeling the conditional variance can still be quite important, for two reasons. First, OLS standard errors can be quite misleading, with a "spurious regression" possibility in which a true null hypothesis is asymptotically rejected with probability one. Second, the inference about the conditional mean can be inappropriately influenced by outliers and high-variance episodes if one has not incorporated the conditional variance directly into the estimation of the mean, and infinite relative efficiency gains may be possible. The practical relevance of these concerns is illustrated with two empirical examples from the macroeconomics literature, the first looking at market expectations of future changes in Federal Reserve policy, and the second looking at changes over time in the Fed's adherence to a Taylor Rule. 
JEL:  E52 
Date:  2008–06 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:14151&r=ecm 
By:  Monchuk, Daniel C.; Hayes, Dermot J.; Miranowski, John 
Abstract:  This study examines correlates of aggregate county income growth across the 48 contiguous states from 1990 to 2001. Because visual inspection of the variable to be explained shows a clear spatial relationship, and to control for potentially endogenous variables, we estimate a two-stage spatial error model. Given the lack of theoretical and asymptotic results for such models, we propose and implement a number of spatial bootstrap algorithms, including one allowing for heteroskedasticity, to infer parameter significance. Comparing the marginal effects in rural versus non-rural counties, we find that outdoor recreation and natural amenities favor positive growth in rural counties, that densely populated rural areas enjoy stronger growth, and that property taxes correlate negatively with rural growth. 
Keywords:  county income growth, rural development, spatial bootstrapping. 
Date:  2008–06–23 
URL:  http://d.repec.org/n?u=RePEc:isu:genres:12958&r=ecm 