
New Economics Papers on Econometrics 
By:  Pötscher, Benedikt 
Abstract:  The finite-sample as well as the asymptotic distribution of Leung and Barron's (2006) model averaging estimator are derived in the context of a linear regression model. An impossibility result regarding the estimation of the finite-sample distribution of the model averaging estimator is obtained. 
Keywords:  Model mixing; model aggregation; combination of estimators; model selection; finite sample distribution; asymptotic distribution; estimation of distribution 
JEL:  C51 C20 C13 C52 C12 
Date:  2006–03 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:73&r=ecm 
By:  Frank Windmeijer 
Abstract:  This chapter gives an account of the recent literature on estimating models for panel count data. Specifically, the treatment of unobserved individual heterogeneity that is correlated with the explanatory variables and the presence of explanatory variables that are not strictly exogenous are central. Moment conditions are discussed for these types of problems that enable estimation of the parameters by GMM. As standard Wald tests based on efficient two-step GMM estimation results are known to have poor finite-sample behaviour, alternative test procedures that have recently been proposed in the literature are evaluated by means of a Monte Carlo study. 
Keywords:  GMM, Exponential Models, Hypothesis Testing 
JEL:  C12 C13 C23 
Date:  2006–10 
URL:  http://d.repec.org/n?u=RePEc:bri:uobdis:06/591&r=ecm 
By:  Lucia Alessi; Matteo Barigozzi; Marco Capasso 
Abstract:  We propose a new method for multivariate forecasting which combines the Generalized Dynamic Factor Model (GDFM) and the multivariate Generalized Autoregressive Conditionally Heteroskedastic (GARCH) model. We assume that the dynamic common factors are conditionally heteroskedastic. The GDFM, applied to a large number of series, captures the multivariate information and disentangles the common and the idiosyncratic part of each series; it also provides a first identification and estimation of the dynamic factors governing the data set. A time-varying correlation GARCH model applied to the estimated dynamic factors recovers the parameters governing the evolution of their covariances. Then a modified version of the Kalman filter yields a more precise estimate of the static and dynamic factors' in-sample levels and covariances. A method is suggested for predicting conditional out-of-sample variances and covariances of the original data series. Finally, we carry out an empirical application aimed at comparing the volatility forecasting results of our Dynamic Factor GARCH model against the univariate GARCH. 
Keywords:  Dynamic Factors, Multivariate GARCH, Covolatility Forecasting 
Date:  2006–10–02 
URL:  http://d.repec.org/n?u=RePEc:ssa:lemwps:2006/25&r=ecm 
By:  Jun Yu (School of Economics and Social Sciences, Singapore Management University); Renate Meyer (University of Auckland) 
Abstract:  In this paper we show that fully likelihood-based estimation and comparison of multivariate stochastic volatility (SV) models can be easily performed via WinBUGS, a freely available Bayesian software package. Moreover, we introduce to the literature several new specifications which are natural extensions of certain existing models, one of which allows for time-varying correlation coefficients. Ideas are illustrated by fitting nine multivariate SV models to bivariate time series data on weekly exchange rates, including specifications with Granger causality in volatility, time-varying correlations, heavy-tailed error distributions, additive factor structure, and multiplicative factor structure. Empirical results suggest that the most adequate specifications are those that allow for time-varying correlation coefficients. 
Keywords:  Multivariate stochastic volatility; Granger causality in volatility; Heavy-tailed distributions; Time-varying correlations; Factors; MCMC; DIC. 
JEL:  C11 C15 C30 G12 
Date:  2004–11 
URL:  http://d.repec.org/n?u=RePEc:siu:wpaper:232004&r=ecm 
By:  Jan F. Kiviet (Universiteit van Amsterdam); Jerzy Niemczyk (Universiteit van Amsterdam) 
Abstract:  In practice structural equations are often estimated by least squares, thus neglecting any simultaneity. This paper reveals why this may often be justifiable and when. Assuming data stationarity and existence of the first four moments of the disturbances, we find the limiting distribution of the ordinary least-squares (OLS) estimator in a linear simultaneous equations model. In simple static and dynamic models we compare the asymptotic efficiency of this inconsistent estimator with that of consistent simple instrumental variable (IV) estimators and depict cases where, due to relative weakness of the instruments or mildness of the simultaneity, the inconsistent estimator is more precise. In addition, we examine by simulation to what extent these first-order asymptotic findings are reflected in finite samples, taking into account the nonexistence of moments of the IV estimator. Dynamic visualization techniques allow us to appreciate any differences in efficiency over a parameter space of a much higher dimension than just two, viz. in colored animated image sequences (which are not very effective in print, but much more so in live on-screen projection). 
Keywords:  efficiency of an inconsistent estimator; invalid instruments; simultaneity bias; weak instruments; 4D diagrams 
JEL:  C13 C15 C30 
Date:  2006–09–18 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20060078&r=ecm 
By:  Richard G. Anderson; Hailong Qian; Robert H. Rasche 
Abstract:  In this paper, we examine the use of Box and Tiao's (1977) canonical correlation method as an alternative to likelihood-based inference for vector error-correction models. It is now well-known that testing of cointegration ranks based on Johansen's (1995) ML-based method suffers from severe small-sample size distortions. Furthermore, the distributions of empirical economic and financial time series tend to display fat tails, heteroskedasticity and skewness that are inconsistent with the usual distributional assumptions of the likelihood-based approach. The test statistic based on Box-Tiao's canonical correlations shows promise as an alternative to Johansen's ML-based approach for testing the cointegration rank in VECM models. 
Keywords:  Econometric models ; Panel analysis 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:fip:fedlwp:2006050&r=ecm 
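The Box-Tiao idea behind the abstract above can be sketched in a few lines (this is my own simplified illustration, not the authors' procedure): the squared canonical correlations between a vector series and its first lag lie in [0, 1]; eigenvalues close to one flag near-nonstationary directions, while eigenvalues well below one point to stationary, potentially cointegrating, combinations.

```python
import numpy as np

def boxtiao_eigs(Y):
    """Squared canonical correlations between Y_t and Y_{t-1}.
    Values near 1 indicate near-nonstationary directions; values well
    below 1 indicate stationary (potentially cointegrating) combinations."""
    Y0 = Y[1:] - Y[1:].mean(axis=0)    # current values, demeaned
    Y1 = Y[:-1] - Y[:-1].mean(axis=0)  # lagged values, demeaned
    S00 = Y0.T @ Y0
    S11 = Y1.T @ Y1
    S01 = Y0.T @ Y1
    # eigenvalues of S00^{-1} S01 S11^{-1} S10 are the squared
    # canonical correlations, sorted ascending
    M = np.linalg.solve(S00, S01) @ np.linalg.solve(S11, S01.T)
    return np.sort(np.linalg.eigvals(M).real)
```

In a cointegrated bivariate system driven by one random-walk trend, the larger eigenvalue sits close to one and the smaller one well below it; a rank test of this kind counts how many eigenvalues are significantly below unity.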
By:  Fabio C. Bagliano; Claudio Morana 
Abstract:  In this paper a new approach to factor vector autoregressive estimation, based on Stock and Watson (2005), is introduced. Relative to the Stock-Watson approach, the proposed method has the advantage of allowing for a more clear-cut interpretation of the global factors, as well as for the identification of all idiosyncratic shocks. Moreover, it shares with the Stock-Watson approach the advantage of using an iterated procedure in estimation, recovering, asymptotically, full efficiency, and also allowing the imposition of appropriate restrictions concerning the lack of Granger causality of the variables versus the factors. Finally, relative to other available methods, our modelling approach has the advantage of allowing for the joint modelling of all variables, without resorting to long-run forcing hypotheses. An application to large-scale macroeconometric modelling is also provided. 
Keywords:  dynamic factor models, vector autoregressions, principal components analysis. 
JEL:  C32 G1 G15 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:cca:wpaper:28&r=ecm 
By:  A. Colin Cameron; Jonah B. Gelbach; Douglas L. Miller 
Abstract:  In this paper we propose a new variance estimator for OLS as well as for nonlinear estimators such as logit, probit and GMM, that provides cluster-robust inference when there is two-way or multi-way clustering that is non-nested. The variance estimator extends the standard cluster-robust variance estimator or sandwich estimator for one-way clustering (e.g. Liang and Zeger (1986), Arellano (1987)) and relies on similar relatively weak distributional assumptions. Our method is easily implemented in statistical packages, such as Stata and SAS, that already offer cluster-robust standard errors when there is one-way clustering. The method is demonstrated by a Monte Carlo analysis for a two-way random effects model; a Monte Carlo analysis of a placebo law that extends the state-year effects example of Bertrand et al. (2004) to two dimensions; and by application to two studies in the empirical public/labor literature where two-way clustering is present. 
JEL:  C12 C21 C23 
Date:  2006–09 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberte:0327&r=ecm 
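The two-way estimator described above combines three one-way sandwich estimators: cluster on the first dimension, add the estimator clustered on the second, and subtract the estimator clustered on the intersection of the two. A stripped-down OLS sketch (illustrative code of mine, not the authors'; function names are made up):

```python
import numpy as np

def cluster_vcov(X, resid, clusters):
    """One-way cluster-robust (sandwich) variance for OLS coefficients."""
    bread = np.linalg.inv(X.T @ X)
    k = X.shape[1]
    meat = np.zeros((k, k))
    for g in np.unique(clusters):
        idx = clusters == g
        s = X[idx].T @ resid[idx]  # summed score within cluster g
        meat += np.outer(s, s)
    return bread @ meat @ bread

def twoway_cluster_vcov(X, resid, g1, g2):
    """Two-way cluster-robust variance: V(g1) + V(g2) - V(g1 intersect g2)."""
    inter = np.array([f"{a}|{b}" for a, b in zip(g1, g2)])
    return (cluster_vcov(X, resid, g1)
            + cluster_vcov(X, resid, g2)
            - cluster_vcov(X, resid, inter))
```

When the two cluster dimensions coincide, the formula collapses to the usual one-way estimator, which is a handy sanity check; in small samples the subtraction can occasionally make the matrix non-positive-semidefinite, a caveat discussed in this literature.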
By:  Clive Bowsher (Nuffield College, University of Oxford); Roland Meeks (Nuffield College, University of Oxford) 
Abstract:  Functional Signal plus Noise (FSN) models are proposed for analysing the dynamics of a large cross-section of yields or asset prices in which contemporaneous observations are functionally related. The FSN models are used to forecast high dimensional yield curves for US Treasury bonds at the one-month-ahead horizon. The models achieve large reductions in mean square forecast errors relative to a random walk for yields and readily dominate both the Diebold and Li (2006) and random walk forecasts across all maturities studied. We show that the Expectations Theory (ET) of the term structure completely determines the conditional mean of any zero-coupon yield curve. This enables a novel evaluation of the ET in which its one-step-ahead forecasts are compared with those of rival methods such as the FSN models, with the results strongly supporting the growing body of empirical evidence against the ET. Yield spreads do provide important information for forecasting the yield curve, especially in the case of shorter maturities, but not in the manner prescribed by the Expectations Theory. 
Keywords:  Yield curve, term structure, expectations theory, FSN models, functional time series, forecasting, state space form, cubic spline. 
JEL:  C33 C51 C53 E47 G12 
Date:  2006–10–02 
URL:  http://d.repec.org/n?u=RePEc:nuf:econwp:0612&r=ecm 
By:  Jason Allen (Bank of Canada; Queen's University) 
Abstract:  The purpose of this paper is to investigate, using Monte Carlo methods, whether or not Hall's (2000) centered test of overidentifying restrictions for parameters estimated by Generalized Method of Moments (GMM) is more powerful, once the test is size-adjusted, than the standard test introduced by Hansen (1982). The Monte Carlo evidence shows that very little size-adjusted power is gained over the standard uncentered calculation. Empirical examples using Epstein and Zin (1991) preferences demonstrate that the centered and uncentered tests sometimes lead to different conclusions about model specification. 
Keywords:  Size, Power, GMM, Overidentifying restrictions 
JEL:  C15 C52 G12 
Date:  2005–08 
URL:  http://d.repec.org/n?u=RePEc:qed:wpaper:1091&r=ecm 
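For a linear IV model, the difference between Hansen's (1982) uncentered J test and Hall's (2000) centered version comes down to whether the moment contributions are demeaned before forming the second-step weight matrix. A minimal two-step GMM sketch (my own illustrative code, not taken from the paper):

```python
import numpy as np

def gmm_j(y, X, Z, center=False):
    """Two-step GMM for a linear IV model and the J overidentification
    statistic; under correct specification J ~ chi2(q - p) asymptotically.
    center=True demeans the moment contributions before forming the
    second-step weight (Hall's centered version); center=False is
    Hansen's standard uncentered calculation."""
    n, q = Z.shape
    A = X.T @ Z / n                        # p x q Jacobian of the moments
    W1 = np.linalg.inv(Z.T @ Z / n)        # first-step (2SLS-style) weight
    b1 = np.linalg.solve(A @ W1 @ A.T, A @ W1 @ (Z.T @ y / n))
    m = Z * (y - X @ b1)[:, None]          # n x q moment contributions
    if center:
        m = m - m.mean(axis=0)             # Hall's centering
    W2 = np.linalg.inv(m.T @ m / n)        # efficient second-step weight
    b2 = np.linalg.solve(A @ W2 @ A.T, A @ W2 @ (Z.T @ y / n))
    gbar = Z.T @ (y - X @ b2) / n          # averaged moments at the estimate
    J = n * gbar @ W2 @ gbar
    return b2, J, q - X.shape[1]           # estimate, J statistic, chi2 df
```

Comparing `gmm_j(y, X, Z)` against `gmm_j(y, X, Z, center=True)` on simulated data is essentially the exercise the abstract describes, up to the size-adjustment step.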
By:  Patrick Marsh 
Abstract:  This paper generalizes the goodness-of-fit tests of Claeskens and Hjort (2004) and Marsh (2006) to the case where the hypothesis specifies only a family of distributions. Data-driven versions of these tests are based upon the Akaike and Bayesian selection criteria. The asymptotic distributions of these tests are shown to be standard, unlike those of tests based upon the empirical distribution function. Moreover, numerical evidence suggests that under the null hypothesis performance is very similar to that of tests such as the Kolmogorov-Smirnov or Anderson-Darling. However, in terms of power under the alternative, the proposed tests seem to have a consistent and significant advantage. 
Date:  2006–10 
URL:  http://d.repec.org/n?u=RePEc:yor:yorken:06/20&r=ecm 
By:  da Silva Lopes, Artur C. B. 
Abstract:  In this paper it is demonstrated by simulation that, contrary to a widely held belief, pure seasonal mean shifts (i.e., seasonal structural breaks which affect only the deterministic seasonal cycle) really do matter for Dickey-Fuller long-run unit root tests. 
Keywords:  unit roots; seasonality; Dickey-Fuller tests; structural breaks 
JEL:  C5 C22 
Date:  2005–10–15 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:125&r=ecm 
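The simulation exercise behind this abstract can be mimicked in a few lines: generate a series that is stationary around quarterly seasonal means, shift that pattern mid-sample, and run a Dickey-Fuller regression. A hand-rolled sketch (parameter values and the break design are my own, not the paper's):

```python
import numpy as np

def df_tstat(y):
    """t-statistic on rho in the Dickey-Fuller regression
    dy_t = c + rho * y_{t-1} + e_t (5% critical value is about -2.86)."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    b = np.linalg.lstsq(X, dy, rcond=None)[0]
    resid = dy - X @ b
    s2 = resid @ resid / (len(dy) - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return b[1] / se

def seasonal_shift_series(T=200, shift=3.0, rng=None):
    """Stationary AR(1) around quarterly seasonal means whose pattern
    changes at mid-sample: a pure seasonal mean shift, no unit root."""
    rng = rng or np.random.default_rng()
    means = np.array([0.0, 1.0, -1.0, 0.5])
    u = np.zeros(T)
    e = rng.standard_normal(T)
    for t in range(1, T):
        u[t] = 0.5 * u[t - 1] + e[t]
    season = np.array([means[t % 4] + (shift * (-1) ** t if t >= T // 2 else 0.0)
                       for t in range(T)])
    return season + u
```

Comparing the rejection frequency of `df_tstat(y) < -2.86` across replications with `shift=0` versus a large shift illustrates how a neglected seasonal break distorts the test, which is the paper's point.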
By:  Soiliou Namoro; WayneRoy Gayle 
Date:  2006–01 
URL:  http://d.repec.org/n?u=RePEc:pit:wpaper:251&r=ecm 
By:  William T. Gavin; Kevin L. Kliesen 
Abstract:  Decision makers, both public and private, use forecasts of economic growth and inflation to make plans and implement policies. In many situations, reasonably good forecasts can be made with simple rules of thumb that are extrapolations of a single data series. In principle, information about other economic indicators should be useful in forecasting a particular series like inflation or output. Including too many variables makes a model unwieldy and not including enough can increase forecast error. A key problem is deciding which other series to include. Recently, studies have shown that Dynamic Factor Models (DFMs) may provide a general solution to this problem. The key is that these models use a large data set to extract a few common factors (thus, the term "data-rich"). This paper uses a monthly DFM to forecast inflation and output growth at horizons of 3, 12 and 24 months ahead. These forecasts are then compared to simple forecasting rules. 
Keywords:  Inflation (Finance) ; Forecasting 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:fip:fedlwp:2006054&r=ecm 
By:  Woodcock, Simon; Benedetto, Gary 
Abstract:  One approach to limiting disclosure risk in public-use microdata is to release multiply-imputed, partially synthetic data sets. These are data on actual respondents, but with confidential data replaced by multiply-imputed synthetic values. A misspecified imputation model can invalidate inferences because the distribution of synthetic data is completely determined by the model used to generate them. We present two practical methods of generating synthetic values when the imputer has only limited information about the true data generating process. One is applicable when the true likelihood is known up to a monotone transformation. The second requires only limited knowledge of the true likelihood, but nevertheless preserves the conditional distribution of the confidential data, up to sampling error, on arbitrary subdomains. Our method maximizes data utility and minimizes incremental disclosure risk up to posterior uncertainty in the imputation model and sampling error in the estimated transformation. We validate the approach with a simulation and application to a large linked employer-employee database. 
Keywords:  statistical disclosure limitation; confidentiality; privacy; multiple imputation; partially synthetic data 
JEL:  C4 C81 
Date:  2006–09 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:155&r=ecm 
By:  Mehmet Caner 
Date:  2006–01 
URL:  http://d.repec.org/n?u=RePEc:pit:wpaper:212&r=ecm 
By:  Vargas, Gregorio A. 
Abstract:  The Block DCC model for determining dynamic correlations within and between groups of financial asset returns is extended to account for asymmetric effects. Simulation results show that the Asymmetric Block DCC model is competitive in in-sample forecasting and performs better than alternative DCC models in out-of-sample forecasting of conditional correlation in the presence of asymmetric effects between blocks of asset returns. Empirical results demonstrate that the model is able to capture the asymmetries in conditional correlations of some blocks of currencies in East Asia in the turbulent years of the late 1990s. 
Keywords:  asymmetric effect; block dynamic conditional correlation; multivariate GARCH 
JEL:  C32 G10 C5 
Date:  2006–01 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:189&r=ecm 
By:  Mehmet Caner 
Date:  2005–01 
URL:  http://d.repec.org/n?u=RePEc:pit:wpaper:211&r=ecm 
By:  Patrick Marsh 
Abstract:  Via the leading unit root case, the problem of testing on a lagged dependent variable is characterized by a nuisance parameter which is present only under the alternative (see Andrews and Ploberger (1994)). This has proven a barrier to the construction of optimal tests. Moreover, in their absence it is impossible to objectively assess the absolute power properties of existing tests. Indeed, feasible tests based upon the optimality criteria used here are found to have numerically superior power properties to both the original Dickey and Fuller (1981) statistics and the efficient detrended versions suggested by Elliott, Rothenberg and Stock (1996) and analysed in Burridge and Taylor (2000). 
Keywords:  Nuisance parameter, invariant test, unit root 
Date:  2006–10 
URL:  http://d.repec.org/n?u=RePEc:yor:yorken:06/19&r=ecm 
By:  Mehmet Caner 
Date:  2006–01 
URL:  http://d.repec.org/n?u=RePEc:pit:wpaper:210&r=ecm 
By:  Soiliou Namoro; Martin Burda; WayneRoy Gayle 
Date:  2006–01 
URL:  http://d.repec.org/n?u=RePEc:pit:wpaper:252&r=ecm 
By:  Guglielmo Maria Caporale; Christoph Hanck 
Abstract:  We analyse whether tests of PPP exhibit erratic behaviour (as previously reported by Caporale et al., 2003) even when (possibly unwarranted) homogeneity and proportionality restrictions are not imposed, and trivariate cointegration (stage-three) tests between the nominal exchange rate and the domestic and foreign price levels are carried out (instead of stationarity tests on the real exchange rate, as in stage-two tests). We examine the US dollar real exchange rate vis-à-vis 21 other currencies over a period of more than a century, and find that stage-three tests produce similar results to stage-two tests, namely the former also behave erratically. This confirms that neither of these traditional approaches can settle the question of whether PPP holds. 
Keywords:  Purchasing Power Parity (PPP), real exchange rate, cointegration, stationarity, parameter instability 
JEL:  C12 C22 F31 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:ces:ceswps:_1811&r=ecm 
By:  Stan du Plessis (Department of Economics, University of Stellenbosch) 
Abstract:  This paper argues that the sometimes-conflicting results of a modern revisionist literature on data mining in econometrics reflect different approaches to solving the central problem of model uncertainty in a science of non-experimental data. The literature has entered an exciting phase, with theoretical development, methodological reflection, considerable technological strides on the computing front and interesting empirical applications providing momentum for this branch of econometrics. The organising principle for this discussion of data mining is a philosophical spectrum that sorts the various econometric traditions according to their epistemological assumptions about the underlying data-generating process (DGP), starting with nihilism at one end and reaching claims of encompassing the DGP at the other; call it the DGP spectrum. In the course of exploring this spectrum the reader will encounter various Bayesian, specific-to-general (SG) as well as general-to-specific (GS) methods. To set the stage for this exploration, the paper starts with a description of data mining, its potential risks, and a short section on potential institutional safeguards against these problems. 
Keywords:  Data mining, model selection, automated model selection, general to specific modelling, extreme bounds analysis, Bayesian model selection 
JEL:  C11 C50 C51 C52 C87 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:sza:wpaper:wpapers29&r=ecm 
By:  Mehmet Caner 
Date:  2005–01 
URL:  http://d.repec.org/n?u=RePEc:pit:wpaper:208&r=ecm 
By:  Mehmet Caner 
Date:  2005–01 
URL:  http://d.repec.org/n?u=RePEc:pit:wpaper:209&r=ecm 
By:  Patrick J. Kehoe 
Abstract:  The common approach to evaluating a model in the structural VAR literature is to compare the impulse responses from structural VARs run on the data to the theoretical impulse responses from the model. The Sims-Cogley-Nason approach instead compares the structural VARs run on the data to identical structural VARs run on data from the model of the same length as the actual data. Chari, Kehoe, and McGrattan (2006) argue that the inappropriate comparison made by the common approach is the root of the problems in the SVAR literature. In practice, the problems can be solved simply. Switching from the common approach to the Sims-Cogley-Nason approach basically involves changing a few lines of computer code and a few lines of text. This switch will vastly increase the value of the structural VAR literature for economic theory. 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:fip:fedmsr:379&r=ecm 
By:  Mehmet Caner; Dan Berkowitz; Ying Fang 
Date:  2006–01 
URL:  http://d.repec.org/n?u=RePEc:pit:wpaper:207&r=ecm 