
on Econometrics 
By:  Søren Johansen (Department of Economics, University of Copenhagen); Morten Ørregaard Nielsen (Cornell University) 
Abstract:  This paper discusses model-based inference in an autoregressive model for fractional processes based on the Gaussian likelihood. The model allows for the process to be fractional of order d or d − b, where d ≥ b > 1/2 are parameters to be estimated. We model the data X_1, …, X_T given the initial values X_{−n}, n = 0, 1, …, under the assumption that the errors are i.i.d. Gaussian. We consider the likelihood and its derivatives as stochastic processes in the parameters, and prove that they converge in distribution when the errors are i.i.d. with suitable moment conditions and the initial values are bounded. We use this to prove existence and consistency of the local likelihood estimator, and to find the asymptotic distribution of the estimators and of the likelihood ratio test of the associated fractional unit root hypothesis, which contains the fractional Brownian motion of type II. 
Keywords:  Dickey-Fuller test; fractional unit root; likelihood inference 
JEL:  C22 
Date:  2007–08 
URL:  http://d.repec.org/n?u=RePEc:kud:kuiedp:0727&r=ecm 
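A type II fractional process of the kind analysed in this paper can be simulated by truncating the binomial expansion of (1 − L)^(−d), setting pre-sample shocks to zero. A minimal numpy sketch (the function names and the choice of i.i.d. standard normal errors are illustrative, not from the paper):

```python
import numpy as np

def frac_coeffs(d, n):
    """Coefficients pi_j of the expansion (1 - L)^(-d) = sum_j pi_j L^j,
    via the recursion pi_0 = 1, pi_j = pi_{j-1} * (d + j - 1) / j."""
    pi = np.empty(n)
    pi[0] = 1.0
    for j in range(1, n):
        pi[j] = pi[j - 1] * (d + j - 1) / j
    return pi

def simulate_type2(d, T, rng):
    """Type II fractional process: X_t = sum_{j=0}^{t} pi_j eps_{t-j},
    i.e. shocks before time 0 are set to zero (bounded initial values)."""
    eps = rng.standard_normal(T)
    pi = frac_coeffs(d, T)
    return np.array([pi[:t + 1] @ eps[t::-1] for t in range(T)])
```

For d > 1/2 the simulated paths are nonstationary, which is the parameter region the likelihood analysis above is concerned with.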
By:  Antonio Diez de los Rios; Enrique Sentana 
Abstract:  Nowadays researchers can choose the sampling frequency of exchange rates and interest rates. If the number of observations per contract period is large relative to the sample size, standard GMM asymptotic theory provides unreliable inferences in UIP regression tests. We specify a bivariate continuous-time model for exchange rates and forward premia robust to temporal aggregation, unlike the discrete-time models in the literature. We obtain the UIP restrictions on the continuous-time model parameters, which we estimate efficiently, and propose a novel specification test that compares estimators at different frequencies. Our empirical results based on correctly specified models reject UIP. 
Keywords:  Exchange rates; Econometric and statistical methods 
JEL:  F31 G15 
Date:  2007 
URL:  http://d.repec.org/n?u=RePEc:bca:bocawp:0753&r=ecm 
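The UIP regression at the heart of such tests can be sketched in a few lines. The data-generating process below is a stand-in for illustration only (the AR(1) premium and all parameter values are assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000
# persistent forward premium, simulated as an AR(1) process
f = np.zeros(T)
for t in range(1, T):
    f[t] = 0.9 * f[t - 1] + 0.1 * rng.standard_normal()
# under the UIP null, expected depreciation equals the premium (slope = 1)
ds = f + rng.standard_normal(T)
X = np.column_stack([np.ones(T), f])
alpha_hat, beta_hat = np.linalg.lstsq(X, ds, rcond=None)[0]
```

The paper's point is that when data are sampled many times per contract period, the induced error structure invalidates standard GMM inference on the slope; the continuous-time formulation is robust to this temporal aggregation.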
By:  David F. Hendry (Oxford University); Søren Johansen (Department of Economics, University of Copenhagen); Carlos Santos (Portuguese Catholic University) 
Abstract:  We consider selecting a regression model, using a variant of Gets, when there are more variables than observations, in the special case that the variables are impulse dummies (indicators) for every observation. We show that the setting is unproblematic if tackled appropriately, and obtain the finite-sample distribution of estimators of the mean and variance in a simple location-scale model under the null that no impulses matter. A Monte Carlo simulation confirms the null distribution, and shows power against an alternative of interest. 
Keywords:  indicators; regression saturation; subset selection; model selection 
JEL:  C51 C22 
Date:  2007–08 
URL:  http://d.repec.org/n?u=RePEc:kud:kuiedp:0726&r=ecm 
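In the location-scale case, adding an impulse dummy for observation t and testing its coefficient amounts to testing the standardized residual of x_t against parameters estimated from the other observations. A split-half sketch of the idea (the function name and the critical value are illustrative choices, not the paper's):

```python
import numpy as np

def impulse_saturate(x, crit=2.58):
    """Split-half impulse-indicator saturation in a location-scale model:
    estimate mean and sd from one half, test each observation in the other
    half, swap the halves, and pool the retained indicators."""
    T = len(x)
    first, second = np.arange(T // 2), np.arange(T // 2, T)
    retained = []
    for est, test in ((first, second), (second, first)):
        mu = x[est].mean()
        sd = x[est].std(ddof=1)
        retained += [t for t in test if abs(x[t] - mu) / sd > crit]
    return sorted(retained)
```

Under the null that no impulses matter, the expected number of retained indicators is roughly T times the two-sided tail probability of the criterion, which is why the setting is unproblematic despite having more regressors than observations.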
By:  Gregory Connor; Oliver Linton; Matthias Hagmann 
Abstract:  This paper develops a new estimation procedure for characteristic-based factor models of security returns. We treat the factor model as a weighted additive nonparametric regression model, with the factor returns serving as time-varying weights, and a set of univariate nonparametric functions relating security characteristics to the associated factor betas. We use a time-series and cross-sectional pooled weighted additive nonparametric regression methodology to simultaneously estimate the factor returns and characteristic-beta functions. By avoiding the curse of dimensionality our methodology allows for a larger number of factors than existing semiparametric methods. We apply the technique to the three-factor Fama-French model, Carhart's four-factor extension of it adding a momentum factor, and a five-factor extension adding an own-volatility factor. We find that momentum and own-volatility factors are at least as important, if not more important, than size and value in explaining equity return comovements. We test the multifactor beta pricing theory against the Capital Asset Pricing Model using a standard test, and against a general alternative using a new nonparametric test. 
Keywords:  Additive models; Arbitrage pricing theory; Factor model; Fama-French; Kernel estimation; Nonparametric regression; Panel data 
JEL:  G12 C14 
Date:  2007–09 
URL:  http://d.repec.org/n?u=RePEc:fmg:fmgdps:dp599&r=ecm 
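A crude finite-dimensional version of this idea replaces the smooth characteristic-beta function by characteristic bins, so each period's cross-sectional bin means estimate beta(c_k) times that period's factor return. The simulation below is purely illustrative (all names, the sqrt shape, and the two-step covariance shortcut are assumptions, not the paper's backfitting estimator):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, K = 500, 600, 10
c = rng.uniform(0.0, 1.0, N)            # security characteristic
beta_true = np.sqrt(c)                  # assumed characteristic-beta function
f = 0.05 * rng.standard_normal(T)       # latent factor returns
R = np.outer(f, beta_true) + 0.02 * rng.standard_normal((T, N))

# bin the characteristic; per-period bin means estimate beta(c_k) * f_t
bins = np.minimum((c * K).astype(int), K - 1)
G = np.array([[R[t, bins == k].mean() for k in range(K)] for t in range(T)])
# up to scale, covariances with a reference bin recover the beta shape
b_hat = np.array([np.cov(G[:, k], G[:, K - 1])[0, 1] for k in range(K)])
b_hat /= b_hat[K - 1]
```

The estimated shape b_hat tracks the true function up to scale; the paper's weighted additive nonparametric regression estimates the factor returns and the smooth beta functions jointly rather than by this two-step shortcut.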
By:  Kalogeropoulos, Konstantinos; Roberts, Gareth O.; Dellaportas, Petros 
Abstract:  We address the problem of parameter estimation for diffusion-driven stochastic volatility models through Markov chain Monte Carlo (MCMC). To avoid degeneracy issues we introduce an innovative reparametrisation defined through transformations that operate on the time scale of the diffusion. A novel MCMC scheme which overcomes the inherent difficulties of time-change transformations is also presented. The algorithm is fast to implement and applies to models with stochastic volatility. The methodology is tested through simulation-based experiments and illustrated on data consisting of US treasury bill rates. 
Keywords:  Imputation; Markov chain Monte Carlo; Stochastic volatility 
JEL:  C13 G12 C15 C11 
Date:  2007 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:5697&r=ecm 
By:  Kalogeropoulos, Konstantinos; Dellaportas, Petros; Roberts, Gareth O. 
Abstract:  We address the problem of likelihood-based inference for correlated diffusion processes using Markov chain Monte Carlo (MCMC) techniques. Such a task presents two interesting problems. First, the construction of the MCMC scheme should ensure that the correlation coefficients are updated subject to the positive definite constraints of the diffusion matrix. Second, a diffusion may only be observed at a finite set of points and the marginal likelihood for the parameters based on these observations is generally not available. We overcome the first issue by using the Cholesky factorisation of the diffusion matrix. To deal with the likelihood unavailability, we generalise the data augmentation framework of Roberts and Stramer (2001, Biometrika 88(3): 603–621) to d-dimensional correlated diffusions, including multivariate stochastic volatility models. Our methodology is illustrated through simulation-based experiments and with daily EUR/USD and GBP/USD rates together with their implied volatilities. 
Keywords:  Markov chain Monte Carlo; Multivariate stochastic volatility; Multivariate CIR model; Cholesky Factorisation. 
JEL:  C13 G12 C15 C11 
Date:  2007 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:5696&r=ecm 
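The Cholesky device used for the first issue can be sketched directly: parameterise the diffusion matrix by an unconstrained vector mapped to a lower-triangular factor with positive diagonal, so every MCMC update yields a valid positive-definite matrix. The log-diagonal transform below is one common choice, assumed here for illustration:

```python
import numpy as np

def theta_to_cov(theta, d):
    """Map an unconstrained vector of length d(d+1)/2 to a symmetric
    positive-definite d x d matrix via its Cholesky factor; the diagonal
    entries are exponentiated to keep them strictly positive."""
    L = np.zeros((d, d))
    L[np.tril_indices(d)] = theta
    L[np.diag_indices(d)] = np.exp(np.diag(L))
    return L @ L.T
```

An MCMC sampler can then make unconstrained proposals on theta, and positive definiteness of the implied diffusion matrix is automatic.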
By:  Søren Johansen (Department of Economics, University of Copenhagen) 
Abstract:  Yule (1926) introduced the concept of spurious or nonsense correlation, and showed by simulation that for some nonstationary processes the empirical correlations seem not to converge in probability even if the processes are independent. This was later discussed by Granger and Newbold (1974), and Phillips (1986) found the limit distributions. We propose to distinguish between empirical and population correlation coefficients, and show in a bivariate autoregressive model for nonstationary variables that the empirical correlation and regression coefficients do not converge to the relevant population values, due to the trending nature of the data. We conclude by giving a simple cointegration analysis of two interest rates. The analysis illustrates that much more insight can be gained about the dynamic behavior of the nonstationary variables than by simply calculating a correlation coefficient. 
JEL:  C22 
Date:  2007–11 
URL:  http://d.repec.org/n?u=RePEc:kud:kuiedp:0725&r=ecm 
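Yule's phenomenon is easy to reproduce: regressing one random walk on an independent one yields t-statistics that diverge with the sample size, so the usual 5% test rejects far too often. A minimal Monte Carlo sketch (sample size, replication count and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)
T, reps = 200, 500
rejections = 0
for _ in range(reps):
    # two INDEPENDENT random walks
    x, y = rng.standard_normal((2, T)).cumsum(axis=1)
    X = np.column_stack([np.ones(T), x])
    b, res = np.linalg.lstsq(X, y, rcond=None)[:2]
    s2 = res[0] / (T - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    rejections += abs(b[1]) / se > 1.96
rejection_rate = rejections / reps
```

The rejection rate of the nominal 5% test is far above 0.05, illustrating why the empirical correlation of trending series says little about the population relationship; a cointegration analysis models the trends explicitly instead.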
By:  Casey Quinn 
Abstract:  This paper considers the simultaneous explanation of mortality risk, health and lifestyles, using a reduced-form system of equations in which the multivariate distribution is defined by the copula. A copula approximation of the joint distribution allows one to avoid usually implicit distributional assumptions, allowing potentially more robust and efficient estimates to be retrieved. By applying the theory of inference functions, the parameters of each lifestyle, health and mortality equation can be estimated separately from the parameters of association found in their joint distribution, simplifying analysis considerably. The use of copulas also enables estimation of skewed multivariate distributions for the latent variables in a multivariate model of discrete response variables. This flexibility provides more precise estimates with more appropriate distributional assumptions, but presents explicit trade-offs during analysis. Information that can be retrieved concerning distributional assumptions, skewness and tail dependence requires prioritisation, such that different needs could generate a different 'best' model even for the same data. 
Keywords:  health, lifestyle, mortality, multivariate models, copulas, inference functions. 
JEL:  C1 C3 I1 
Date:  2006–07 
URL:  http://d.repec.org/n?u=RePEc:yor:hectdg:06/05&r=ecm 
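The basic copula construction is short to write down: generate dependence on the uniform scale, then bind arbitrary margins through their inverse CDFs. A Gaussian-copula sketch with exponential margins (the choice of margins and all rates are illustrative assumptions):

```python
import math
import numpy as np

def std_normal_cdf(z):
    """Standard normal CDF, elementwise, via math.erf."""
    return np.array([0.5 * (1.0 + math.erf(v / math.sqrt(2.0))) for v in z])

def gaussian_copula_exponential(rho, n, rng):
    """Draw n pairs whose dependence is a Gaussian copula with parameter rho
    and whose margins are Exp(1) and Exp(2)."""
    z1 = rng.standard_normal(n)
    z2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * rng.standard_normal(n)
    u1, u2 = std_normal_cdf(z1), std_normal_cdf(z2)
    # inverse CDF of Exp(rate): -log(1 - u) / rate
    return -np.log1p(-u1), -np.log1p(-u2) / 2.0
```

The dependence parameter and the marginal parameters can then be estimated separately, which is the simplification the theory of inference functions delivers in the abstract above.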
By:  Jan Jacobs; Pieter Otter; Ard den Reijer 
Abstract:  This paper employs concepts from information theory to choose the dimension of a data set. We calculate relative measures of information in the data in terms of eigenvalues and derive criteria to determine the `optimal' size of the data set, in particular whether an extra variable adds information. The methods can be used as a first step in the construction of a dynamic factor model or a leading index, as illustrated with a macroeconomic data set on The Netherlands. 
Keywords:  information; data set dimension; dynamic factor models; leading index. 
JEL:  C32 C52 C82 
Date:  2007–10 
URL:  http://d.repec.org/n?u=RePEc:dnb:dnbwpp:150&r=ecm 
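The eigenvalue-based measures can be sketched as follows: information shares are eigenvalue shares of the correlation matrix, and the data-set dimension is the smallest number of components whose cumulative share clears a threshold. The 0.9 threshold and function names are illustrative choices, not the paper's criteria:

```python
import numpy as np

def information_shares(X):
    """Relative information per principal component: eigenvalue shares of
    the correlation matrix of X (observations in rows)."""
    ev = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
    ev = np.sort(ev)[::-1]
    return ev / ev.sum()

def choose_dimension(X, threshold=0.9):
    """Smallest k whose leading k eigenvalue shares sum past the threshold."""
    return int(np.searchsorted(np.cumsum(information_shares(X)), threshold)) + 1
```

Whether an extra variable adds information can then be judged by how the shares change when it is appended to the data set.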
By:  Andrew M. Jones 
Abstract:  Much of the empirical analysis done by health economists seeks to estimate the impact of specific health policies, and the greatest challenge for successful applied work is to find appropriate sources of variation to identify the treatment effects of interest. Estimation can be prone to selection bias when assignment to treatments is associated with the potential outcomes of the treatment. Overcoming this bias requires variation in the assignment of treatments that is independent of the outcomes. One source of independent variation comes from randomised controlled experiments. But, in practice, most economic studies have to draw on non-experimental data. Many studies seek to use variation across time and events in the form of a quasi-experimental design, or “natural experiment”, that mimics the features of a genuine experiment. This chapter reviews the data and methods used in applied health economics, with a particular emphasis on the use of panel data. The focus is on nonlinear models and methods that can accommodate unobserved heterogeneity. These include conditional estimators, maximum simulated likelihood, Bayesian MCMC, finite mixtures and copulas. 
Date:  2007–08 
URL:  http://d.repec.org/n?u=RePEc:yor:hectdg:07/18&r=ecm 
By:  Søren Johansen (Department of Economics, University of Copenhagen) 
Abstract:  An analysis of some identification problems in the cointegrated VAR is given. We give a new criterion for identification by linear restrictions on individual relations, which is equivalent to the rank condition. We compare the asymptotic distribution of the estimators of α and β when they are identified by linear restrictions on β, and when they are identified by linear restrictions on α, in which case a component of β̂ is asymptotically Gaussian. Finally, we discuss identification of shocks by introducing the contemporaneous and permanent effects of a shock and the distinction between permanent and transitory shocks, which allows one to identify permanent shocks from the long-run variance and transitory shocks from the short-run variance. 
Keywords:  identification; cointegration; common trends 
JEL:  C32 
Date:  2007–10 
URL:  http://d.repec.org/n?u=RePEc:kud:kuiedp:0724&r=ecm 
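The rank condition for identification by linear restrictions can be checked numerically. Writing the i-th relation as β_i = H_i φ_i, the condition requires that for each i and every subset S of the other indices, rank(R_i' [H_j : j in S]) ≥ |S|, where R_i spans the orthogonal complement of H_i. A sketch of that check (the function names are mine, and this is the textbook condition rather than code from the paper):

```python
import itertools
import numpy as np

def orth_complement(A, tol=1e-10):
    """Orthonormal basis R with R'A = 0, from the SVD of A."""
    u, s, _ = np.linalg.svd(A)
    rank = int((s > tol).sum())
    return u[:, rank:]

def rank_condition(H):
    """Check the rank condition for beta_i = H_i phi_i, i = 1..r:
    for each i and every nonempty subset S of the remaining indices,
    rank(R_i' [H_j, j in S]) must be at least |S|."""
    r = len(H)
    for i in range(r):
        R = orth_complement(H[i])
        others = [j for j in range(r) if j != i]
        for k in range(1, r):
            for S in itertools.combinations(others, k):
                M = R.T @ np.hstack([H[j] for j in S])
                if np.linalg.matrix_rank(M) < k:
                    return False
    return True
```

For example, with p = 3 the restrictions β_1 ∝ (1, −1, 0)' and β_2 ∝ (0, 1, −1)' satisfy the condition, while imposing the same restriction set on both relations does not.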
By:  Samuel Zita (Department of Economics, University of Pretoria); Rangan Gupta (Department of Economics, University of Pretoria) 
Abstract:  The paper uses a Gibbs sampling technique to estimate a heteroscedastic Bayesian Vector Error Correction Model (BVECM) of the South African economy for the period 1970:1–2000:4, and then forecasts GDP, consumption, investment, short- and long-term interest rates, and the CPI over the period 2001:1 to 2005:4. We find that a tight prior produces relatively more accurate forecasts than a loose one. The out-of-sample forecast accuracy resulting from the Gibbs-sampled BVECM is compared with those generated from a classical VECM and a homoscedastic BVECM. The homoscedastic BVECM is found to produce the most accurate out-of-sample forecasts. 
Keywords:  Forecast Accuracy, Metical-Rand Exchange Rate, Random Walk, Sticky-Price Model, VAR Forecasts, VECM Forecasts 
JEL:  B23 C22 F31 E17 E27 E37 E47 
Date:  2007–02 
URL:  http://d.repec.org/n?u=RePEc:pre:wpaper:200702&r=ecm 
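The Gibbs idea underlying such estimation is to alternate draws from each full conditional distribution. A minimal sketch for a single regression equation with a flat prior on the coefficients and p(s2) proportional to 1/s2 — a deliberately stripped-down stand-in for the heteroscedastic BVECM, not the paper's model:

```python
import numpy as np

def gibbs_regression(y, X, draws=2000, rng=None):
    """Gibbs sampler for y = X b + e, e ~ N(0, s2 I), alternating
    b | s2 ~ N(b_ols, s2 (X'X)^{-1}) and s2 | b ~ IG(n/2, SSR(b)/2)."""
    rng = rng or np.random.default_rng(0)
    n = len(y)
    XtX_inv = np.linalg.inv(X.T @ X)
    b_ols = XtX_inv @ X.T @ y
    s2, out = 1.0, []
    for _ in range(draws):
        b = rng.multivariate_normal(b_ols, s2 * XtX_inv)
        ssr = float(((y - X @ b) ** 2).sum())
        s2 = ssr / (2.0 * rng.gamma(n / 2.0))  # inverse-gamma draw
        out.append(b)
    return np.array(out)
```

In the BVECM setting the same alternation runs over the system's coefficient blocks, with the prior precision (e.g. from a tight or loose prior) entering the conditional for the coefficients.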
By:  Pötscher, Benedikt M. 
Abstract:  Confidence sets based on sparse estimators are shown to be large compared to more standard confidence sets, demonstrating that sparsity of an estimator comes at a substantial price in terms of the quality of the estimator. The results are set in a general parametric or semiparametric framework. 
Keywords:  sparse estimator; consistent model selection; post-model-selection estimator; penalized maximum likelihood; confidence set; coverage probability 
JEL:  C44 C1 
Date:  2007–08 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:5677&r=ecm 
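The phenomenon can be seen in the simplest case: hard-thresholding the sample mean gives a sparse estimator, and the naive confidence interval centred at it fails badly at 1/sqrt(n) alternatives near the threshold. A Monte Carlo sketch (the threshold, sample size and seed are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(7)
n, reps, c, z = 100, 4000, 3.0, 1.96

def coverage(theta):
    """Coverage of the naive interval est +- z/sqrt(n), where est is the
    hard-thresholded sample mean (set to 0 when |mean| <= c/sqrt(n))."""
    xbar = theta + rng.standard_normal(reps) / np.sqrt(n)
    est = np.where(np.abs(xbar) > c / np.sqrt(n), xbar, 0.0)
    return float(np.mean(np.abs(est - theta) <= z / np.sqrt(n)))

cov_at_zero = coverage(0.0)
cov_local = coverage(c / np.sqrt(n))   # a 1/sqrt(n) alternative
```

Coverage is fine at the sparse point but collapses nearby, which is the price of sparsity the paper quantifies in general parametric and semiparametric settings; honest confidence sets must therefore be much larger than the standard ones.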
By:  BONTEMPS, Christian; MEDDAHI, Nour 
JEL:  C12 C15 
Date:  2007–10 
URL:  http://d.repec.org/n?u=RePEc:ide:wpaper:5705&r=ecm 
By:  Nikias Sarafoglou; Jean H.P. Paelinck 
Abstract:  Spatial econometrics is a fast-growing field among the quantitative disciplines auxiliary to economics and related social sciences. Space, friction, interdependence, spatio-temporal components, externalities and many other aspects interact and should be treated adequately in this field. The publication of the Paelinck and Klaassen book in the late 1970s virtually generated the field of spatial econometrics. This article studies the diffusion of spatial econometrics, through experienced history on the one hand and through bibliometric methods on the other. Although this field was an “Invisible College” up to 2006 (absence of any organization in the form of an association, conference, journal, etc.), the databases depict a fast diffusion in the past and strong prospects for the future. 
JEL:  B2 B4 C4 C5 R1 
Date:  2007–09 
URL:  http://d.repec.org/n?u=RePEc:usi:wpaper:514&r=ecm 
By:  Özer Karagedikli; Troy Matheson; Christie Smith; Shaun P. Vahey (Reserve Bank of New Zealand) 
Abstract:  Real Business Cycle (RBC) and Dynamic Stochastic General Equilibrium (DSGE) methods have become essential components of the macroeconomist’s toolkit. This literature review stresses recently developed (often Bayesian) techniques for computation and inference, providing a supplement to the Romer (2006) textbook treatment which stresses theoretical issues. Many computational aspects are illustrated with reference to the simple divisible labour RBC model familiar to graduate students from King, Plosser and Rebelo (1988), Christiano and Eichenbaum (1992), Campbell (1994) and Romer (2006). Code and US data to replicate the computations are provided on the Internet, together with a number of appendices providing background details. 
JEL:  C11 C22 E17 E32 E52 
Date:  2007–11 
URL:  http://d.repec.org/n?u=RePEc:nzb:nzbdps:2007/15&r=ecm 
By:  Casey Quinn 
Abstract:  A copula is best described, as in Joe (1997), as a multivariate distribution function that is used to bind each marginal distribution function to form the joint. The copula parameterises the dependence between the margins, while the parameters of each marginal distribution function can be estimated separately. This is a brief introduction to copulas and multivariate dependence issues within a health economics context. The research presented here will make its own contributions to the development of copulas as a methodology, but more importantly will make deliberate inroads into health economic applications of copulas. To do this, common analytic problems faced by health economists are considered. Some of the differences between the copula methodology and existing alternatives are discussed, and a generalisable, systematic approach to estimation is provided. 
JEL:  C1 C3 C5 I3 I10 
Date:  2007–10 
URL:  http://d.repec.org/n?u=RePEc:yor:hectdg:07/22&r=ecm 
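The two-stage logic — estimate each margin separately, then the dependence parameter — can be sketched with a Clayton copula and exponential margins. The moment-based Kendall's-tau inversion below is a simple stand-in for the likelihood-based inference-functions approach, and all parameter values are illustrative:

```python
import numpy as np

def sample_clayton(theta, n, rng):
    """Clayton copula draws via the Marshall-Olkin mixture:
    U_i = (1 + E_i / V)^(-1/theta), V ~ Gamma(1/theta), E_i ~ Exp(1)."""
    v = rng.gamma(1.0 / theta, 1.0, n)
    e = rng.exponential(1.0, (2, n))
    return (1.0 + e / v) ** (-1.0 / theta)

def kendall_tau(x, y):
    """Sample Kendall's tau (O(n^2), fine for a sketch)."""
    n = len(x)
    s = 0.0
    for i in range(n):
        s += np.sum(np.sign(x[i + 1:] - x[i]) * np.sign(y[i + 1:] - y[i]))
    return 2.0 * s / (n * (n - 1))

rng = np.random.default_rng(3)
u = sample_clayton(2.0, 400, rng)
x = -np.log1p(-u) / np.array([[1.0], [0.5]])  # Exp(1) and Exp(0.5) margins
rate_hat = 1.0 / x.mean(axis=1)               # stage 1: margins separately
tau = kendall_tau(x[0], x[1])                 # rank-based, margin-free
theta_hat = 2.0 * tau / (1.0 - tau)           # stage 2: tau = theta/(theta+2)
```

Because Kendall's tau is rank-based, the dependence estimate is unaffected by the marginal specification, which is exactly the separation between margins and copula that the abstract emphasises.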