
on Econometric Time Series 
By:  Attfield, Clifford; Temple, Jonathan 
Abstract:  Standard macroeconomic models suggest that the ‘great ratios’ of consumption to output and investment to output should be stationary. The joint behaviour of consumption, investment and output can then be used to measure trend output. We adopt this approach for the USA and UK, and find support for stationarity of the great ratios when structural breaks are taken into account. From the estimated vector error correction models, we extract multivariate estimates of the permanent component in output, and comment on trend growth in the 1980s and the New Economy boom of the 1990s. 
Keywords:  great ratios; New Economy; permanent components; structural breaks; trend output 
JEL:  C32 C51 E20 E30 
Date:  2004–12 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:4796&r=ets 
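The paper's permanent components come from an estimated multivariate VECM; as a minimal illustrative sketch of the underlying idea (not the authors' procedure), the following computes a univariate Beveridge-Nelson permanent component for an AR(1) model of growth rates, with all parameter values invented for the example:

```python
import random

def bn_permanent(y, phi, mu):
    # Beveridge-Nelson: if growth rates follow dy_t - mu = phi*(dy_{t-1} - mu) + e_t,
    # the permanent component adds the forecastable long-run growth deviation
    # phi/(1-phi)*(dy_t - mu) to the current level.
    tau = []
    for t in range(1, len(y)):
        dy = y[t] - y[t - 1]
        tau.append(y[t] + phi / (1.0 - phi) * (dy - mu))
    return tau

random.seed(1)
phi, mu, n = 0.5, 0.02, 2000   # assumed values for the sketch
dy, y = mu, [0.0]
for _ in range(n):
    dy = mu + phi * (dy - mu) + random.gauss(0.0, 0.01)
    y.append(y[-1] + dy)

tau = bn_permanent(y, phi, mu)
cycle = [yt - tt for yt, tt in zip(y[1:], tau)]
mean_cycle = sum(cycle) / len(cycle)
```

The transitory component (level minus permanent component) should average out to roughly zero over a long sample, which is what makes the permanent component a usable measure of trend output.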
By:  Gillman, Max; Nakov, Anton 
Abstract:  The paper presents a model in which the exogenous money supply causes changes in the inflation rate and the output growth rate. While inflation and growth-rate changes occur simultaneously, inflation acts as a tax on the return to human capital and in this sense induces the decrease in the growth rate. Shifts in the model’s credit-sector productivity cause shifts in the income velocity of money that can break the otherwise stable relation between money, inflation, and output growth. Applied to two accession countries, Hungary and Poland, a VAR system is estimated for each that incorporates endogenously determined multiple structural breaks. Results indicate positive Granger causality from money to inflation and negative Granger causality from inflation to growth for both Hungary and Poland, as the model suggests, although there is some feedback to money for Poland. Three structural breaks are found for each country; these are linked to changes in velocity trends and to the breaks found in the other country. 
Keywords:  Granger causality; growth; inflation; structural breaks; transition; VAR; velocity 
JEL:  C22 E31 O42 
Date:  2005–01 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:4845&r=ets 
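The Granger-causality tests reported above can be illustrated with a toy bivariate system; this sketch (simulated data, invented coefficients, pure Python) fits restricted and unrestricted lag regressions by OLS and forms the usual F-statistic for "money growth Granger-causes inflation":

```python
import random

def ols_ssr(y, X):
    # OLS via the normal equations (Gaussian elimination; X'X here is
    # well-conditioned, so no pivoting is needed); returns the residual SSR.
    k, n = len(X[0]), len(y)
    A = [[sum(X[t][i] * X[t][j] for t in range(n)) for j in range(k)] for i in range(k)]
    b = [sum(X[t][i] * y[t] for t in range(n)) for i in range(k)]
    for i in range(k):
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            for c in range(k):
                A[j][c] -= f * A[i][c]
            b[j] -= f * b[i]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return sum((y[t] - sum(X[t][j] * beta[j] for j in range(k))) ** 2 for t in range(n))

random.seed(2)
n = 500
m, p = [0.0, 0.0], [0.0, 0.0]        # money growth and inflation, two presample values
for _ in range(n):
    m.append(0.6 * m[-1] + random.gauss(0, 1))
    p.append(0.4 * p[-1] + 0.5 * m[-2] + random.gauss(0, 1))   # money leads inflation

yv = p[2:]
Xr = [[1.0, p[t - 1]] for t in range(2, len(p))]             # restricted: own lag only
Xu = [[1.0, p[t - 1], m[t - 1]] for t in range(2, len(p))]   # adds lagged money growth
ssr_r, ssr_u = ols_ssr(yv, Xr), ols_ssr(yv, Xu)
F = (ssr_r - ssr_u) / (ssr_u / (len(yv) - 3))                # one restriction
```

A large F rejects the null that lagged money adds nothing; the paper runs the analogous tests inside a VAR with endogenously dated breaks.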
By:  Del Negro, Marco; Schorfheide, Frank; Smets, Frank; Wouters, Rafael 
Abstract:  The paper provides new tools for the evaluation of DSGE models and applies them to a large-scale New Keynesian dynamic stochastic general equilibrium (DSGE) model with price and wage stickiness and capital accumulation. Specifically, we approximate the DSGE model by a vector autoregression (VAR), and then systematically relax the implied cross-equation restrictions. Let delta denote the extent to which the restrictions are relaxed. We document how the in-sample and out-of-sample fit of the resulting specification (DSGE-VAR) changes as a function of delta. Furthermore, we learn about the precise nature of the misspecification by comparing the DSGE model’s impulse responses to structural shocks with those of the best-fitting DSGE-VAR. We find that the degree of misspecification in large-scale DSGE models is no longer so large as to prevent their use in day-to-day policy analysis, yet it is not so small that it can be ignored. 
Keywords:  Bayesian Analysis; DSGE models; model evaluation; vector autoregression 
JEL:  C11 C32 C53 
Date:  2005–01 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:4848&r=ets 
By:  van Tol, Michel R; Wolff, Christian C 
Abstract:  In this paper we develop a multivariate threshold vector error correction model of spot and forward exchange rates that allows for different forms of equilibrium reversion in each of the cointegrating residual series. By introducing the notion of an indicator matrix to differentiate between the various regimes in the set of nonlinear processes, we provide a convenient framework for estimation by OLS. Empirically, out-of-sample forecasting exercises demonstrate its superiority over a linear VECM, while being unable to outpredict a (driftless) random walk model. As such we provide empirical evidence against the findings of Clarida and Taylor (1997). 
Keywords:  foreign exchange; multivariate threshold cointegration; TAR models 
JEL:  C51 C53 F31 
Date:  2005–03 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:4958&r=ets 
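The indicator-matrix device that makes OLS estimation convenient can be sketched in a univariate threshold error-correction setting: the regime indicator splits the lagged regressor into two series with disjoint support, so OLS reduces to per-regime regressions. All numbers below are assumptions for illustration:

```python
import random

random.seed(3)
c = 1.0                 # threshold (assumed known here; the paper estimates regimes)
z = [0.0]               # cointegrating residual
for _ in range(3000):
    zl = z[-1]
    rho = -0.5 if abs(zl) > c else -0.05    # strong reversion only outside the band
    z.append(zl + rho * zl + random.gauss(0, 0.5))

# The indicator splits z_{t-1} into regressors with disjoint support, so OLS
# on the indicator-weighted regressors separates into per-regime regressions.
out = [(z[t - 1], z[t] - z[t - 1]) for t in range(1, len(z)) if abs(z[t - 1]) > c]
inn = [(z[t - 1], z[t] - z[t - 1]) for t in range(1, len(z)) if abs(z[t - 1]) <= c]
rho_out = sum(x * d for x, d in out) / sum(x * x for x, d in out)
rho_in = sum(x * d for x, d in inn) / sum(x * x for x, d in inn)
```

The outer-regime adjustment coefficient comes out strongly negative while the inner one is near zero, which is the kind of regime-dependent equilibrium reversion the model is built to capture.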
By:  Bams, Dennis; Lehnert, Thorsten; Wolff, Christian C 
Abstract:  In this paper, we investigate the importance of different loss functions when estimating and evaluating option pricing models. Our analysis shows that it is important to take into account parameter uncertainty, since this leads to uncertainty in the predicted option price. We illustrate the effect on the out-of-sample pricing errors in an application of the ad hoc Black-Scholes model to DAX index options. Our empirical results suggest that different loss functions lead to uncertainty about the pricing error itself. At the same time, this uncertainty provides a first yardstick to evaluate the adequacy of the loss function. This is accomplished through a data-driven method that delivers not just a point estimate of the pricing error, but a confidence interval. 
Keywords:  estimation risk; GARCH; implied volatility; loss functions; option pricing 
JEL:  G12 
Date:  2005–03 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:4960&r=ets 
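To see how the choice of loss function enters estimation, here is a hedged sketch: "market" call prices are generated from a plain Black-Scholes model plus noise (not the paper's ad hoc variant or DAX data), and the volatility parameter is recovered by minimising a squared-dollar versus an absolute-dollar loss over a grid; all values are invented:

```python
import math, random

def bs_call(S, K, T, r, sigma):
    # Black-Scholes European call price
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    Phi = lambda v: 0.5 * (1.0 + math.erf(v / math.sqrt(2.0)))
    return S * Phi(d1) - K * math.exp(-r * T) * Phi(d2)

random.seed(4)
S, T, r, true_sigma = 100.0, 0.5, 0.02, 0.2
strikes = [80 + 5 * i for i in range(9)]                  # strikes 80..120
market = [bs_call(S, K, T, r, true_sigma) + random.gauss(0, 0.05) for K in strikes]

grid = [0.10 + 0.005 * i for i in range(41)]              # candidate volatilities
def fit(loss):
    # choose sigma minimising the aggregate pricing-error loss
    return min(grid, key=lambda s: sum(loss(mkt - bs_call(S, K, T, r, s))
                                       for K, mkt in zip(strikes, market)))

sig_mse = fit(lambda e: e * e)     # squared-dollar loss
sig_mae = fit(lambda e: abs(e))    # absolute-dollar loss
```

With noisy prices the two loss functions need not pick the same parameter, and the induced parameter uncertainty feeds directly into uncertainty about the out-of-sample pricing error — the paper's central point.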
By:  Marcellino, Massimiliano; Stock, James H; Watson, Mark W 
Abstract:  ‘Iterated’ multi-period-ahead time series forecasts are made using a one-period-ahead model, iterated forward for the desired number of periods, whereas ‘direct’ forecasts are made using a horizon-specific estimated model, where the dependent variable is the multi-period-ahead value being forecast. Which approach is better is an empirical matter: in theory, iterated forecasts are more efficient if correctly specified, but direct forecasts are more robust to model misspecification. This paper compares empirical iterated and direct forecasts from linear univariate and bivariate models by applying simulated out-of-sample methods to 171 US monthly macroeconomic time series spanning 1959–2002. The iterated forecasts typically outperform the direct forecasts, particularly if the models can select long lag specifications. The relative performance of the iterated forecasts improves with the forecast horizon. 
Keywords:  forecast comparisons; multistep forecasts; VAR forecasts 
JEL:  C32 E37 E47 
Date:  2005–03 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:4976&r=ets 
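The iterated-versus-direct distinction can be made concrete with a zero-mean AR(1): the iterated h-step forecast powers up the one-step coefficient, while the direct forecast regresses y_{t+h} on y_t. This simulated out-of-sample comparison (parameters assumed for illustration) mirrors the paper's design in miniature:

```python
import random

random.seed(5)
phi, n, h = 0.8, 600, 4
y = [0.0]
for _ in range(n - 1):
    y.append(phi * y[-1] + random.gauss(0, 1))

def slope(xs, ys):
    # OLS through the origin, appropriate for a zero-mean AR model
    return sum(a * b for a, b in zip(xs, ys)) / sum(a * a for a in xs)

split = 400
err_it, err_dir = [], []
for T in range(split, n - h):
    train = y[:T + 1]
    phi1 = slope(train[:-1], train[1:])   # one-step model, then iterate: phi1**h
    phih = slope(train[:-h], train[h:])   # direct horizon-h projection
    err_it.append((y[T + h] - phi1 ** h * y[T]) ** 2)
    err_dir.append((y[T + h] - phih * y[T]) ** 2)

mse_it = sum(err_it) / len(err_it)
mse_dir = sum(err_dir) / len(err_dir)
```

With a correctly specified AR(1) the two approaches deliver similar accuracy; the paper's interest is in what happens across 171 real series, where misspecification and lag selection tip the balance.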
By:  Amstad, Marlene; Fischer, Andreas M 
Abstract:  A new procedure for shock identification of macroeconomic forecasts based on factor analysis is proposed. The identification scheme relies on daily panels and on the recognition that macroeconomic releases exhibit a high level of clustering. A large number of data releases on a single day is of considerable practical interest not only for the estimation but also for the identification of the factor model. The clustering of cross-sectional information facilitates the interpretation of the forecast innovations as real or as nominal shocks. An empirical application is provided for Swiss inflation. We show that the monetary policy shocks generate an asymmetric response to inflation, that the pass-through for CPI inflation is weak, and that the information shocks to inflation are not synchronized. 
Keywords:  common factors; daily panels; inflation forecasting 
JEL:  E52 E58 
Date:  2005–04 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:5008&r=ets 
By:  Reis, Ricardo A.M.R. 
Abstract:  While this is typically ignored, the properties of the stochastic process followed by aggregate consumption affect the estimates of the costs of fluctuations. This paper pursues two approaches to modelling aggregate consumption dynamics and to measuring how much society dislikes fluctuations, one statistical and one economic. The statistical approach estimates the properties of consumption and calculates the cost of having consumption fluctuating around its mean growth. The paper finds that the persistence of consumption is a crucial determinant of these costs and that the high persistence in the data severely distorts conventional measures. It shows how to compute valid estimates and confidence intervals. The economic approach uses a calibrated model of optimal consumption and measures the costs of eliminating income shocks. This uncovers a further cost of uncertainty, through its impact on precautionary savings and investment. The two approaches lead to costs of fluctuations that are higher than the common wisdom, between 0.5% and 5% of per capita consumption. 
Keywords:  consumption persistence; costs of fluctuations; models of aggregate consumption 
JEL:  E21 E32 E60 
Date:  2005–05 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:5054&r=ets 
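The role persistence plays in the statistical approach can be seen in a back-of-the-envelope Lucas-style calculation: under CRRA utility the compensation for consumption risk is roughly half of risk aversion times the variance of log-consumption deviations, and persistence inflates that variance. The values of gamma, sigma and rho below are assumed for the sketch, not the paper's estimates:

```python
# Lucas-style compensation: with CRRA utility, the welfare cost of consumption
# risk is approximately 0.5 * gamma * var(log consumption deviations).
gamma = 2.0       # coefficient of relative risk aversion (assumed)
sigma_e = 0.02    # s.d. of log-consumption innovations (assumed)

# i.i.d. deviations: the textbook "small" cost
cost_iid = 0.5 * gamma * sigma_e ** 2

# AR(1) deviations with persistence rho: the unconditional variance is
# inflated by 1/(1 - rho^2), so the same formula yields a much larger cost
rho = 0.9
cost_persistent = 0.5 * gamma * sigma_e ** 2 / (1.0 - rho ** 2)
```

With rho = 0.9 the cost is more than five times the i.i.d. figure, which is the mechanism behind the paper's finding that high persistence severely distorts conventional measures.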
By:  Peter C.B. Phillips (Cowles Foundation, Yale University); Yixiao Sun (Dept. of Economics, UC San Diego); Sainan Jin (Guanghua School of Management, Peking University) 
Abstract:  Employing power kernels suggested in earlier work by the authors (2003), this paper shows how to refine methods of robust inference on the mean in a time series that rely on families of untruncated kernel estimates of the long-run parameters. The new methods improve the size properties of heteroskedasticity and autocorrelation robust (HAR) tests in comparison with conventional methods that employ consistent HAC estimates, and they raise test power in comparison with other tests that are based on untruncated kernel estimates. Large power parameter (rho) asymptotic expansions of the nonstandard limit theory are developed in terms of the usual limiting chi-squared distribution, and corresponding large sample size and large rho asymptotic expansions of the finite sample distribution of Wald tests are developed to justify the new approach. Exact finite sample distributions are given using operational techniques. The paper further shows that the optimal rho that minimizes a weighted sum of type I and II errors has an expansion rate of at most O(T^{1/2}) and can even be O(1) for certain loss functions, and is therefore slower than the O(T^{2/3}) rate which minimizes the asymptotic mean squared error of the corresponding long-run variance estimator. A new plug-in procedure for implementing the optimal rho is suggested. Simulations show that the new plug-in procedure works well in finite samples. 
Keywords:  Asymptotic expansion, consistent HAC estimation, data-determined kernel estimation, exact distribution, HAR inference, large rho asymptotics, long-run variance, loss function, power parameter, sharp origin kernel 
JEL:  C13 C14 C22 C51 
Date:  2005–06 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1513&r=ets 
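An untruncated kernel estimate of the long-run variance, of the kind these HAR tests rely on, can be sketched with a power kernel k(x) = (1 - |x|)^rho applied to all sample autocovariances. The kernel form follows the "sharp" downweighting idea; the AR(1) data and the choice rho = 16 are assumptions for illustration:

```python
import random

def lrv_power_kernel(u, rho):
    # Untruncated long-run variance estimate: weight every sample autocovariance
    # with the power kernel k(j/n) = (1 - j/n)^rho. Large rho concentrates the
    # weight near the origin, so no truncation lag needs to be chosen.
    n = len(u)
    ub = sum(u) / n
    def gamma(j):
        return sum((u[t] - ub) * (u[t - j] - ub) for t in range(j, n)) / n
    return gamma(0) + 2.0 * sum((1.0 - j / n) ** rho * gamma(j) for j in range(1, n))

random.seed(7)
phi, n = 0.5, 400
u = [0.0]
for _ in range(n):
    u.append(phi * u[-1] + random.gauss(0, 1))
est = lrv_power_kernel(u[1:], rho=16)
# the true long-run variance of this AR(1) is 1/(1-phi)^2 = 4
```

The paper's contribution is the asymptotic theory for tests built on such estimates, including how large rho should be as a function of the sample size.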
By:  Peter C.B. Phillips (Cowles Foundation, Yale University); Donggyu Sul (Dept. of Economics, University of Auckland) 
Abstract:  Some extensions of neoclassical growth models are discussed that allow for cross section heterogeneity among economies and evolution in rates of technological progress over time. The models offer a spectrum of transitional behavior among economies that includes convergence to a common steady state path as well as various forms of transitional divergence and convergence. Mechanisms for modeling such transitions and measuring them econometrically are developed in the paper. A new regression test of convergence is proposed, its asymptotic properties are derived and some simulations of its finite sample properties are reported. Transition curves for individual economies and subgroups of economies are estimated in a series of empirical applications of the methods to regional US data, OECD data and Penn World Table data. 
Keywords:  Economic growth, Growth convergence, Heterogeneity, Neoclassical growth, Relative transition, Transition curve, Transitional divergence 
JEL:  O30 O40 C33 
Date:  2005–06 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1514&r=ets 
By:  Chirok Han (Victoria University of Wellington); Peter C.B. Phillips (Cowles Foundation, Yale University) 
Abstract:  This paper provides a first order asymptotic theory for generalized method of moments (GMM) estimators when the number of moment conditions is allowed to increase with the sample size and the moment conditions may be weak. Examples in which these asymptotics are relevant include instrumental variable (IV) estimation with many (possibly weak or uninformed) instruments and some panel data models covering moderate time spans and with correspondingly large numbers of instruments. Under certain regularity conditions, the GMM estimators are shown to converge in probability but not necessarily to the true parameter, and conditions for consistent GMM estimation are given. A general framework for the GMM limit distribution theory is developed based on epiconvergence methods. Some illustrations are provided, including consistent GMM estimation of a panel model with time varying individual effects, consistent LIML estimation as a continuously updated GMM estimator, and consistent IV structural estimation using large numbers of weak or irrelevant instruments. Some simulations are reported. 
Keywords:  Epiconvergence, GMM, Irrelevant instruments, IV, Large numbers of instruments, LIML estimation, Panel models, Pseudo true value, Signal, Signal Variability, Weak instrumentation 
JEL:  C22 C23 
Date:  2005–06 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1515&r=ets 
By:  Peter C.B. Phillips (Cowles Foundation, Yale University); Sainan Jin (Guanghua School of Management, Peking University); Ling Hu (Dept. of Economics, Ohio State University) 
Abstract:  We correct the limit theory presented in an earlier paper by Hu and Phillips (Journal of Econometrics, 2004) for nonstationary time series discrete choice models with multiple choices and thresholds. The new limit theory shows that, in contrast to the binary choice model with nonstationary regressors and a zero threshold where there are dual rates of convergence (n^{1/4} and n^{3/4}), all parameters including the thresholds converge at the rate n^{3/4}. The presence of nonzero thresholds therefore materially affects rates of convergence. Dual rates of convergence reappear when stationary variables are present in the system. Some simulation evidence is provided, showing how the magnitude of the thresholds affects finite sample performance. A new finding is that predicted probabilities and marginal effect estimates have finite sample distributions that manifest a pileup, or increasing density, towards the limits of the domain of definition. 
Keywords:  Brownian motion, Brownian local time, Discrete choices, Integrated processes, Pileup problem, Threshold parameters 
JEL:  C23 C25 
Date:  2005–06 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1516&r=ets 
By:  Peter C.B. Phillips (Cowles Foundation, Yale University); Tassos Magdalinos (Dept. of Mathematics, University of York) 
Abstract:  An asymptotic theory is given for autoregressive time series with weakly dependent innovations and a root of the form rho_{n} = 1 + c/n^{alpha}, involving moderate deviations from unity when alpha in (0,1) and c in R are constant parameters. The limit theory combines a functional law to a diffusion on D[0,infinity) and a central limit theorem. For c > 0, the limit theory of the first order serial correlation coefficient is Cauchy and is invariant to both the distribution and the dependence structure of the innovations. To our knowledge, this is the first invariance principle of its kind for explosive processes. The rate of convergence is found to be n^{alpha}rho_{n}^{n}, which bridges asymptotic rate results for conventional local to unity cases (n) and explosive autoregressions ((1 + c)^{n}). For c < 0, we provide results for alpha in (0,1) that give an n^{(1+alpha)/2} rate of convergence and lead to asymptotic normality for the first order serial correlation, bridging the n^{1/2} and n convergence rates for the stationary and conventional local to unity cases. Weakly dependent errors are shown to induce a bias in the limit distribution, analogous to that of the local to unity case. Linkages to the limit theory in the stationary and explosive cases are established. 
Keywords:  Central limit theory, Diffusion, Explosive autoregression, Local to unity, Moderate deviations, Unit root distribution, Weak dependence 
JEL:  C22 
Date:  2005–06 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1517&r=ets 
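A moderate-deviations root is easy to simulate. The sketch below generates an autoregression with rho_n = 1 + c/n^alpha for c > 0 and checks that the serial-correlation estimator concentrates very quickly around the true root, consistent with the fast n^{alpha} rho_n^{n} rate; the constants are chosen arbitrarily for illustration:

```python
import random

random.seed(8)
n, alpha, c = 2000, 0.7, 1.0
rho_n = 1.0 + c / n ** alpha   # mildly explosive: between local-to-unity and fixed rho > 1
y = [0.0]
for _ in range(n):
    y.append(rho_n * y[-1] + random.gauss(0, 1))

# first-order serial correlation estimator of the root
num = sum(y[t - 1] * y[t] for t in range(1, n + 1))
den = sum(y[t - 1] ** 2 for t in range(1, n + 1))
rho_hat = num / den

rate = n ** alpha * rho_n ** n           # the paper's convergence rate for c > 0
scaled_error = rate * (rho_hat - rho_n)  # Cauchy-like limit, heavy tailed
```

Because the rate here is enormous relative to n, the estimation error is orders of magnitude smaller than in the stationary case — the bridging behaviour the paper formalises.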
By:  Josu Arteche (Dpto. Economía Aplicada III (UPV/EHU)) 
Abstract:  The estimation of the memory parameter in perturbed long memory series has recently attracted attention, motivated especially by the strong persistence of the volatility in many financial and economic time series and the use of Long Memory in Stochastic Volatility (LMSV) processes to model such behaviour. This paper discusses frequency domain semiparametric estimation of the memory parameter and proposes an extension of the log periodogram regression which explicitly accounts for the added noise, comparing it, asymptotically and in finite samples, with similar extant techniques. Contrary to the non-linear log periodogram regression of Sun and Phillips (2003), we do not use a linear approximation of the logarithmic term which accounts for the added noise. A reduction of the asymptotic bias is achieved in this way, and this makes a faster convergence possible in long memory signal plus noise series by permitting a larger bandwidth. Monte Carlo results confirm the bias reduction, but at the cost of a higher variability. An application to a series of returns of the Spanish Ibex 35 stock index is finally included. 
Keywords:  long memory, stochastic volatility, semiparametric estimation 
JEL:  C22 
Date:  2005–06–09 
URL:  http://d.repec.org/n?u=RePEc:ehu:biltok:200502&r=ets 
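For background, the baseline (noise-free) log-periodogram regression that the paper extends can be sketched in pure Python: simulate a long-memory series via the MA(infinity) expansion of (1-L)^(-d), compute the periodogram at the first m Fourier frequencies by direct DFT, and regress log I(lambda_j) on -2 log lambda_j. All tuning values are assumptions; when an additive noise term is present (the LMSV case) this estimator is biased downward, which is the problem the paper's extension addresses:

```python
import math, random

random.seed(9)
n, d, m = 512, 0.4, 32
# ARFIMA(0,d,0) via the truncated MA(infinity) expansion of (1-L)^(-d):
# psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k
psi = [1.0]
for k in range(1, n):
    psi.append(psi[-1] * (k - 1 + d) / k)
e = [random.gauss(0, 1) for _ in range(n)]
y = [sum(psi[k] * e[t - k] for k in range(t + 1)) for t in range(n)]

# log-periodogram (GPH-type) regression over the first m Fourier frequencies
lx, ly = [], []
for j in range(1, m + 1):
    lam = 2.0 * math.pi * j / n
    re = sum(y[t] * math.cos(lam * t) for t in range(n))
    im = sum(y[t] * math.sin(lam * t) for t in range(n))
    lx.append(-2.0 * math.log(lam))
    ly.append(math.log((re * re + im * im) / (2.0 * math.pi * n)))

mx, my = sum(lx) / m, sum(ly) / m
d_hat = sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / sum((a - mx) ** 2 for a in lx)
```

The slope d_hat estimates the memory parameter; the paper's estimator adds a term to the regression that models the noise contribution to the low-frequency periodogram instead of linearising it away.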
By:  Marcelle Chauvet; Jeremy M. Piger 
Abstract:  This paper evaluates the ability of formal rules to establish U.S. business cycle turning point dates in real time. We consider two approaches, a nonparametric algorithm and a parametric Markov-switching dynamic-factor model. In order to accurately assess the real-time performance of these rules, we construct a new unrevised "real-time" data set of employment, industrial production, manufacturing and trade sales, and personal income. We then apply the rules to this data set to simulate the accuracy and timeliness with which they would have identified the NBER business cycle chronology had they been used in real time for the past 30 years. Both approaches accurately identified the NBER dated turning points in the sample in real time, with no instances of false positives. Further, both approaches, and especially the Markov-switching model, yielded significant improvement over the NBER in the speed with which business cycle troughs were identified. In addition to suggesting that business cycle dating rules are an informative tool to use alongside the traditional NBER analysis, these results provide formal evidence regarding the speed with which macroeconomic data reveals information about new business cycle phases. 
Keywords:  Business cycles 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:fip:fedlwp:2005021&r=ets 
By:  Rothe, Christoph; Sibbertsen, Philipp 
Abstract:  In this paper, we propose Phillips-Perron type, semiparametric testing procedures to distinguish a unit root process from a mean-reverting exponential smooth transition autoregressive one. The limiting nonstandard distributions are derived under very general conditions, and simulation evidence shows that the tests perform better than the standard Phillips-Perron or Dickey-Fuller tests in the region of the null. 
Keywords:  Exponential smooth transition autoregressive model, Unit roots, Monte Carlo simulations, Purchasing Power Parity 
JEL:  C12 C32 
Date:  2005–06 
URL:  http://d.repec.org/n?u=RePEc:han:dpaper:dp315&r=ets 
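The alternative hypothesis here is a mean-reverting ESTAR process. A common auxiliary-regression device (in the spirit of the Kapetanios-Shin-Snell cubic approximation, not the paper's Phillips-Perron-type statistic) captures the ESTAR nonlinearity with a cubic term and checks its sign, as this simulation sketch with invented parameters shows:

```python
import math, random

random.seed(10)
n, theta = 1500, 0.5
y = [0.0]
for _ in range(n):
    x = y[-1]
    # ESTAR adjustment: essentially a unit root near zero, mean reversion far away
    y.append(x - x * (1.0 - math.exp(-theta * x * x)) + random.gauss(0, 0.5))

# Auxiliary regression: Delta y_t on y_{t-1}^3, a cubic approximation to the
# ESTAR drift; a significantly negative slope signals nonlinear mean reversion.
num = sum(y[t - 1] ** 3 * (y[t] - y[t - 1]) for t in range(1, n + 1))
den = sum(y[t - 1] ** 6 for t in range(1, n + 1))
delta_hat = num / den
resid2 = sum((y[t] - y[t - 1] - delta_hat * y[t - 1] ** 3) ** 2 for t in range(1, n + 1))
t_stat = delta_hat / math.sqrt((resid2 / (n - 1)) / den)
```

Under the unit-root null the t-statistic has a nonstandard distribution, which is why such tests need the tabulated critical values the paper derives for its own statistics.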
By:  Stephen Pudney (Institute for Fiscal Studies and Institute for Social and Economic Research) 
Abstract:  We develop a simulated ML method for short-panel estimation of one or more dynamic linear equations, where the dependent variables are only partially observed through ordinal scales. We argue that this latent autoregression (LAR) model is often more appropriate than the usual state-dependence (SD) probit model for attitudinal and interval variables. We propose a score test for assisting in the treatment of initial conditions and a new simulation approach to calculate the required partial derivative matrices. An illustrative application to a model of households’ perceptions of their financial well-being demonstrates the superior fit of the LAR model. 
Keywords:  Dynamic panel data models, ordinal variables, simulated maximum likelihood, GHK simulator, BHPS 
JEL:  C23 C25 C33 C35 D84 
Date:  2005–06 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:05/05&r=ets 
By:  Oliver Linton (Institute for Fiscal Studies and London School of Economics) 
Abstract:  Estimation of heteroskedasticity and autocorrelation consistent covariance matrices (HACs) is a well established problem in time series. Results have been established under a variety of weak conditions on temporal dependence and heterogeneity that allow one to conduct inference on a variety of statistics, see Newey and West (1987), Hansen (1992), de Jong and Davidson (2000), and Robinson (2004). Indeed there is an extensive literature on automating these procedures starting with Andrews (1991). Alternative methods for conducting inference include the bootstrap, for which there is also now a very active research program in time series especially; see Lahiri (2003) for an overview. One convenient method for time series is the subsampling approach of Politis, Romano, and Wolf (1999). This method was used by Linton, Maasoumi, and Whang (2003) (henceforth LMW) in the context of testing for stochastic dominance. This paper is concerned with the practical problem of conducting inference in a vector time series setting when the data is unbalanced or incomplete. In this case, one can work only with the common sample, to which a standard HAC/bootstrap theory applies, but at the expense of throwing away data and perhaps losing efficiency. An alternative is to use some sort of imputation method, but this requires additional modelling assumptions, which we would rather avoid. We show how the sampling theory changes and how to modify the resampling algorithms to accommodate the problem of missing data. We also discuss efficiency and power. Unbalanced data of the type we consider are quite common in financial panel data, see for example Connor and Korajczyk (1993). These data also occur in crosscountry studies. 
Date:  2004–04 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:06/04&r=ets 
By:  Richard Smith (Institute for Fiscal Studies and University of Warwick) 
Abstract:  This paper proposes a new class of HAC covariance matrix estimators. The standard HAC estimation method reweights estimators of the autocovariances. Here we initially smooth the data observations themselves using kernel function based weights. The resultant HAC covariance matrix estimator is the normalised outer product of the smoothed random vectors and is therefore automatically positive semidefinite. A corresponding efficient GMM criterion may also be defined as a quadratic form in the smoothed moment indicators whose normalised minimand provides a test statistic for the overidentifying moment conditions. 
Keywords:  GMM, HAC Covariance Matrix Estimation, Overidentifying Moments 
JEL:  C13 C30 
Date:  2004–12 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:17/04&r=ets 
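The key construction — smooth the moment series first, then take an outer product — can be sketched in the scalar case, where the resulting long-run variance estimate is nonnegative by construction. The uniform kernel, bandwidth and AR(1) data below are assumptions for illustration:

```python
import random

def smoothed_moments_lrv(u, S):
    # Smooth u_t with kernel weights k((t-s)/S), then average the squared
    # smoothed series, normalised so the estimator is unbiased for white noise.
    # Being a scaled sum of squares, the estimate is automatically >= 0; in the
    # vector case the analogous outer product is positive semidefinite.
    n = len(u)
    k = lambda x: 1.0 if abs(x) <= 1.0 else 0.0   # truncated uniform kernel (assumed)
    usm = [sum(k((t - s) / S) * u[s] for s in range(n)) for t in range(n)]
    norm = sum(k(j / S) ** 2 for j in range(-n, n + 1))
    return sum(v * v for v in usm) / (n * norm)

random.seed(11)
phi, n = 0.5, 400
u = [0.0]
for _ in range(n):
    u.append(phi * u[-1] + random.gauss(0, 1))
lrv = smoothed_moments_lrv(u[1:], S=15)
# the true long-run variance of this AR(1) is 1/(1-phi)^2 = 4
```

Automatic positive semidefiniteness is the practical advantage over reweighting autocovariances directly, where some kernel choices can produce indefinite estimates.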
By:  Alessio Moneta; Peter Spirtes 
Abstract:  Vector Autoregressions (VARs) are a class of time series models commonly used in econometrics to study the dynamic effect of exogenous shocks to the economy. While the estimation of a VAR is straightforward, there is a problem of finding the transformation of the estimated model consistent with the causal relations among the contemporaneous variables. This problem, a version of what is called in econometrics “the problem of identification,” is faced in this paper using a semi-automated search procedure. The unobserved causal relations of the structural form, to be identified, are represented by a directed graph. Discovery algorithms are developed to infer features of the causal graph from tests on vanishing partial correlations among the VAR residuals. Such tests cannot be based on the usual tests of conditional independence, because of sampling problems due to the time series nature of the data. This paper proposes consistent tests on vanishing partial correlations based on the asymptotic distribution of the estimated VAR residuals. Two different types of search algorithm are considered: a first algorithm restricts the analysis to direct causation among the contemporaneous variables, while a second allows the possibility of cycles (feedback loops) and common shocks among contemporaneous variables. Recovering the causal structure allows a reliable transformation of the estimated vector autoregressive model, which is very useful for macroeconomic empirical investigations such as comparing the effects of different shocks (real vs. nominal) on the economy and finding a measure of the monetary policy shock. 
Keywords:  VARs, Problem of Identification, Causal Graphs, Structural Shocks 
URL:  http://d.repec.org/n?u=RePEc:ssa:lemwps:2005/14&r=ets 
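The vanishing-partial-correlation tests at the heart of the search can be illustrated directly: for a causal chain u1 -> u2 -> u3 among (white-noise stand-ins for) residuals, the marginal correlation of u1 and u3 is sizable while their partial correlation given u2 is near zero. The data and coefficients are invented, and the paper's tests additionally correct for the residuals being estimated quantities:

```python
import math, random

def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def pcorr(x, y, z):
    # Partial correlation of x and y given z: correlate the OLS residuals
    # from regressing each of x and y on z.
    n = len(z)
    mz = sum(z) / n
    zc = [c - mz for c in z]
    def resid(v):
        mv = sum(v) / n
        vc = [a - mv for a in v]
        b = sum(a * c for a, c in zip(vc, zc)) / sum(c * c for c in zc)
        return [a - b * c for a, c in zip(vc, zc)]
    return corr(resid(x), resid(y))

random.seed(12)
n = 4000
u1 = [random.gauss(0, 1) for _ in range(n)]
u2 = [0.8 * a + random.gauss(0, 1) for a in u1]    # u1 -> u2
u3 = [0.8 * b + random.gauss(0, 1) for b in u2]    # u2 -> u3: a causal chain
c13 = corr(u1, u3)        # sizable marginal correlation through the chain
r13 = pcorr(u1, u3, u2)   # near zero once u2 is conditioned on
```

A discovery algorithm uses exactly this pattern of zero and nonzero (partial) correlations to rule causal structures in or out.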
By:  Håvard Hungnes (Statistics Norway) 
Abstract:  The paper describes a procedure for decomposing the deterministic terms in cointegrated VAR models into growth rate parameters and cointegration mean parameters. These parameters express long-run properties of the model. For example, the growth rate parameters tell us how much to expect (unconditionally) the variables in the system to grow from one period to the next, representing the underlying (steady state) growth in the variables. The procedure can be used for analysing structural breaks when the deterministic terms include shift dummies and broken trends. By decomposing the coefficients into interpretable components, different types of structural breaks can be identified: both shifts in intercepts and shifts in growth rates, or combinations of these, can be tested for. The ability to distinguish between different types of structural breaks makes the procedure superior to alternative procedures, and it also utilizes the information more efficiently. Finally, interpretable coefficients for the different types of structural breaks can be estimated. 
Keywords:  Johansen procedure; cointegrated VAR; structural breaks; growth rates; cointegration mean levels. 
JEL:  C32 C51 C52 
Date:  2005–05 
URL:  http://d.repec.org/n?u=RePEc:ssb:dispap:422&r=ets 
By:  Luca Grilli; Angelo Sfrecola 
Abstract:  In this paper we consider financial time series from the U.S. Fixed Income Market, S&P500, Exchange Market and Oil Market. It is well known that financial time series reveal some anomalies with respect to the Efficient Market Hypothesis, and some scaling behaviour is evident, such as fat tails and clustered volatility. This suggests treating financial time series as "pseudo"-random time series. For this kind of time series the predictive power of neural networks has been shown to be appreciable. We first consider the financial time series from the Minority Game point of view and then we apply a neural network with a learning algorithm in order to analyze its predictive power. We show that the Fixed Income Market presents many differences from the other markets in terms of predictability, as a measure of market efficiency. 
Keywords:  Minority Game, Learning Algorithms, Neural Networks, Financial Time Series, Efficient Market Hypothesis 
JEL:  C45 C70 C22 G14 
Date:  2005–06 
URL:  http://d.repec.org/n?u=RePEc:ufg:qdsems:142005&r=ets 
By:  Mahesh Kumar Tambi (IIMT, Hyderabad, India) 
Abstract:  In this paper we build univariate models to forecast the exchange rate of the Indian Rupee in terms of different currencies such as the SDR, USD, GBP, Euro and JPY. The paper uses the Box-Jenkins methodology of building ARIMA models. Sample data for the paper were taken from March 1992 to June 2004; data through December 2002 were used to build the models, while the remaining data points were used for out-of-sample forecasting to check the forecasting ability of the models. All the data were collected from the Indiastat database. The results show that ARIMA models provide better forecasts of exchange rates than simple autoregressive models or moving average models. We were able to build models for all the currencies except the USD, which shows the relative efficiency of the USD currency market. 
Keywords:  Exchange rate forecasting, univariate analysis, ARIMA, Box-Jenkins methodology, out-of-sample approach 
JEL:  F3 F4 
Date:  2005–06–08 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpif:0506005&r=ets 
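A scaled-down version of the exercise — fit a simple autoregression on a training window, then compare one-step out-of-sample squared errors against a no-change (random walk) forecast — can be sketched as follows. The data and parameters are simulated and invented; for real exchange rates the random walk is often hard to beat, as other entries in this issue note:

```python
import random

random.seed(13)
n, split = 600, 300
phi, mu = 0.5, 0.5
y = [mu / (1 - phi)]   # start at the unconditional mean
for _ in range(n - 1):
    y.append(mu + phi * y[-1] + random.gauss(0, 1))

# Fit y_t = a + b*y_{t-1} by OLS on the training window — the "AR" step of a
# Box-Jenkins cycle; a full ARIMA would also consider differencing and MA terms.
xs, ys = y[:split - 1], y[1:split]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
b = sum((p - mx) * (q - my) for p, q in zip(xs, ys)) / sum((p - mx) ** 2 for p in xs)
a = my - b * mx

# one-step out-of-sample squared errors: fitted AR vs. no-change forecast
se_ar = sum((y[t] - (a + b * y[t - 1])) ** 2 for t in range(split, n))
se_rw = sum((y[t] - y[t - 1]) ** 2 for t in range(split, n))
```

On this mean-reverting simulated series the fitted model beats the no-change forecast out of sample, which is exactly the comparison the paper runs currency by currency.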
By:  Cheng Hsiao; Siyan Wang 
Abstract:  We consider the estimation of a structural vector autoregressive model of nonstationary and possibly cointegrated variables without prior knowledge of the unit roots or the rank of cointegration. We propose two modified two-stage least squares estimators that are consistent and have limiting distributions that are either normal or mixed normal. Limited Monte Carlo studies are also conducted to evaluate their finite sample properties. 
Keywords:  Structural vector autoregression; Unit root; Cointegration; Asymptotic properties; Hypothesis testing 
JEL:  C32 C12 C13 
Date:  2005–05 
URL:  http://d.repec.org/n?u=RePEc:scp:wpaper:0523&r=ets 
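The modified estimators build on two-stage least squares; as background, a minimal pure-Python 2SLS for one endogenous regressor and one instrument (all data simulated, coefficients invented) shows the endogeneity bias in OLS and its removal by instrumenting:

```python
import random

random.seed(14)
n, beta = 3000, 1.5
z = [random.gauss(0, 1) for _ in range(n)]    # instrument
v = [random.gauss(0, 1) for _ in range(n)]    # source of endogeneity
x = [0.8 * zi + vi for zi, vi in zip(z, v)]   # regressor, correlated with the error
y = [beta * xi + vi + random.gauss(0, 1) for xi, vi in zip(x, v)]

# OLS ignores corr(x, error) and is biased upward here
b_ols = sum(a * c for a, c in zip(x, y)) / sum(a * a for a in x)

# 2SLS: first stage projects x on z, second stage uses the fitted values
pz = sum(a * c for a, c in zip(z, x)) / sum(a * a for a in z)
xhat = [pz * zi for zi in z]
b_2sls = sum(a * c for a, c in zip(xhat, y)) / sum(a * c for a, c in zip(xhat, x))
```

In the paper's setting the regressors may also be nonstationary or cointegrated, which is what drives the normal versus mixed-normal limiting distributions of the modified estimators.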