
on Econometric Time Series 
By:  Gary Koop (University of Strathclyde, UK and The Rimini Centre for Economic Analysis, Italy); Roberto Leon-Gonzalez (National Graduate Institute for Policy Studies, Japan and The Rimini Centre for Economic Analysis, Italy); Rodney W. Strachan (University of Queensland, Australia and The Rimini Centre for Economic Analysis, Italy) 
Abstract:  Empirical macroeconomists are increasingly using models (e.g. regressions or Vector Autoregressions) where the parameters vary over time. State space methods are frequently used to specify the evolution of parameters in such models. In any application, there are typically restrictions on the parameters that a researcher might be interested in. This motivates the question of how to calculate the probability that a restriction holds at a point in time without assuming the restriction holds at all (or any other) points in time. This paper develops methods to answer this question. In particular, the principle of the Savage-Dickey density ratio is used to obtain the time-varying posterior probabilities of restrictions. We use our methods in a macroeconomic application involving the Phillips curve. Macroeconomists are interested in whether the long-run Phillips curve is vertical. This is a restriction for which we can calculate the posterior probability using our methods. Using U.S. data, the probability that this restriction holds tends to be fairly high, but decreases slightly over time (apart from a slight peak in the late 1970s). We also calculate the probability that another restriction, that the NAIRU is not identified, holds. The probability that it holds fluctuates over time, with most evidence in favor of the restriction occurring after 1990. 
Keywords:  Bayesian, state space model, Savage-Dickey density ratio, time-varying parameter model. 
JEL:  C11 C32 E52 
Date:  2008–01 
URL:  http://d.repec.org/n?u=RePEc:rim:rimwps:2608&r=ets 
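The Savage-Dickey principle the abstract refers to can be illustrated outside the state space setting. The sketch below computes the Savage-Dickey Bayes factor for a point restriction in a conjugate normal model, where both the prior and posterior densities at the restricted value are available in closed form; the data and hyperparameters are illustrative, not the authors'.

```python
import math

def normal_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Conjugate model: y_i ~ N(theta, sigma2) with prior theta ~ N(m0, v0).
# For the point restriction theta = theta0, the Savage-Dickey density ratio
# is the posterior density at theta0 divided by the prior density at theta0.
def savage_dickey_bf(data, sigma2, m0, v0, theta0=0.0):
    n = len(data)
    v_post = 1.0 / (1.0 / v0 + n / sigma2)
    m_post = v_post * (m0 / v0 + sum(data) / sigma2)
    # Bayes factor in favor of the restriction theta = theta0
    return normal_pdf(theta0, m_post, v_post) / normal_pdf(theta0, m0, v0)

# Data clustered near zero with a diffuse prior: the posterior piles mass
# on the restriction, so the Bayes factor favors it.
bf = savage_dickey_bf([0.1, -0.2, 0.05, 0.0], sigma2=1.0, m0=0.0, v0=10.0)
```

A Bayes factor above one favors the restriction; the paper's contribution is obtaining this ratio period by period in a state space model rather than once for a static parameter.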
By:  John Geweke (University of Iowa, USA); Gianni Amisano (University of Brescia, Italy, European Central Bank and The Rimini Centre for Economic Analysis, Italy) 
Abstract:  A prediction model is any statement of a probability distribution for an outcome not yet observed. This study considers the properties of weighted linear combinations of n prediction models, or linear pools, evaluated using the conventional log predictive scoring rule. The log score is a concave function of the weights and, in general, an optimal linear combination will include several models with positive weights despite the fact that exactly one model has limiting posterior probability one. The paper derives several interesting formal results: for example, a prediction model with positive weight in a pool may have zero weight if some other models are deleted from that pool. The results are illustrated using S&P 500 returns with prediction models from the ARCH, stochastic volatility and Markov mixture families. In this example models that are clearly inferior by the usual scoring criteria have positive weights in optimal linear pools, and these pools substantially outperform their best components. 
Keywords:  forecasting; GARCH; log scoring; Markov mixture; model combination; S&P 500 returns; stochastic volatility 
Date:  2008–01 
URL:  http://d.repec.org/n?u=RePEc:rim:rimwps:2208&r=ets 
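The optimal-pool idea is easy to reproduce in miniature. The sketch below maximizes the log predictive score of a two-model pool over the weight by grid search, using made-up predictive density values rather than the paper's S&P 500 models; concavity of the log score in the weights makes the grid search well behaved.

```python
import math

# Two hypothetical prediction models assign predictive densities p1[t], p2[t]
# to each realized outcome y_t.  The pool density is w*p1 + (1-w)*p2, and the
# log score sum_t log(w*p1[t] + (1-w)*p2[t]) is concave in w.
def log_score(w, p1, p2):
    return sum(math.log(w * a + (1 - w) * b) for a, b in zip(p1, p2))

def optimal_weight(p1, p2, grid=1001):
    ws = [i / (grid - 1) for i in range(grid)]
    return max(ws, key=lambda w: log_score(w, p1, p2))

# Neither model dominates in every period, so an interior weight beats
# putting all weight on either model alone -- the paper's central point.
p1 = [0.40, 0.05, 0.30, 0.10]
p2 = [0.05, 0.35, 0.10, 0.30]
w_star = optimal_weight(p1, p2)
```

Here the pool at `w_star` scores strictly better than either component, even though model selection would assign all posterior probability to one of them asymptotically.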
By:  Gary Koop (University of Strathclyde, UK and The Rimini Centre for Economic Analysis, Italy); Roberto Leon-Gonzalez (National Graduate Institute for Policy Studies, Japan and The Rimini Centre for Economic Analysis, Italy); Rodney W. Strachan (University of Queensland, Australia and The Rimini Centre for Economic Analysis, Italy) 
Abstract:  There are both theoretical and empirical reasons for believing that the parameters of macroeconomic models may vary over time. However, work with time-varying parameter models has largely involved Vector Autoregressions (VARs), ignoring cointegration. This is despite the fact that cointegration plays an important role in informing macroeconomists on a range of issues. In this paper we develop time-varying parameter models which permit cointegration. Time-varying parameter VARs (TVP-VARs) typically use state space representations to model the evolution of parameters. In this paper, we show that it is not sensible to use straightforward extensions of TVP-VARs when allowing for cointegration. Instead we develop a specification which allows the cointegrating space to evolve over time in a manner comparable to the random walk variation used with TVP-VARs. The properties of our approach are investigated before developing a method of posterior simulation. We use our methods in an empirical investigation involving a permanent/transitory variance decomposition for inflation. 
Keywords:  Bayesian, time-varying cointegration, error correction model, reduced rank regression, Markov Chain Monte Carlo. 
JEL:  C11 C32 C33 
Date:  2008–01 
URL:  http://d.repec.org/n?u=RePEc:rim:rimwps:2308&r=ets 
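For readers unfamiliar with the random walk variation the abstract mentions, the sketch below simulates the standard scalar version of the state equation behind TVP-VARs; the coefficient drifts as a random walk while the observation equation stays a regression. All numbers are illustrative.

```python
import random

random.seed(0)

# Scalar TVP regression: y_t = x_t * beta_t + e_t, where the coefficient
# follows the random walk state equation beta_t = beta_{t-1} + eta_t.
T = 200
beta = [0.5]
for _ in range(T - 1):
    beta.append(beta[-1] + random.gauss(0.0, 0.05))  # state innovation eta_t

x = [random.gauss(0.0, 1.0) for _ in range(T)]
y = [x[t] * beta[t] + random.gauss(0.0, 0.2) for t in range(T)]  # measurement
```

The paper's point is that this kind of unrestricted random walk cannot simply be applied to cointegrating vectors, which live on a restricted (Grassmann) space; the authors instead let the cointegrating space itself evolve.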
By:  Christophe Planas (Joint Research Centre of the European Commission); Alessandro Rossi (Joint Research Centre of the European Commission); Gabriele Fiorentini (University of Florence, Italy and The Rimini Centre for Economic Analysis, Italy) 
Abstract:  We propose a simple procedure for evaluating the marginal likelihood in univariate Structural Time Series (STS) models. For this we exploit the statistical properties of STS models and the results in Dickey (1968) to obtain the likelihood function marginalized with respect to the variance parameters. This strategy applies under normal-inverted gamma-2 prior distributions for the structural shocks and associated variances. For trend plus noise models such as the local level and the local linear trend, it yields the marginal likelihood by simple or double integration over the (0,1) support. For trend plus cycle models, we show that marginalizing out the variance parameters greatly improves the accuracy of the Laplace method. We apply this methodology to the analysis of the US and euro area NAIRU. 
Keywords:  Marginal likelihood, Markov Chain Monte Carlo, unobserved components, bridge sampling, Laplace method, NAIRU 
Date:  2008–01 
URL:  http://d.repec.org/n?u=RePEc:rim:rimwps:2108&r=ets 
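The Laplace method discussed in the abstract can be sketched for a one-parameter model: approximate the log marginal likelihood by the log joint at its mode plus a Gaussian curvature correction, log p(y|θ̂)p(θ̂) + ½ log(2π/h), where h is the negative second derivative of the log joint at the mode. The toy conjugate normal model below (where the approximation happens to be exact) is illustrative, not one of the paper's STS models.

```python
import math

def normal_logpdf(x, mean, var):
    return -0.5 * math.log(2 * math.pi * var) - (x - mean) ** 2 / (2 * var)

def log_joint(theta, data):
    # log p(y | theta) + log p(theta) for y_i ~ N(theta, 1), theta ~ N(0, 1)
    return (sum(normal_logpdf(y, theta, 1.0) for y in data)
            + normal_logpdf(theta, 0.0, 1.0))

def laplace_log_marginal(data, lo=-5.0, hi=5.0, steps=20001, h=1e-4):
    # Crude 1-D mode search by grid, then finite-difference curvature.
    grid = [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
    mode = max(grid, key=lambda t: log_joint(t, data))
    curv = -(log_joint(mode + h, data) - 2 * log_joint(mode, data)
             + log_joint(mode - h, data)) / h ** 2
    return log_joint(mode, data) + 0.5 * math.log(2 * math.pi / curv)

data = [0.3, -0.1, 0.5]
approx = laplace_log_marginal(data)
```

Because the log joint is exactly quadratic here, the Laplace value coincides with the exact marginal likelihood; the paper's contribution is making the log joint closer to quadratic in real STS models by integrating the variances out analytically first.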
By:  Claude Lopez; Christian J. Murray; David H. Papell 
Abstract:  Using median-unbiased estimation, recent research has questioned the validity of Rogoff’s “remarkable consensus” of 3 to 5 year half-lives of deviations from PPP. These half-life estimates, however, are based on estimates from regressions where the resulting unit root test has low power. We extend median-unbiased estimation to the DF-GLS regression of Elliott, Rothenberg, and Stock (1996). We find that median-unbiased estimation based on this regression has the potential to tighten confidence intervals for half-lives. Using long horizon real exchange rate data, we find that the typical lower bound of the confidence intervals for median-unbiased half-lives is just under 3 years. Thus, while previous confidence intervals for half-lives are consistent with virtually anything, our tighter confidence intervals now rule out economic models with nominal rigidities as candidates for explaining the observed behavior of real exchange rates. Therefore, while we obtain more information using efficient unit root tests on longer term data, this information moves us away from solving the PPP puzzle. 
Date:  2008 
URL:  http://d.repec.org/n?u=RePEc:cin:ucecwp:200805&r=ets 
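Half-lives in this literature are conventionally derived from an AR(1) persistence parameter α as ln(0.5)/ln(α): the number of periods until a deviation decays to half its size. A minimal sketch, with illustrative values of α:

```python
import math

def half_life(alpha):
    # Periods until a deviation halves under y_t = alpha * y_{t-1} + e_t,
    # valid for 0 < alpha < 1.
    return math.log(0.5) / math.log(alpha)

# With annual data, alpha roughly in [0.80, 0.87] maps into the
# 3-5 year consensus range for PPP deviations.
hl = half_life(0.84)
```

The debate the abstract engages is over the sampling distribution of the estimated α (hence of the half-life), not over this formula itself.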
By:  Marçal, Emerson F.; Valls Pereira, Pedro L. 
Abstract:  The aim of this paper is to test whether or not there was evidence of financial crisis ‘contagion’. Sovereign debt bond data for Brazil, Mexico, Russia and Argentina were used to implement the test. The ‘contagion’ hypothesis is tested using multivariate volatility models. Structural instability that can be linked in any sense to a financial crisis is taken as evidence in favor of the ‘contagion’ hypothesis. The results suggest that there is evidence in favor of the ‘contagion’ hypothesis. 
Keywords:  Contagion; Multivariate Volatility Models 
JEL:  C32 G15 
Date:  2008–09–08 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:10356&r=ets 
By:  Eo, Yunjong; Morley, James C. 
Abstract:  In this paper, we propose a new approach to constructing confidence sets for the timing of structural breaks. This approach involves using Markov-chain Monte Carlo methods to simulate marginal “fiducial” distributions of break dates from the likelihood function. We compare our proposed approach to asymptotic and bootstrap confidence sets and find that it performs best in terms of producing short confidence sets with accurate coverage rates. Our approach also has the advantages of i) being broadly applicable to different patterns of structural breaks, ii) being computationally efficient, and iii) requiring only the ability to evaluate the likelihood function over parameter values, thus allowing for many possible distributional assumptions for the data. In our application, we investigate the nature and timing of structural breaks in postwar U.S. real GDP. Based on marginal fiducial distributions, we find much tighter 95% confidence sets for the timing of the so-called “Great Moderation” than has been reported in previous studies. 
Keywords:  Fiducial Inference; Bootstrap Methods; Structural Breaks; Confidence Intervals and Sets; Coverage Accuracy and Expected Length; Markov-chain Monte Carlo 
JEL:  C15 C22 
Date:  2008–09–05 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:10372&r=ets 
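The idea of a marginal distribution over break dates can be sketched with a single mean break in a Gaussian series: normalize the profiled likelihood over candidate dates, then collect the most probable dates until 95% of the mass is reached. This grid version stands in for the authors' MCMC approach, and the data are made up.

```python
import math

def log_lik(data, k):
    # Gaussian log likelihood (unit variance) with a mean break after
    # observation k: mean mu1 for t <= k, mu2 for t > k, each profiled
    # out at its segment average.
    seg1, seg2 = data[:k], data[k:]
    mu1, mu2 = sum(seg1) / len(seg1), sum(seg2) / len(seg2)
    sse = (sum((y - mu1) ** 2 for y in seg1)
           + sum((y - mu2) ** 2 for y in seg2))
    return -0.5 * sse

def break_date_distribution(data):
    # Normalize the profiled likelihood over candidate break dates.
    ks = range(1, len(data))
    w = [math.exp(log_lik(data, k)) for k in ks]
    s = sum(w)
    return {k: wk / s for k, wk in zip(ks, w)}

def shortest_credible_set(dist, level=0.95):
    # Add the most probable dates until the target coverage is reached.
    ordered = sorted(dist, key=dist.get, reverse=True)
    chosen, mass = [], 0.0
    for k in ordered:
        chosen.append(k)
        mass += dist[k]
        if mass >= level:
            break
    return sorted(chosen)

data = [0.1, -0.2, 0.0, 0.1, 2.1, 1.9, 2.2, 2.0]  # mean shifts after t = 4
dist = break_date_distribution(data)
conf = shortest_credible_set(dist)
```

Collecting dates by descending probability is what makes the resulting set short for a given coverage level, which is the property the paper emphasizes relative to asymptotic and bootstrap intervals.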