nep-ets New Economics Papers
on Econometric Time Series
Issue of 2008‒09‒13
seven papers chosen by
Yong Yin
SUNY at Buffalo

  1. Dynamic probabilities of restrictions in state space models: An application to the Phillips curve By Gary Koop; Roberto Leon-Gonzalez; Rodney W. Strachan
  2. Optimal Prediction Pools By John Geweke; Gianni Amisano
  3. Bayesian Inference in the Time Varying Cointegration Model By Gary Koop; Roberto Leon-Gonzalez; Rodney W. Strachan
  4. The marginal likelihood of Structural Time Series Models, with application to the euro area and US NAIRU By Christophe Planas; Alessandro Rossi; Gabriele Fiorentini
  5. Median-Unbiased Estimation in DF-GLS Regressions and the PPP Puzzle By Claude Lopez; Christian J. Murray; David H. Papell
  6. Testing the contagion hypotheses using multivariate volatility models By Marçal, Emerson F.; Valls Pereira, Pedro L.
  7. Likelihood-Based Confidence Sets for the Timing of Structural Breaks By Eo, Yunjong; Morley, James C.

  1. By: Gary Koop (University of Strathclyde, UK and The Rimini Centre for Economic Analysis, Italy); Roberto Leon-Gonzalez (National Graduate Institute for Policy Studies, Japan and The Rimini Centre for Economic Analysis, Italy); Rodney W. Strachan (University of Queensland, Australia and The Rimini Centre for Economic Analysis, Italy)
    Abstract: Empirical macroeconomists are increasingly using models (e.g. regressions or Vector Autoregressions) where the parameters vary over time. State space methods are frequently used to specify the evolution of parameters in such models. In any application, there are typically restrictions on the parameters that a researcher might be interested in. This motivates the question of how to calculate the probability that a restriction holds at a point in time without assuming the restriction holds at all (or any other) points in time. This paper develops methods to answer this question. In particular, the principle of the Savage-Dickey density ratio is used to obtain the time-varying posterior probabilities of restrictions. We use our methods in a macroeconomic application involving the Phillips curve. Macroeconomists are interested in whether the long-run Phillips curve is vertical. This is a restriction for which we can calculate the posterior probability using our methods. Using U.S. data, the probability that this restriction holds tends to be fairly high, but decreases slightly over time (apart from a slight peak in the late 1970s). We also calculate the probability that another restriction, that the NAIRU is not identified, holds. The probability that it holds fluctuates over time with most evidence in favor of the restriction occurring after 1990.
    Keywords: Bayesian, state space model, Savage-Dickey density ratio, time varying parameter model.
    JEL: C11 C32 E52
    Date: 2008–01
    URL: http://d.repec.org/n?u=RePEc:rim:rimwps:26-08&r=ets
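    A minimal numerical sketch of the Savage-Dickey calculation described in entry 1, using simulated posterior draws and an illustrative Gaussian prior rather than the authors' state space setup; the time-varying element is suppressed and the restriction is a scalar point restriction theta = 0:

      import numpy as np
      from scipy.stats import norm, gaussian_kde

      rng = np.random.default_rng(0)

      # Illustrative prior and "posterior" draws for a scalar parameter theta.
      # In the application theta would be a date-t function of the state space
      # parameters; here it is simply simulated for demonstration.
      prior = norm(loc=0.0, scale=1.0)
      posterior_draws = rng.normal(loc=0.4, scale=0.25, size=20_000)

      # Savage-Dickey density ratio: the Bayes factor in favour of theta = 0 is
      # the posterior density at 0 divided by the prior density at 0.
      post_density_at_0 = gaussian_kde(posterior_draws)(0.0)[0]
      bf_restricted = post_density_at_0 / prior.pdf(0.0)

      # Under equal prior odds, convert the Bayes factor to a posterior probability.
      prob_restriction = bf_restricted / (1.0 + bf_restricted)
      print(f"BF(theta=0): {bf_restricted:.3f}  P(restriction|y): {prob_restriction:.3f}")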
  2. By: John Geweke (University of Iowa, USA); Gianni Amisano (University of Brescia, Italy, European Central Bank and The Rimini Centre for Economic Analysis, Italy)
    Abstract: A prediction model is any statement of a probability distribution for an outcome not yet observed. This study considers the properties of weighted linear combinations of n prediction models, or linear pools, evaluated using the conventional log predictive scoring rule. The log score is a concave function of the weights and, in general, an optimal linear combination will include several models with positive weights despite the fact that exactly one model has limiting posterior probability one. The paper derives several interesting formal results: for example, a prediction model with positive weight in a pool may have zero weight if some other models are deleted from that pool. The results are illustrated using S&P 500 returns with prediction models from the ARCH, stochastic volatility and Markov mixture families. In this example models that are clearly inferior by the usual scoring criteria have positive weights in optimal linear pools, and these pools substantially outperform their best components.
    Keywords: forecasting; GARCH; log scoring; Markov mixture; model combination; S&P 500 returns; stochastic volatility
    Date: 2008–01
    URL: http://d.repec.org/n?u=RePEc:rim:rimwps:22-08&r=ets
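    Entry 2's pool maximises the log predictive score sum_t log(sum_i w_i p_i(y_t)) over weights on the unit simplex, a concave problem. A minimal sketch of that optimisation, with the matrix of one-step predictive densities simulated as a stand-in for densities from ARCH, stochastic volatility and Markov mixture models fitted to S&P 500 returns:

      import numpy as np
      from scipy.optimize import minimize

      # dens[t, i]: predictive density that model i assigned to the realised y_t.
      rng = np.random.default_rng(1)
      dens = rng.uniform(0.05, 1.0, size=(500, 3))   # placeholder densities

      def neg_log_score(w, dens):
          """Negative log predictive score of the linear pool with weights w."""
          return -np.sum(np.log(dens @ w))

      n = dens.shape[1]
      res = minimize(
          neg_log_score,
          x0=np.full(n, 1.0 / n),
          args=(dens,),
          bounds=[(0.0, 1.0)] * n,
          constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
          method="SLSQP",
      )
      print("optimal pool weights:", np.round(res.x, 3))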
  3. By: Gary Koop (University of Strathclyde, UK and The Rimini Centre for Economic Analysis, Italy); Roberto Leon-Gonzalez (National Graduate Institute for Policy Studies, Japan and The Rimini Centre for Economic Analysis, Italy); Rodney W. Strachan (University of Queensland, Australia and The Rimini Centre for Economic Analysis, Italy)
    Abstract: There are both theoretical and empirical reasons for believing that the parameters of macroeconomic models may vary over time. However, work with time-varying parameter models has largely involved vector autoregressions (VARs), ignoring cointegration. This is despite the fact that cointegration plays an important role in informing macroeconomists on a range of issues. In this paper we develop time varying parameter models which permit cointegration. Time-varying parameter VARs (TVP-VARs) typically use state space representations to model the evolution of parameters. In this paper, we show that it is not sensible to use straightforward extensions of TVP-VARs when allowing for cointegration. Instead we develop a specification which allows for the cointegrating space to evolve over time in a manner comparable to the random walk variation used with TVP-VARs. The properties of our approach are investigated before developing a method of posterior simulation. We use our methods in an empirical investigation involving a permanent/transitory variance decomposition for inflation.
    Keywords: Bayesian, time varying cointegration, error correction model, reduced rank regression, Markov Chain Monte Carlo.
    JEL: C11 C32 C33
    Date: 2008–01
    URL: http://d.repec.org/n?u=RePEc:rim:rimwps:23-08&r=ets
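    A minimal data-generating sketch of the idea in entry 3: the cointegrating vector drifts as a renormalised random walk, so only its direction (the cointegrating space) evolves, while the error-correction adjustment stays fixed. This illustrates the modelling device only, not the authors' specification or posterior sampler:

      import numpy as np

      rng = np.random.default_rng(2)
      T, alpha, sd_beta = 300, np.array([-0.2, 0.1]), 0.02

      # beta_t spans the time-varying cointegrating space; keep it unit length
      # so that only its direction follows the random-walk variation.
      beta = np.array([1.0, -1.0]) / np.sqrt(2.0)
      y = np.zeros((T, 2))
      betas = np.zeros((T, 2))

      for t in range(1, T):
          beta = beta + rng.normal(scale=sd_beta, size=2)
          beta = beta / np.linalg.norm(beta)
          betas[t] = beta
          ect = y[t - 1] @ beta                     # error-correction term
          y[t] = y[t - 1] + alpha * ect + rng.normal(scale=0.5, size=2)

      print("final cointegrating vector (normalised):", np.round(betas[-1], 3))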
  4. By: Christophe Planas (Joint Research Centre of the European Commission); Alessandro Rossi (Joint Research Centre of the European Commission); Gabriele Fiorentini (University of Florence, Italy and The Rimini Centre for Economic Analysis, Italy)
    Abstract: We propose a simple procedure for evaluating the marginal likelihood in univariate Structural Time Series (STS) models. For this we exploit the statistical properties of STS models and the results in Dickey (1968) to obtain the likelihood function marginally to the variance parameters. This strategy applies under normal-inverted gamma-2 prior distributions for the structural shocks and associated variances. For trend plus noise models such as the local level and the local linear trend, it yields the marginal likelihood by simple or double integration over the (0,1)-support. For trend plus cycle models, we show that marginalizing out the variance parameters greatly improves the accuracy of the Laplace method. We apply this methodology to the analysis of the US and euro area NAIRU.
    Keywords: Marginal likelihood, Markov Chain Monte Carlo, unobserved components, bridge sampling, Laplace method, NAIRU
    Date: 2008–01
    URL: http://d.repec.org/n?u=RePEc:rim:rimwps:21-08&r=ets
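    A rough sketch of the integration step in entry 4 for the local level model: the Kalman-filter likelihood is evaluated on a grid of a (0,1)-valued signal-to-noise parameter and integrated numerically under a flat prior. The observation variance is fixed here rather than marginalised analytically as in the paper, so this only illustrates integration over the (0,1) support:

      import numpy as np

      def loglik_local_level(y, q, sigma_eps2=1.0):
          """Kalman-filter log-likelihood of y_t = mu_t + eps_t,
          mu_t = mu_{t-1} + eta_t, with var(eta) = q * var(eps)."""
          a, p = y[0], 1e4 * sigma_eps2          # crude diffuse-ish initialisation
          ll = 0.0
          for t in range(1, len(y)):
              p = p + q * sigma_eps2             # state prediction variance
              f = p + sigma_eps2                 # innovation variance
              v = y[t] - a                       # one-step prediction error
              ll += -0.5 * (np.log(2 * np.pi * f) + v * v / f)
              k = p / f
              a, p = a + k * v, p * (1 - k)      # filtering update
          return ll

      rng = np.random.default_rng(3)
      y = np.cumsum(rng.normal(scale=0.3, size=200)) + rng.normal(size=200)

      # Map q = r / (1 - r) so that r lives on (0,1); integrate the likelihood
      # over r under a flat prior with a simple Riemann sum on a uniform grid.
      grid = np.linspace(0.005, 0.995, 199)
      logliks = np.array([loglik_local_level(y, r / (1 - r)) for r in grid])
      m = logliks.max()
      log_marginal = m + np.log(np.mean(np.exp(logliks - m)) * (grid[-1] - grid[0]))
      print(f"approximate log marginal likelihood: {log_marginal:.2f}")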
  5. By: Claude Lopez; Christian J. Murray; David H. Papell
    Abstract: Using median-unbiased estimation, recent research has questioned the validity of Rogoff’s “remarkable consensus” of 3-5 year half-lives of deviations from PPP. These half-life estimates, however, are based on estimates from regressions where the resulting unit root test has low power. We extend median-unbiased estimation to the DF-GLS regression of Elliott, Rothenberg, and Stock (1996). We find that median-unbiased estimation based on this regression has the potential to tighten confidence intervals for half-lives. Using long horizon real exchange rate data, we find that the typical lower bound of the confidence intervals for median-unbiased half-lives is just under 3 years. Thus, while previous confidence intervals for half-lives are consistent with virtually anything, our tighter confidence intervals now rule out economic models with nominal rigidities as candidates for explaining the observed behavior of real exchange rates. Therefore, while we obtain more information using efficient unit root tests on longer term data, this information moves us away from solving the PPP puzzle.
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:cin:ucecwp:2008-05&r=ets
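    A minimal sketch of the DF-GLS ingredients behind entry 5: GLS-demean the series with the Elliott-Rothenberg-Stock quasi-difference (c-bar = -7 for the demeaned case), fit an AR(1) persistence parameter and convert it to a half-life as log(0.5)/log(rho). The paper's median-unbiased correction, which inverts simulated quantiles of this estimator over a grid of true persistence values, and the lag augmentation are omitted; the series is simulated rather than real exchange rate data:

      import numpy as np

      def dfgls_demean(y, c_bar=-7.0):
          """GLS-demean a series as in Elliott, Rothenberg and Stock (1996)."""
          T = len(y)
          a_bar = 1.0 + c_bar / T
          y_qd = np.r_[y[0], y[1:] - a_bar * y[:-1]]      # quasi-differenced series
          z_qd = np.r_[1.0, np.full(T - 1, 1.0 - a_bar)]  # quasi-differenced constant
          delta = (z_qd @ y_qd) / (z_qd @ z_qd)
          return y - delta

      def half_life(y):
          """Half-life implied by an AR(1) fit to the GLS-demeaned series."""
          x = dfgls_demean(np.asarray(y, dtype=float))
          rho = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
          return np.log(0.5) / np.log(rho)

      # Illustrative persistent series standing in for an annual real exchange rate.
      rng = np.random.default_rng(4)
      q = np.zeros(100)
      for t in range(1, 100):
          q[t] = 0.85 * q[t - 1] + rng.normal(scale=0.05)
      print(f"point-estimate half-life: {half_life(q):.2f} years")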
  6. By: Marçal, Emerson F.; Valls Pereira, Pedro L.
    Abstract: The aim of this paper is to test whether there is evidence of financial crisis ‘contagion’. Sovereign debt bond data for Brazil, Mexico, Russia and Argentina are used to implement the test. The ‘contagion’ hypothesis is tested using multivariate volatility models: evidence of structural instability that can be linked to a financial crisis is taken as evidence in favor of contagion. The results suggest that there is evidence in favor of the ‘contagion’ hypothesis.
    Keywords: Contagion; Multivariate Volatility Models
    JEL: C32 G15
    Date: 2008–09–08
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:10356&r=ets
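    A much-simplified sketch of the idea in entry 6: track a time-varying correlation between two return series with an EWMA covariance recursion (a RiskMetrics-style device, not the authors' multivariate volatility specification) and compare its level before and after an assumed crisis date. The data, crisis date and smoothing parameter are illustrative assumptions:

      import numpy as np

      def ewma_correlation(r, lam=0.94):
          """Time-varying correlation of two return series from an EWMA covariance."""
          cov = np.cov(r[:20].T)                  # initialise with the early sample
          corr = np.empty(len(r))
          for t in range(len(r)):
              x = r[t][:, None]
              cov = lam * cov + (1 - lam) * (x @ x.T)
              corr[t] = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
          return corr

      # Simulated stand-in for two bond return series whose correlation shifts
      # upward at an assumed crisis date t = 250.
      rng = np.random.default_rng(5)
      T, crisis = 500, 250
      z = rng.normal(size=(T, 2))
      rho = np.where(np.arange(T) < crisis, 0.2, 0.7)
      r = np.column_stack([z[:, 0], rho * z[:, 0] + np.sqrt(1 - rho**2) * z[:, 1]])

      corr = ewma_correlation(r)
      print(f"mean correlation pre-crisis : {corr[:crisis].mean():.2f}")
      print(f"mean correlation post-crisis: {corr[crisis:].mean():.2f}")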
  7. By: Eo, Yunjong; Morley, James C.
    Abstract: In this paper, we propose a new approach to constructing confidence sets for the timing of structural breaks. This approach involves using Markov-chain Monte Carlo methods to simulate marginal “fiducial” distributions of break dates from the likelihood function. We compare our proposed approach to asymptotic and bootstrap confidence sets and find that it performs best in terms of producing short confidence sets with accurate coverage rates. Our approach also has the advantages of i) being broadly applicable to different patterns of structural breaks, ii) being computationally efficient, and iii) requiring only the ability to evaluate the likelihood function over parameter values, thus allowing for many possible distributional assumptions for the data. In our application, we investigate the nature and timing of structural breaks in postwar U.S. Real GDP. Based on marginal fiducial distributions, we find much tighter 95% confidence sets for the timing of the so-called “Great Moderation” than has been reported in previous studies.
    Keywords: Fiducial Inference; Bootstrap Methods; Structural Breaks; Confidence Intervals and Sets; Coverage Accuracy and Expected Length; Markov-chain Monte Carlo;
    JEL: C15 C22
    Date: 2008–09–05
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:10372&r=ets
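    A minimal sketch of the construction in entry 7 for the special case of a single break in mean with Gaussian errors: the concentrated likelihood over candidate break dates is normalised and the most likely dates are accumulated until 95% of the mass is covered. The paper's MCMC simulation of marginal fiducial distributions, multiple breaks and alternative error distributions are not reproduced:

      import numpy as np

      def break_date_confidence_set(y, level=0.95, trim=5):
          """Grid version of a fiducial-style set for a single break in mean:
          normalise the likelihood over candidate break dates and keep the most
          likely dates until their combined mass reaches the target level."""
          T = len(y)
          dates = np.arange(trim, T - trim)
          loglik = np.empty(len(dates))
          for i, k in enumerate(dates):
              resid = np.r_[y[:k] - y[:k].mean(), y[k:] - y[k:].mean()]
              loglik[i] = -0.5 * T * np.log(resid @ resid / T)   # concentrated log-likelihood
          w = np.exp(loglik - loglik.max())
          w /= w.sum()                              # weights over break dates
          order = np.argsort(w)[::-1]
          keep = order[: np.searchsorted(np.cumsum(w[order]), level) + 1]
          return np.sort(dates[keep])

      # Simulated series with a break in mean at t = 120.
      rng = np.random.default_rng(6)
      y = np.r_[rng.normal(0.0, 1.0, 120), rng.normal(0.8, 1.0, 80)]
      cs = break_date_confidence_set(y)
      print(f"95% set for the break date: {cs.min()}..{cs.max()} ({len(cs)} dates)")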

This nep-ets issue is ©2008 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.