nep-ets New Economics Papers
on Econometric Time Series
Issue of 2005–10–15
nine papers chosen by
Yong Yin
SUNY at Buffalo

  1. The correlation integral and the independence of stochastic processes By Dechert,W.D.
  2. Forecasting with real-time macroeconomic data: the ragged-edge problem and revisions By Bouwman, Kees E.; Jacobs, Jan P.A.M.
  3. Evaluating a Central Bank’s Recent Forecast Failure By Nymoen, Ragnar
  4. Modern Forecasting Models in Action: Improving Macroeconomic Analyses at Central Banks By Adolfson, Malin; Andersson, Michael K.; Lindé, Jesper; Villani, Mattias; Vredin, Anders
  5. Bayesian Inference of General Linear Restrictions on the Cointegration Space By Villani, Mattias
  6. Pooling-based Data Interpolation and Backdating By Massimiliano Marcellino
  7. Markov Forecasting Methods for Welfare Caseloads By Jeffrey Grogger
  8. Compositional Time Series: Past and Present By Juan M.C. Larrosa
  9. News or Noise? Signal Extraction Can Generate Volatility Clusters From IID Shocks By Prasad Bidarkota; J. Huston McCulloch

  1. By: Dechert,W.D. (University of Wisconsin-Madison, Social Systems Research Institute)
    Date: 2005
  2. By: Bouwman, Kees E.; Jacobs, Jan P.A.M. (Groningen University)
    Abstract: Real-time macroeconomic data are typically incomplete for today and the immediate past (‘ragged edge’) and subject to revision. To enable more timely forecasts the recent missing data have to be dealt with. In the context of the U.S. leading index we assess four alternatives, paying explicit attention to publication lags and data revisions.
    Date: 2005
  3. By: Nymoen, Ragnar (Dept. of Economics, University of Oslo)
    Abstract: Failures are not rare in economic forecasting, probably due to the high incidence of shocks and regime shifts in the economy. Thus, there is a premium on adaptation in the forecast process, in order to avoid sequences of forecast failure. This paper evaluates a sequence of inflation forecasts in the Norges Bank Inflation Report, and we present automated forecasts which are unaffected by forecast failure. One conclusion is that the Norges Bank fan-charts are too narrow, giving an illusion of very precise forecasts. The automated forecasts show more adaptation once shocks have occurred than is the case for the official forecasts. On the basis of the evidence, the recent inflation forecast failure appears to have been largely avoidable. The central bank’s understanding of the nature of the transmission mechanism and of the strength and nature of the disinflationary shock that hit the economy appears to have played a major role in the recent forecast failure.
    Keywords: Inflation forecasts; Monetary policy; Forecast uncertainty; Fan-charts; Structural change; Econometric models.
    JEL: C32 C53 E37 E44 E47 E52 E58
    Date: 2005–08–10
  4. By: Adolfson, Malin (Research Department, Central Bank of Sweden); Andersson, Michael K. (Monetary Policy Department, Central Bank of Sweden); Lindé, Jesper (Research Department, Central Bank of Sweden); Villani, Mattias (Research Department, Central Bank of Sweden); Vredin, Anders (Monetary Policy Department, Central Bank of Sweden)
    Abstract: There are many indications that formal methods based on economic research are not used to their full potential by central banks today. For instance, Christopher Sims published a review in 2002 where he argued that central banks use models that “are now fit to data by ad hoc procedures that have no grounding in statistical theory”. There is no organized resistance against formal models at central banks, but the proponents of such models have not always been able to present convincing evidence of the models’ advantages. In this paper we demonstrate how BVAR and DSGE models can be used to shed light on questions that policy makers deal with in practice. We compare the forecast performance of BVAR and DSGE models with the Riksbank’s official, more subjective forecasts. We also use the models to interpret the low inflation rate in Sweden in 2003–2004.
    Keywords: Bayesian inference; DSGE models; Forecasting; Monetary policy; Subjective forecasting; Vector autoregressions
    JEL: E37 E47 E52
    Date: 2005–09–01
  5. By: Villani, Mattias (Research Department, Central Bank of Sweden)
    Abstract: The degree of empirical support of a priori plausible structures on the cointegration vectors has a central role in the analysis of cointegration. Villani (2000) and Strachan and van Dijk (2003) have recently proposed finite sample Bayesian procedures to calculate the posterior probability of restrictions on the cointegration space, using the existence of a uniform prior distribution on the cointegration space as the key ingredient. The current paper extends this approach to the empirically important case with different restrictions on the individual cointegration vectors. Prior distributions are proposed and posterior simulation algorithms are developed. Consumers' expenditure data for the US are used to illustrate the robustness of the results to variations in the prior. A simulation study shows that the Bayesian approach performs remarkably well in comparison to other more established methods for testing restrictions on the cointegration vectors.
    Keywords: Bayesian inference; Cointegration; Posterior probability; Restrictions.
    JEL: C11 C12
    Date: 2005–09–01
  6. By: Massimiliano Marcellino
    Abstract: Pooling forecasts obtained from different procedures typically reduces the mean square forecast error and more generally improves the quality of the forecast. In this paper we evaluate whether pooling interpolated or backdated time series obtained from different procedures can also improve the quality of the generated data. Both simulation results and empirical analyses with macroeconomic time series indicate that pooling also plays a positive and important role in this context.
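    The variance-reduction logic behind pooling can be sketched in a few lines of plain Python. This is an illustrative simulation, not the paper's procedure: the equal-weight pool and the independent Gaussian errors are assumptions made here for clarity.

```python
import random

random.seed(0)

# True series and two unbiased "procedures" with independent errors.
truth = [float(t) for t in range(100)]
f1 = [x + random.gauss(0, 1.0) for x in truth]   # procedure 1
f2 = [x + random.gauss(0, 1.0) for x in truth]   # procedure 2
pooled = [(a + b) / 2 for a, b in zip(f1, f2)]   # equal-weight pool

def mse(series):
    """Mean square error against the true series."""
    return sum((a - b) ** 2 for a, b in zip(series, truth)) / len(truth)

# Independent errors of variance 1 pool to variance 1/2,
# so the pooled series has the lowest MSE.
print(mse(f1), mse(f2), mse(pooled))
```

    The same averaging applies whether the inputs are forecasts or interpolated/backdated series: as long as the procedures' errors are not perfectly correlated, the pool's error variance is below the average of the individual variances.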
  7. By: Jeffrey Grogger
    Abstract: Forecasting welfare caseloads, particularly turning points, has become more important than ever. Since welfare reform, welfare has been funded via a block grant, which means that unforeseen changes in caseloads can have important fiscal implications for states. In this paper I develop forecasts based on the theory of Markov chains. Since today's caseload is a function of the past caseload, the caseload exhibits inertia. The method exploits that inertia, basing forecasts of the future caseload on functions of past entry and exit rates. In an application to California welfare data, the method accurately predicted the late-2003 turning point roughly one year in advance.
    JEL: I3
    Date: 2005–10
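    The entry/exit mechanics behind such a Markov forecast can be sketched as a two-state chain (on welfare / off welfare). The rates and population sizes below are invented for illustration and are not taken from Grogger's California application:

```python
# Hypothetical inputs (illustrative only).
population = 1_000_000   # at-risk population
caseload = 100_000       # current caseload N_t
entry_rate = 0.01        # P(enter | off welfare) per period
exit_rate = 0.08         # P(exit | on welfare) per period

# Iterate the chain: N_{t+1} = (1 - exit)*N_t + entry*(P - N_t).
forecasts = []
n = caseload
for _ in range(24):      # 24 periods ahead
    n = (1 - exit_rate) * n + entry_rate * (population - n)
    forecasts.append(n)

# Long-run caseload where entries balance exits.
steady = entry_rate * population / (entry_rate + exit_rate)
```

    With constant rates the forecast converges geometrically to the steady state; turning points arise in the method when the estimated entry and exit rates themselves shift, which is the inertia the paper exploits.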
  8. By: Juan M.C. Larrosa (CONICET-Universidad Nacional del Sur)
    Abstract: This survey reviews the diverse academic literature on the analysis of compositional time series. Although the time dimension of compositional data has been little investigated, this kind of data structure is widely available and utilized in social science research, so a review of the state of the art on this topic is needed for scientists to understand the available options. The review covers several techniques, such as autoregressive integrated moving average (ARIMA) analysis, compositional vector autoregression (CVAR) systems and state-space techniques, most of which are developed within Bayesian frameworks. In conclusion, this branch of compositional statistical analysis still requires many advances and updates and, for this same reason, is a fertile field for future research. Social scientists should pay attention to future developments given the extensive availability of this kind of data structure in socioeconomic databases.
    Keywords: compositional data analysis, time series
    JEL: C1 C2 C3 C4 C5 C8
    Date: 2005–10–13
  9. By: Prasad Bidarkota (Department of Economics, Florida International University); J. Huston McCulloch (Department of Economics, Ohio State University)
    Abstract: We develop a framework in which information about firm value is noisily observed. Investors are then faced with a signal extraction problem. Solving this would enable them to probabilistically infer the fundamental value of the firm and, hence, price its stocks. If the innovations driving the fundamental value of the firm and the noise that obscures this fundamental value in observed data come from non-Gaussian thick-tailed probability distributions, then the implied stock returns could exhibit volatility clustering. We demonstrate the validity of this effect with a simulation study.
    Keywords: stock returns, volatility clusters, GARCH processes, signal extraction, thick-tailed distributions, simulations
    JEL: C22 E31 C53
    Date: 2003–11
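    The clustering claim can be checked with a standard diagnostic: the first-order autocorrelation of squared returns, which is near zero for IID shocks and positive when volatility clusters. The sketch below contrasts IID Gaussian shocks with a GARCH(1,1) series; the GARCH parameters are illustrative choices, not values from the paper, and this is the diagnostic rather than the authors' signal-extraction model:

```python
import random

def acf1_squared(returns):
    """First-order autocorrelation of squared returns.
    Positive values indicate volatility clustering."""
    sq = [r * r for r in returns]
    m = sum(sq) / len(sq)
    num = sum((sq[t] - m) * (sq[t - 1] - m) for t in range(1, len(sq)))
    den = sum((s - m) ** 2 for s in sq)
    return num / den

random.seed(1)
iid = [random.gauss(0, 1) for _ in range(5000)]   # IID shocks: no clustering

# GARCH(1,1): today's variance feeds back on yesterday's shock and
# variance, so large shocks arrive in clusters.
sig2, garch = 1.0, []
for _ in range(5000):
    r = random.gauss(0, 1) * sig2 ** 0.5
    garch.append(r)
    sig2 = 0.05 + 0.15 * r * r + 0.80 * sig2

print(acf1_squared(iid))    # near zero
print(acf1_squared(garch))  # clearly positive
```

    The paper's point is that returns built from a signal-extraction problem with thick-tailed fundamentals and noise can score positive on this diagnostic even though the underlying shocks are IID.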

This nep-ets issue is ©2005 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.