nep-ets New Economics Papers
on Econometric Time Series
Issue of 2006‒09‒23
thirteen papers chosen by
Yong Yin
SUNY at Buffalo

  1. Intraday Seasonalities and Macroeconomic News Announcements By Harju, Kari; Hussain, Mujahid
  2. Bootstrap and Fast Double Bootstrap Tests of Cointegration Rank with Financial Time Series By Ahlgren, Niklas; Antell, Jan
  5. Forecasting using Bayesian and Information Theoretic Model Averaging: An Application to UK Inflation By George Kapetanios; Vincent Labhard; Simon Price
  6. Forecasting Using Predictive Likelihood Model Averaging By George Kapetanios; Vincent Labhard; Simon Price
  7. Stochastic Volatility Driven by Large Shocks By George Kapetanios; Elias Tzavalis
  8. Panels with Nonstationary Multifactor Error Structures By George Kapetanios; M. Hashem Pesaran; Takashi Yamagata
  9. Nonlinear Models with Strongly Dependent Processes and Applications to Forward Premia and Real Exchange Rates By Richard T. Baillie; George Kapetanios
  10. A quasi maximum likelihood approach for large approximate dynamic factor models By Catherine Doz; Domenico Giannone; Lucrezia Reichlin
  11. Testing for the Cointegrating Rank of a VAR Process with Level Shift and Trend Break By Carsten Trenkler; Pentti Saikkonen; Helmut Lütkepohl
  12. Forecasting with panel data By Baltagi, Badi H.
  13. Empirical Bayesian density forecasting in Iowa and shrinkage for the Monte Carlo era By Lewis, Kurt F.; Whiteman, Charles H.

  1. By: Harju, Kari (Swedish School of Economics and Business Administration); Hussain, Mujahid (Swedish School of Economics and Business Administration)
    Abstract: Using a data set consisting of three years of 5-minute intraday stock index returns for major European stock indices and U.S. macroeconomic surprises, the conditional mean and volatility behavior in European markets was investigated. The findings suggested that the opening of the U.S. market significantly raised the level of volatility in Europe, and that all markets responded in an identical fashion. Furthermore, the U.S. macroeconomic surprises exerted an immediate and major impact on both European stock markets’ returns and volatilities. Thus, high-frequency data appear to be critical for the identification of the news that impacted the markets.
    Keywords: Macroeconomic surprises; intraday seasonality; Flexible Fourier Form; conditional mean; conditional volatility; information spillover
    Date: 2006–09–13
  2. By: Ahlgren, Niklas (Swedish School of Economics and Business Administration); Antell, Jan (Swedish School of Economics and Business Administration)
    Abstract: The likelihood ratio test of cointegration rank is the most widely used test for cointegration. Many studies have shown that its finite sample distribution is not well approximated by the limiting distribution. The article introduces and evaluates by Monte Carlo simulation experiments bootstrap and fast double bootstrap (FDB) algorithms for the likelihood ratio test. It finds that the performance of the bootstrap test is very good. The more sophisticated FDB produces a further improvement in cases where the performance of the asymptotic test is very unsatisfactory and the ordinary bootstrap does not work as well as it might. Furthermore, the Monte Carlo simulations provide a number of guidelines on when the bootstrap and FDB tests can be expected to work well. Finally, the tests are applied to US interest rates and international stock price series. It is found that the asymptotic test tends to overestimate the cointegration rank, while the bootstrap and FDB tests choose the correct cointegration rank.
    Keywords: Bootstrap; Cointegration; Financial time series; Likelihood ratio test
    Date: 2006–09–14
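    The bootstrap idea described in this abstract can be sketched generically: simulate artificial samples under the null, recompute the test statistic on each, and compare with the observed value (the fast double bootstrap adds an inner resampling layer on top of this). Below is a minimal illustration of the generic scheme, not the authors' implementation; the white-noise null and AR(1)-coefficient statistic are placeholder assumptions standing in for the likelihood ratio test of cointegration rank.

    ```python
    import numpy as np

    def bootstrap_pvalue(stat_fn, simulate_null, observed_stat, n_boot=499, seed=0):
        """Generic bootstrap p-value: simulate data under the null hypothesis,
        recompute the statistic on each artificial sample, and report the
        fraction of bootstrap statistics at least as large as the observed one."""
        rng = np.random.default_rng(seed)
        boot = np.array([stat_fn(simulate_null(rng)) for _ in range(n_boot)])
        return (1 + np.sum(boot >= observed_stat)) / (n_boot + 1)

    # Toy placeholder statistic: absolute fitted AR(1) coefficient,
    # with a null of white noise.
    def ar1_coef(y):
        return abs(np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2))

    def simulate_white_noise(rng, n=200):
        return rng.standard_normal(n)

    y = simulate_white_noise(np.random.default_rng(1))
    p = bootstrap_pvalue(ar1_coef, simulate_white_noise, ar1_coef(y))
    ```

    The "+1" in numerator and denominator is the usual convention that keeps the bootstrap p-value strictly positive with a finite number of replications.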
  3. By: John Galbraith; Greg Tkacz
    Abstract: For stationary transformations of variables, there exists a maximum horizon beyond which forecasts can provide no more information about the variable than is present in the unconditional mean. Meteorological forecasts, typically excepting only experimental or exploratory situations, are not reported beyond this horizon; by contrast, little generally accepted information about such maximum horizons is available for economic variables. In this paper we estimate such content horizons for a variety of economic variables, and compare these with the maximum horizons which we observe reported in a large sample of empirical economic forecasting studies. We find that there are many instances of published studies which provide forecasts exceeding, often by substantial margins, our estimates of the content horizon for the particular variable and frequency. We suggest some simple reporting practices for forecasts that could potentially bring greater transparency to the process of making and interpreting economic forecasts.
    JEL: C53
    Date: 2006–09
  4. By: John Galbraith; Serguei Zernov
    Abstract: Dependence among large observations in equity markets is usually examined using second-moment models such as those from the GARCH or SV classes. Such models treat the entire set of returns, and tend to produce very similar estimates on the major equity markets, with a sum of estimated GARCH parameters, for example, slightly below one. Using dependence measures from extreme value theory, however, it is possible to characterize dependence among only the largest (or largest negative) financial returns; these alternative characterizations of clustering have important applications in risk management. In this paper we compare the NASDAQ returns with those of another major market in their degree of extreme dependence. Although GARCH-type characterizations of second-moment dependence in the two markets produce similar results, the same is not true in the extremes: we find significantly more extreme dependence in the NASDAQ returns. More generally, the study of extreme dependence may reveal contrasts which are obscured when examining the conditional second moment.
    JEL: G10 G18
    Date: 2006–09
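    The clustering of extremes this abstract refers to is often summarized by the extremal index, the reciprocal of the mean cluster size of threshold exceedances. Below is a minimal runs-estimator sketch on simulated data; it illustrates the general idea only, not the authors' estimator, and the quantile threshold and run length are arbitrary choices.

    ```python
    import numpy as np

    def runs_extremal_index(x, u=0.95, run_length=5):
        """Runs estimator of the extremal index: count clusters of exceedances
        of the u-quantile, where a gap longer than run_length starts a new
        cluster, and divide by the total number of exceedances."""
        q = np.quantile(x, u)
        exceed = np.flatnonzero(x > q)
        if exceed.size == 0:
            return np.nan
        n_clusters = 1 + int(np.sum(np.diff(exceed) > run_length))
        return n_clusters / exceed.size

    rng = np.random.default_rng(0)
    returns = rng.standard_normal(20_000)   # i.i.d. stand-in for a return series
    theta = runs_extremal_index(returns)    # smaller theta = stronger clustering
    ```

    On a series with volatility clustering (as GARCH-type returns exhibit), the estimate would come out lower than on comparable i.i.d. data, reflecting extremes arriving in bunches.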
  5. By: George Kapetanios (Queen Mary, University of London); Vincent Labhard (Bank of England); Simon Price (Bank of England)
    Abstract: In recent years there has been increasing interest in forecasting methods that utilise large datasets, driven partly by the recognition that policymaking institutions need to process large quantities of information. Factor analysis is one popular way of doing this. Forecast combination is another, and it is on this that we concentrate. Bayesian model averaging methods have been widely advocated in this area, but a neglected frequentist approach is to use information theoretic based weights. We consider the use of model averaging in forecasting UK inflation with a large dataset from this perspective. We find that an information theoretic model averaging scheme can be a powerful alternative both to the more widely used Bayesian model averaging scheme and to factor models.
    Keywords: Forecasting, Inflation, Bayesian model averaging, Akaike criteria, Forecast combining
    JEL: C11 C15 C53
    Date: 2006–09
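    The information-theoretic weights this abstract advocates are typically Akaike weights, built from AIC differences across candidate models. A minimal sketch follows; the AIC values and point forecasts are made up for illustration and are not from the paper.

    ```python
    import numpy as np

    def akaike_weights(aics):
        """Akaike weights: w_i proportional to exp(-0.5 * (AIC_i - min AIC)),
        normalized to sum to one."""
        delta = np.asarray(aics, dtype=float) - np.min(aics)
        w = np.exp(-0.5 * delta)
        return w / w.sum()

    # Hypothetical AICs and point forecasts for three inflation models.
    weights = akaike_weights([102.3, 100.0, 105.7])
    forecasts = np.array([2.1, 2.4, 1.9])
    combined = float(weights @ forecasts)   # model-averaged forecast
    ```

    The model with the smallest AIC receives the largest weight, but poorer models still contribute, which is what distinguishes averaging from selection.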
  6. By: George Kapetanios (Queen Mary, University of London); Vincent Labhard (Bank of England); Simon Price (Bank of England and City University)
    Abstract: Recently, there has been increasing interest in forecasting methods that utilise large datasets. We explore the possibility of forecasting with model averaging using the out-of-sample forecasting performance of various models in a frequentist setting, using the predictive likelihood. We apply our method to forecasting UK inflation and find that the new method performs well; in some respects it outperforms other averaging methods.
    Keywords: Forecasting, Inflation, Bayesian model averaging, Akaike criterion, Forecast combining
    JEL: C11 C15 C53
    Date: 2006–09
  7. By: George Kapetanios (Queen Mary, University of London); Elias Tzavalis (Queen Mary, University of London)
    Abstract: This paper presents a new model of stochastic volatility which allows for infrequent shifts in the mean of volatility, known as structural breaks. These are endogenously driven from large innovations in stock returns arriving in the market. The model has a number of interesting properties. Among them, it can allow for shifts in volatility which are of stochastic timing and magnitude. This model can be used to distinguish permanent shifts in volatility coming from large pieces of news arriving in the market, from ordinary volatility shocks.
    Keywords: Stochastic volatility, Structural breaks
    JEL: C22 C15
    Date: 2006–09
  8. By: George Kapetanios (Queen Mary, University of London); M. Hashem Pesaran (Cambridge University and Trinity College, Cambridge); Takashi Yamagata (Cambridge University)
    Abstract: The presence of cross-sectionally correlated error terms invalidates much inferential theory of panel data models. Recent work by Pesaran (2006) has suggested a method which makes use of cross-sectional averages to provide valid inference for stationary panel regressions with multifactor error structure. This paper extends this work and examines the important case where the unobserved common factors follow unit root processes and could be cointegrated. It is found that the presence of unit roots does not affect most theoretical results, which continue to hold irrespective of the integration and the cointegration properties of the unobserved factors. This finding is further supported for small samples via an extensive Monte Carlo study. In particular, the results of the Monte Carlo study suggest that the cross-sectional average based method is robust to a wide variety of data generation processes and has lower biases than all of the alternative estimation methods considered in the paper.
    Keywords: Cross section dependence, Large panels, Unit roots, Principal components, Common correlated effects
    JEL: C12 C13 C33
    Date: 2006–09
  9. By: Richard T. Baillie (Michigan State University and Queen Mary, University of London); George Kapetanios (Queen Mary, University of London)
    Abstract: This paper considers estimation and inference in some general nonlinear time series models which are embedded in a strongly dependent, long memory process. Some new results are provided on the properties of a time domain <i>MLE</i> for these models. The paper also includes a detailed simulation study which compares the time domain <i>MLE</i> with a two-step estimator, where the Local Whittle estimator has been initially employed to filter out the long memory component. The time domain <i>MLE</i> is found to be generally superior to two-step estimation. Further, the simulation study documents the difficulty of precisely estimating the parameter associated with the speed of transition. Finally, the fractionally integrated, nonlinear autoregressive-<i>ESTAR</i> model is found to be extremely useful in representing some financial time series such as the forward premium and real exchange rates.
    Keywords: Non-linearity, <i>ESTAR</i> models, Strong dependence, Forward premium, Real exchange rates
    JEL: C22 C12 F31
    Date: 2006–09
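    The ESTAR mechanism applied here to forward premia and real exchange rates uses an exponential transition function between a near-unit-root inner regime and a mean-reverting outer regime. A minimal sketch follows; the parameter values are illustrative, not from the paper.

    ```python
    import numpy as np

    def estar_transition(y, gamma, c=0.0):
        """Exponential STAR transition G = 1 - exp(-gamma * (y - c)^2):
        G is near 0 close to the equilibrium c and approaches 1 for
        large deviations."""
        return 1.0 - np.exp(-gamma * (y - c) ** 2)

    def estar_step(y_prev, rho_outer, gamma, sigma, rng):
        """One step of a simple ESTAR(1) with a unit root in the inner regime:
        mean reversion toward zero, at speed 1 - rho_outer, only operates
        when the deviation is large."""
        g = estar_transition(y_prev, gamma)
        return (1.0 - g * (1.0 - rho_outer)) * y_prev + sigma * rng.standard_normal()

    rng = np.random.default_rng(0)
    y, path = 0.0, []
    for _ in range(500):
        y = estar_step(y, rho_outer=0.8, gamma=2.0, sigma=0.1, rng=rng)
        path.append(y)
    ```

    Small gamma makes the transition gradual, which is one reason the speed-of-transition parameter is hard to estimate precisely, as the simulation study in the paper documents.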
  10. By: Catherine Doz (Directrice de l'UFR Economie Gestion, University of Cergy-Pontoise - Department of Economics, 33 Boulevard du port, F-95011 Cergy-Pontoise Cedex, France.); Domenico Giannone (Free University of Brussels (VUB/ULB) - European Center for Advanced Research in Economics and Statistics (ECARES), Ave. Franklin D Roosevelt, 50 - C.P. 114, B-1050 Brussels, Belgium.); Lucrezia Reichlin (European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.)
    Abstract: This paper considers quasi-maximum likelihood estimation of a dynamic approximate factor model when the panel of time series is large. Maximum likelihood is analyzed under different sources of misspecification: omitted serial correlation of the observations and cross-sectional correlation of the idiosyncratic components. It is shown that the effects of misspecification on the estimation of the common factors are negligible for large sample size (T) and cross-sectional dimension (n). The estimator is feasible when n is large and easily implementable using the Kalman smoother and the EM algorithm as in traditional factor analysis. Simulation results illustrate the empirical conditions under which we can expect an improvement with respect to simple principal components considered by Bai (2003), Bai and Ng (2002), Forni, Hallin, Lippi, and Reichlin (2000, 2005b), Stock and Watson (2002a,b).
    Keywords: Factor model, large cross-sections, quasi-maximum likelihood
    JEL: C51 C32 C33
    Date: 2006–09
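    The principal-components benchmark this abstract compares against can be sketched in a few lines. Below is a minimal illustration on simulated data; the dimensions and noise level are arbitrary assumptions.

    ```python
    import numpy as np

    def pc_factors(X, r):
        """Principal-components estimate of an r-factor model for a (T x n)
        panel X: factors are the r leading left singular vectors of the
        demeaned data, scaled by sqrt(T); loadings follow by regression."""
        T = X.shape[0]
        Xc = X - X.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        F = np.sqrt(T) * U[:, :r]        # estimated factors, T x r
        L = Xc.T @ F / T                 # estimated loadings, n x r
        return F, L

    rng = np.random.default_rng(0)
    T, n, r = 100, 30, 2
    F_true = rng.standard_normal((T, r))
    Lam = rng.standard_normal((n, r))
    X = F_true @ Lam.T + 0.5 * rng.standard_normal((T, n))
    F_hat, L_hat = pc_factors(X, r)
    fitted = F_hat @ L_hat.T             # best rank-r approximation of the panel
    ```

    The quasi-ML approach in the paper refines this baseline by iterating the Kalman smoother and EM steps, which can exploit factor dynamics that plain principal components ignore.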
  11. By: Carsten Trenkler; Pentti Saikkonen; Helmut Lütkepohl
    Abstract: A test for the cointegrating rank of a vector autoregressive (VAR) process with a possible level shift and broken linear trend is proposed. The break point is assumed to be known. The setup is a VAR process for cointegrated variables. The test is not a likelihood ratio test; instead, the deterministic terms, including the broken trend, are removed first by a GLS procedure, and a likelihood-ratio-type test is applied to the adjusted series. The asymptotic null distribution of the test is derived, and it is shown by a Monte Carlo experiment that the test has better small-sample properties in many cases than a corresponding Gaussian likelihood ratio test for the cointegrating rank.
    Keywords: Cointegration, structural break, vector autoregressive process, error correction model
    JEL: C32
    Date: 2006–09
  12. By: Baltagi, Badi H.
    Abstract: This paper gives a brief survey of forecasting with panel data. It starts with a simple error component regression and surveys best linear unbiased prediction under various assumptions on the disturbance term, including various ARMA models as well as spatial autoregressive models. The paper also surveys how these forecasts have been used in panel data applications, running horse races between heterogeneous and homogeneous panel data models using out-of-sample forecasts.
    Keywords: Forecasting, BLUP, Panel Data, Spatial Dependence, Serial Correlation
    JEL: C33
    Date: 2006
  13. By: Lewis, Kurt F.; Whiteman, Charles H.
    Abstract: The track record of a sixteen-year history of density forecasts of state tax revenue in Iowa is studied, and potential improvements sought through a search for better performing “priors” similar to that conducted two decades ago for point forecasts by Doan, Litterman, and Sims (Econometric Reviews, 1984). Comparisons of the point- and density-forecasts produced under the flat prior are made to those produced by the traditional (mixed estimation) “Bayesian VAR” methods of Doan, Litterman, and Sims, as well as to fully Bayesian, “Minnesota Prior” forecasts. The actual record, and to a somewhat lesser extent, the record of the alternative procedures studied in pseudo-real-time forecasting experiments, share a characteristic: subsequently realized revenues are in the lower tails of the predicted distributions “too often”. An alternative empirically based prior is found by working directly on the probability distribution for the VAR parameters, seeking a better-performing entropically tilted prior that minimizes in-sample mean-squared error subject to a Kullback-Leibler divergence constraint that the new prior not differ “too much” from the original. We also study the closely related topic of robust prediction appropriate for situations of ambiguity. Robust “priors” are competitive in out-of-sample forecasting; despite the freedom afforded the entropically tilted prior, it does not perform better than the simple alternatives.
    Date: 2006

This nep-ets issue is ©2006 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.