nep-ets New Economics Papers
on Econometric Time Series
Issue of 2007‒04‒09
eighteen papers chosen by
Yong Yin
SUNY at Buffalo

  1. The weak instrument problem of the system GMM estimator in dynamic panel data models By Maurice Bun; Frank Windmeijer
  2. Modelling and Testing for Structural Changes in Panel Cointegration Models with Common and Idiosyncratic Stochastic Trends By Chihwa Kao; Lorenzo Trapani; Giovanni Urga
  3. Using the Dynamic Bi-Factor Model with Markov Switching to Predict the Cyclical Turns in the Large European Economies By Konstantin A. Kholodilin
  4. Nowcasting GDP and Inflation: The Real-Time Informational Content of Macroeconomic Data Releases By Domenico Giannone; Lucrezia Reichlin; David H Small
  5. A real-time analysis of the Swiss trade account By Jan Jacobs; Jan-Egbert Sturm
  6. A State Space Approach To The Policymaker's Data Uncertainty Problem By Alastair Cunningham; Chris Jeffery; George Kapetanios; Vincent Labhard
  7. Forecasting Exchange Rate Volatility with High Frequency Data: Is the Euro Different? By Georgios Chortareas; John Nankervis; Ying Jiang
  8. The Law of One Price: Nonlinearities in Sectoral Real Exchange Rate Dynamics By Luciana Juvenal; Mark P. Taylor
  9. Long Memory and FIGARCH Models for Daily and High Frequency Commodity Prices By Richard T. Baillie; Young-Wook Han; Robert J. Myers; Jeongseok Song
  10. A (semi-)parametric functional coefficient autoregressive conditional duration model By Marcelo Fernandes; Marcelo Cunha Medeiros; Alvaro Veiga
  11. "Multivariate stochastic volatility" By Siddhartha Chib; Yasuhiro Omori; Manabu Asai
  12. Learning, Forecasting and Structural Breaks By John M Maheu; Stephen Gordon
  13. Sieve bootstrap unit root tests By Patrick Richard
  14. Modeling Long-Term Memory Effect in Stock Prices: A Comparative Analysis with GPH Test and Daubechies Wavelets By Ozun, Alper; Cifter, Atilla
  15. Nonlinear Combination of Financial Forecast with Genetic Algorithm By Ozun, Alper; Cifter, Atilla
  16. The Predictive Performance of Asymmetric Normal Mixture GARCH in Risk Management: Evidence from Turkey By Cifter, Atilla; Ozun, Alper
  17. Beveridge-Nelson Decomposition with Markov Switching By Chin Nam Low; Heather Anderson; Ralph Snyder
  18. Improving Business Cycle Forecasts’ Accuracy - What Can We Learn from Past Errors? By Roland Döhrn

  1. By: Maurice Bun; Frank Windmeijer (Institute for Fiscal Studies and University of Bristol)
    Abstract: The system GMM estimator for dynamic panel data models combines moment conditions for the model in first differences with moment conditions for the model in levels. It has been shown to improve on the GMM estimator in the first-differenced model in terms of bias and root mean squared error. However, we show in this paper that in the covariance-stationary panel data AR(1) model the expected values of the concentration parameters in the differenced and levels equations for the cross-section at time t are the same when the variances of the individual heterogeneity and idiosyncratic errors are the same. This indicates a weak-instrument problem for the equation in levels as well. We show that the biases of 2SLS relative to those of OLS are then similar for the equations in differences and levels, as are the size distortions of the Wald tests. A Monte Carlo study shows that these results extend to the panel data system GMM estimator.
    Keywords: Dynamic panel data, system GMM, weak instruments
    JEL: C12 C13 C23
    Date: 2007–03
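The weak-instrument problem described in this abstract can be illustrated with a small simulation. The sketch below is not from the paper; the function name and all parameter values are invented. It generates a covariance-stationary panel AR(1) with individual effects and computes the first-stage correlation between the level instrument y_{t-2} and the differenced regressor Δy_{t-1}; the correlation collapses toward zero as the autoregressive parameter approaches one.

```python
import random

def instrument_strength(rho, n=5000, t=4, seed=0):
    """First-stage correlation between the level instrument y_{t-2} and
    the differenced regressor dy_{t-1} in a panel AR(1) with individual
    effects (illustrative sketch, not the paper's estimator)."""
    rng = random.Random(seed)
    z, d = [], []
    for _ in range(n):
        eta = rng.gauss(0, 1)                     # individual effect
        # covariance-stationary initial condition
        y = eta / (1 - rho) + rng.gauss(0, 1 / (1 - rho ** 2) ** 0.5)
        ys = [y]
        for _ in range(t):
            y = eta + rho * y + rng.gauss(0, 1)   # idiosyncratic error
            ys.append(y)
        z.append(ys[-3])                          # instrument: y_{t-2}
        d.append(ys[-2] - ys[-3])                 # regressor: dy_{t-1}
    mz, md = sum(z) / n, sum(d) / n
    cov = sum((a - mz) * (b - md) for a, b in zip(z, d)) / n
    vz = sum((a - mz) ** 2 for a in z) / n
    vd = sum((b - md) ** 2 for b in d) / n
    return cov / (vz * vd) ** 0.5

weak = instrument_strength(0.95)   # close to zero: a weak instrument
strong = instrument_strength(0.2)  # clearly negative: a usable instrument
```

Analytically the covariance is -1/(1+rho) while the variance of the level explodes as rho approaches one, which is why the correlation, and hence the concentration parameter, vanishes near the unit root.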
  2. By: Chihwa Kao (Center for Policy Research, Maxwell School, Syracuse University, Syracuse, NY 13244-1020); Lorenzo Trapani; Giovanni Urga
    Abstract: In this paper, we propose an estimation and testing framework for parameter instability in cointegrated panel regressions with common and idiosyncratic trends. We develop tests for structural change for the slope parameters under the null hypothesis of no structural break against the alternative hypothesis of (at least) one common change point, which is possibly unknown. The limiting distributions of the proposed test statistics are derived. Monte Carlo simulations examine size and power of the proposed tests. We are grateful for discussions with Robert De Jong, Long-Fei Lee, Zongwu Cai, and Yupin Hu. We would also like to thank participants in the International Conferences on "Common Features in London" (Cass, 16-17 December 2004), 2006 New York Econometrics Camp and Breaks and Persistence in Econometrics (Cass, 11-12 December 2006), and econometrics seminars at Ohio State University and Academia Sinica for helpful comments. Part of this work was done while Chihwa Kao was visiting the Centre for Econometric Analysis at Cass (CEA@Cass). Financial support from City University 2005 Pump Priming Fund and CEA@Cass is gratefully acknowledged. Lorenzo Trapani acknowledges financial support from Cass Business School under the RAE Development Fund Scheme.
    Keywords: Panel cointegration, common and idiosyncratic stochastic trends, testing for structural changes.
    JEL: C32 C33 C13
    Date: 2007–03
  3. By: Konstantin A. Kholodilin (DIW Berlin)
    Abstract: Appropriately selected leading indicators can substantially improve forecasts of the peaks and troughs of the business cycle. Using the novel methodology of the dynamic bi-factor model with Markov switching and data for the three largest European economies (France, Germany, and the UK), we construct a composite leading indicator (CLI) and a composite coincident indicator (CCI) as well as the corresponding recession probabilities. We also estimate a rival Markov-switching VAR model in order to see which of the two models produces better outcomes. The recession dates derived from these models are compared to three reference chronologies: those of the OECD and the ECRI (growth cycles) and those obtained with the quarterly Bry-Boschan procedure (classical cycles). The dynamic bi-factor model and the MS-VAR appear to predict the cyclical turning points equally well, with no systematic superiority of one model over the other.
    Keywords: Forecasting turning points, composite
    JEL: E32 C10
    Date: 2007–02–02
  4. By: Domenico Giannone (ECARES Université Libre de Bruxelles); Lucrezia Reichlin (European Central Bank); David H Small (Federal Reserve Board)
    Abstract: This paper formalizes the process of updating the nowcast and forecast on output and inflation as new releases of data become available. The marginal contribution of a particular release for the value of the signal and its precision is evaluated by computing "news" on the basis of an evolving conditioning information set. The marginal contribution is then split into what is due to timeliness of information and what is due to economic content. We find that the Federal Reserve Bank of Philadelphia surveys have a large marginal impact on the nowcast of both inflation variables and real variables, and this effect is larger than that of the Employment Report. When we control for timeliness of the releases, the effect of hard data becomes sizeable. Prices and quantities affect the precision of the estimates of inflation, while GDP is only affected by real variables and interest rates.
    JEL: E52 C33 C53
    Date: 2007–02–02
  5. By: Jan Jacobs (Faculty of Economics University of Groningen); Jan-Egbert Sturm (KOF, ETH Zurich, Switzerland and CESifo, Munich, Germany)
    Abstract: First estimates of trade account statistics attract considerable attention in the media, as they contain substantial information on recent economic developments. It is well known, however, that subsequent revisions of this series in particular can have substantial consequences for ex post evaluations of the economy. Since Switzerland is a small open economy, its overall growth as measured by GDP is particularly prone to these revisions. This paper sets up a real-time dataset, which is then used to analyze to what extent the first release of current account data (as compared to its revision) contains a structural bias and/or can be improved upon by the use of survey results gathered by the KOF at ETH Zurich. If this is the case, it would allow for improvements in future first releases and thereby enhance the current assessment of the Swiss economy.
    Keywords: current account statistics, real-time analysis, data revision
    JEL: C22 C53 C82
    Date: 2007–02–02
  6. By: Alastair Cunningham (Bank of England); Chris Jeffery (Bank of England); George Kapetanios (Queen Mary and Westfield College and Bank of England); Vincent Labhard (European Central Bank)
    Abstract: The paper describes the challenges that uncertainty over the true value of key macroeconomic variables poses for policymakers and the way in which they may form and update their priors in light of a range of indicators. Specifically, it casts the data uncertainty challenge in state space form and illustrates, in this setting, how the policymaker’s data uncertainty problem is related to any constraints that an optimising statistical agency might face in resolving its own data uncertainty challenge. The paper uses this intuition to motivate a set of identifying assumptions that might be used in the practical application of the Kalman filter to form and update priors on the basis of a variety of indicators. In doing so, it moves beyond the simple methodology for deriving "best guesses" of the true value of economic variables outlined in Ashley, Driver, Hayes, and Jeffery (2005).
    Date: 2007–02–02
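The state-space intuition behind this paper can be made concrete with a single scalar Kalman filter measurement update. The sketch below is a minimal illustration, not the paper's model: it combines a Gaussian prior over the true value of a variable with one noisy data release, and all numbers are invented.

```python
def kalman_update(prior_mean, prior_var, obs, obs_var):
    """One scalar Kalman measurement update: combine a Gaussian prior
    over the true value with a noisy data release (minimal sketch)."""
    gain = prior_var / (prior_var + obs_var)            # Kalman gain
    post_mean = prior_mean + gain * (obs - prior_mean)  # shrink toward obs
    post_var = (1 - gain) * prior_var                   # reduced uncertainty
    return post_mean, post_var

# invented numbers: prior growth 2.0 with variance 1.0; the first
# release reads 2.8 and carries measurement variance 1.0
nowcast, nowcast_var = kalman_update(2.0, 1.0, 2.8, 1.0)
```

Each further indicator release is absorbed by applying the same update with the posterior as the new prior, which is how the Kalman filter forms and updates the policymaker's priors over time.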
  7. By: Georgios Chortareas (University of Essex); John Nankervis (University of Essex); Ying Jiang (University of Essex)
    Abstract: This paper focuses on forecasting the volatility of high frequency Euro exchange rates. Four 15-minute frequency Euro exchange rate series, namely Euro/CHF, Euro/GBP, Euro/JPY and Euro/USD, are used to test the forecast performance of six models, including both traditional time series volatility models and the realized volatility model. In addition to the commonly used regression and accuracy tests, an equal accuracy test (the HLN-DM test) and a superior predictive ability test are also employed in the out-of-sample forecast evaluation. The FIGARCH model is found to be superior in almost all exchange rate series. Although the widely preferred ARFIMA model shows better performance than the traditional daily volatility models, generally speaking it cannot surpass the FIGARCH model and the intraday GARCH model. Furthermore, the SVX model does not significantly outperform the SV model in the accuracy test, which contradicts the results of some earlier research. The paper confirms the advantage of using high frequency data and modelling the long memory factor. It also analyses the characteristics of Euro exchange rates and compares the test results with the conclusions drawn by previous studies.
    Keywords: exchange rates, volatility, euro, high frequency
    JEL: F31 C22
    Date: 2007–02–02
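The realized volatility model referred to in this abstract has a very simple core: a day's volatility is estimated as the square root of the sum of squared intraday returns. The sketch below simulates one day of 15-minute returns (96 intervals); all numbers are invented for illustration.

```python
import math
import random

def realized_vol(intraday_returns):
    """Daily realized volatility: the square root of the sum of squared
    intraday returns over one trading day."""
    return math.sqrt(sum(r * r for r in intraday_returns))

# one simulated day of 96 fifteen-minute returns whose true daily
# volatility is 0.01 (illustrative numbers only)
random.seed(0)
day = [random.gauss(0, 0.01 / math.sqrt(96)) for _ in range(96)]
rv = realized_vol(day)  # a noisy estimate of the true 0.01
```

Absent market microstructure noise, the estimate sharpens as the sampling interval shrinks, which is the rationale for using high frequency data in the paper's comparison.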
  8. By: Luciana Juvenal (University of Warwick); Mark P. Taylor (University of Warwick, Centre for Economic Policy Research)
    Abstract: Using Self-Exciting Threshold Autoregressive (SETAR) models, this paper explores the validity of the Law of One Price (LOOP) for nineteen sectors in ten European countries. We find strong evidence of nonlinear mean reversion in deviations from the LOOP. We highlight the importance of modelling the real exchange rate in a nonlinear fashion in an attempt to solve the PPP puzzle. Using the US dollar as a reference currency, half-life estimates range from nine to sixteen months (country averages), which are significantly lower than the 'consensus estimates' of three to five years. The results also show that transaction costs differ enormously across sectors and countries.
    Keywords: Law of One Price, mean reversion, nonlinearities, thresholds
    JEL: F31 F41 C22
    Date: 2007–02–02
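Half-lives in a threshold model such as the SETAR are regime-dependent and are typically computed by simulation. The sketch below is a stylised band-TAR, not the paper's estimated specification: the threshold, outer-regime persistence, and shock size are all invented. It measures how long the average response to a shock takes to fall below half its initial size.

```python
import random

def setar_halflife(threshold, rho_out, shock, horizon=200,
                   n_sims=2000, seed=0):
    """Monte Carlo half-life of a deviation in a stylised band-SETAR:
    a random walk inside the band, AR(1) reversion outside it.
    All parameter values here are invented for illustration."""
    rng = random.Random(seed)
    mean_path = [0.0] * horizon
    for _ in range(n_sims):
        q = shock
        for h in range(horizon):
            mean_path[h] += q / n_sims
            rho = rho_out if abs(q) > threshold else 1.0
            q = rho * q + rng.gauss(0, 0.1)
    # first horizon at which the average response halves
    return next((h for h, v in enumerate(mean_path) if v < shock / 2),
                horizon)

half_life = setar_halflife(threshold=0.5, rho_out=0.8, shock=2.0)
```

Because reversion only operates outside the band, small deviations are highly persistent while large ones die out quickly, which is the mechanism behind the shorter half-life estimates reported in the abstract.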
  9. By: Richard T. Baillie (Michigan State University and Queen Mary, University of London); Young-Wook Han (Hallym University, Chunchon); Robert J. Myers (Michigan State University); Jeongseok Song (Chung-Ang University, Seoul)
    Abstract: Daily futures returns on six important commodities are found to be well described as FIGARCH fractionally integrated volatility processes, with small departures from the martingale-in-mean property. The paper also analyzes several years of high frequency intraday commodity futures returns and finds very similar long memory in volatility features at this higher frequency level. Semiparametric local Whittle estimation of the long memory parameter supports the conclusions. Estimating the long memory parameter across many different data sampling frequencies provides consistent estimates, suggesting that the series are self-similar. The results have important implications for future empirical work using commodity price and returns data.
    Keywords: Commodity returns, Futures markets, Long memory, FIGARCH
    JEL: C4 C22
    Date: 2007–04
  10. By: Marcelo Fernandes (Economics Department, Queen Mary, University of London); Marcelo Cunha Medeiros (Department of Economics PUC-Rio); Alvaro Veiga (Department of Economics,PUC-Rio)
    Abstract: In this paper, we propose a class of ACD-type models that accommodates overdispersion, intermittent dynamics, multiple regimes, and sign and size asymmetries in financial durations. In particular, our functional coefficient autoregressive conditional duration (FC-ACD) model relies on a smooth-transition autoregressive specification. The motivation lies in the fact that the latter yields a universal approximation if one lets the number of regimes grow without bound. After establishing that the sufficient conditions for strict stationarity do not exclude explosive regimes, we address model identifiability as well as the existence, consistency, and asymptotic normality of the quasi-maximum likelihood (QML) estimator for the FC-ACD model with a fixed number of regimes. In addition, we discuss how to consistently estimate, using a sieve approach, a semiparametric variant of the FC-ACD model that takes the number of regimes to infinity. An empirical illustration indicates that our functional coefficient model is flexible enough to model IBM price durations.
    Keywords: explosive regimes, quasi-maximum likelihood, sieve estimation, smooth transition, stationarity.
    JEL: C22 C41
    Date: 2006–12
  11. By: Siddhartha Chib (Olin School of Business, Washington University in St. Louis); Yasuhiro Omori (Faculty of Economics, University of Tokyo); Manabu Asai (Faculty of Economics, Soka University)
    Abstract: The success of univariate stochastic volatility (SV) models in relation to univariate GARCH models has spurred an enormous interest in generalizations of SV models to a multivariate setting. A large number of multivariate SV (MSV) models are now available along with clearly articulated estimation recipes. Our goal in this paper is to provide the first detailed summary of the various model formulations, along with connections and differences, and discuss how the models are estimated. We aim to show that the developments and achievements in this area represent one of the great success stories of financial econometrics.
    Date: 2007–04
  12. By: John M Maheu; Stephen Gordon
    Abstract: We provide a general methodology for forecasting in the presence of structural breaks induced by unpredictable changes to model parameters. Bayesian methods of learning and model comparison are used to derive a predictive density that takes into account the possibility that a break will occur before the next observation. Estimates for the posterior distribution of the most recent break are generated as a by-product of our procedure. We discuss the importance of using priors that accurately reflect the econometrician's opinions as to what constitutes a plausible forecast. Several applications to macroeconomic time-series data demonstrate the usefulness of our procedure.
    Keywords: Bayesian Model Averaging, Markov Chain Monte Carlo, Real GDP Growth, Phillips Curve
    JEL: C53 C22 C11
    Date: 2007–03–30
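The central object in this approach is a predictive density that averages over whether a break occurs before the next observation. The sketch below is purely illustrative, with known variances and an invented break probability; the paper's procedure learns these quantities from the data.

```python
import math

def break_predictive_density(y, post_mean, post_sd, prior_mean,
                             prior_sd, p_break):
    """Predictive density mixing the no-break posterior with the prior
    that applies if a break occurs before the next observation (a
    stylised sketch with known variances)."""
    def normal_pdf(v, mu, sd):
        return (math.exp(-0.5 * ((v - mu) / sd) ** 2)
                / (sd * math.sqrt(2 * math.pi)))

    return ((1 - p_break) * normal_pdf(y, post_mean, post_sd)
            + p_break * normal_pdf(y, prior_mean, prior_sd))

# invented numbers: 5% chance of a break before the next observation;
# the post-break parameters are drawn from the wider prior
density = break_predictive_density(0.5, 2.0, 0.5, 0.0, 2.0, 0.05)
```

Even a small break probability fattens the tails of the predictive density relative to the no-break posterior, which is why accounting for breaks matters for forecasting.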
  13. By: Patrick Richard (GREDI, Département d'économique, Université de Sherbrooke)
    Abstract: We study the use of the sieve bootstrap to conduct ADF unit root tests when the time series' first difference is a general linear process that admits an infinite moving average form. The work of Park (2002) and Chang and Park (2003) suggests that the usual autoregressive (AR) sieve bootstrap provides some accuracy gains under the null hypothesis. The magnitude of this improvement, however, depends on the nature of the true DGP. For example, the AR sieve test over-rejects almost as much as the asymptotic one when the DGP contains a strong negative moving average root. This lack of robustness is, of course, caused by the poor quality of the AR approximation. We attempt to reduce this problem by proposing sieve bootstraps based on moving average (MA) and autoregressive-moving average (ARMA) approximations. Though this is a natural generalisation of the standard AR sieve bootstrap, it has never been suggested in the econometrics literature. Two important theoretical results are shown. First, we establish invariance principles for the partial sum processes built from invertible MA and stationary and invertible ARMA sieve bootstrap DGPs. Second, these are used to prove the asymptotic validity of the resulting ADF bootstrap tests. Through Monte Carlo experiments, we find that the rejection probability of the MA sieve bootstrap is more robust to the DGP than that of the AR sieve bootstrap. We also find that the ARMA sieve bootstrap requires only a very parsimonious specification to achieve excellent results.
    Keywords: Sieve bootstrap, Unit root, ADF tests, ARMA approximations, Invariance Principle
    JEL: C12 C22
    Date: 2007
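The standard AR sieve bootstrap that the paper generalises can be sketched as follows: fit an autoregression to the first differences, resample the centred residuals, and re-integrate so that the unit root is imposed under the null. For readability the sketch fixes the order at AR(1); the actual sieve lets the lag order grow with the sample size, and the series below is simulated for illustration.

```python
import random

def ar1_sieve_bootstrap(x, n_boot=99, seed=0):
    """AR(1) sieve bootstrap of a unit-root series under the null
    (a fixed-order sketch of the AR sieve idea)."""
    rng = random.Random(seed)
    dx = [x[t] - x[t - 1] for t in range(1, len(x))]
    y, z = dx[1:], dx[:-1]
    # OLS slope of dx_t on dx_{t-1}
    phi = sum(a * b for a, b in zip(y, z)) / sum(b * b for b in z)
    resid = [a - phi * b for a, b in zip(y, z)]
    mu = sum(resid) / len(resid)
    resid = [e - mu for e in resid]          # centre the residuals
    samples = []
    for _ in range(n_boot):
        x_star, d = [x[0]], 0.0
        for e in rng.choices(resid, k=len(dx)):
            d = phi * d + e                  # AR(1) law for the differences
            x_star.append(x_star[-1] + d)    # integrate: unit root imposed
        samples.append(x_star)
    return samples

# a simulated unit-root series to resample
random.seed(1)
series = [0.0]
for _ in range(199):
    series.append(series[-1] + random.gauss(0, 1))
boot = ar1_sieve_bootstrap(series)
```

The bootstrap ADF distribution is then obtained by computing the test statistic on each resampled series; the paper's MA and ARMA sieves replace the autoregressive law for the differences with MA or ARMA recursions.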
  14. By: Ozun, Alper; Cifter, Atilla
    Abstract: Long-term memory effects in stock prices, if any, can be captured with alternative models. While the Geweke and Porter-Hudak (1983) test models long memory with an OLS estimator, a newer approach based on wavelet analysis provides a WOLS estimator of the memory effect. This article examines the long-term memory of the Istanbul Stock Index with the Daubechies-20, Daubechies-12, Daubechies-4 and Haar wavelets, and compares the results of the WOLS estimators with those of the OLS estimator based on the Geweke and Porter-Hudak test. While the results of the GPH test imply that the stock returns are memoryless, fractional integration parameters based on the Daubechies wavelets show that there is an explicit long-memory effect in the stock returns. The research results have important methodological and practical implications. On the theoretical side, the wavelet-based OLS estimator is superior in modeling the behaviour of stock returns in emerging markets, where nonlinearities and high volatility exist due to their chaotic nature. On the practical side, the results show that the Istanbul Stock Exchange is not weak-form efficient, because the prices have memories that are not yet reflected in current prices.
    Keywords: Long-term memory; Wavelets; Stock prices; GPH test
    JEL: C45 G12
    Date: 2007–02–01
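The GPH test used in this paper is an OLS regression of log-periodogram ordinates on a log-frequency transform; minus the slope estimates the fractional-integration parameter d. A self-contained sketch follows, with an invented bandwidth and simulated series rather than the paper's stock index data.

```python
import cmath
import math
import random

def gph_estimate(x, m):
    """Geweke and Porter-Hudak (1983) log-periodogram estimate of the
    fractional-integration parameter d (illustrative sketch)."""
    n = len(x)
    xs, ys = [], []
    for j in range(1, m + 1):
        lam = 2 * math.pi * j / n
        # periodogram ordinate at the j-th Fourier frequency
        s = sum(x[t] * cmath.exp(-1j * lam * t) for t in range(n))
        ys.append(math.log(abs(s) ** 2 / (2 * math.pi * n)))
        xs.append(math.log(4 * math.sin(lam / 2) ** 2))
    xbar, ybar = sum(xs) / m, sum(ys) / m
    slope = (sum((a - xbar) * (b - ybar) for a, b in zip(xs, ys))
             / sum((a - xbar) ** 2 for a in xs))
    return -slope  # d is minus the OLS slope

random.seed(0)
white = [random.gauss(0, 1) for _ in range(2048)]
d_white = gph_estimate(white, m=64)  # near 0: a memoryless series
walk, level = [], 0.0
for r in white:
    level += r
    walk.append(level)
d_walk = gph_estimate(walk, m=64)    # near 1: an integrated series
```

The wavelet-based WOLS estimator that the paper favours replaces the periodogram ordinates with wavelet-scale variances but keeps the same log-regression logic.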
  15. By: Ozun, Alper; Cifter, Atilla
    Abstract: Complexity in the financial markets requires intelligent forecasting models for return volatility. In this paper, historical simulation, GARCH, GARCH with skewed Student's t distribution and asymmetric normal mixture GJR-GARCH models are combined with the Extreme Value Theory Hill estimator, using artificial neural networks with a genetic algorithm as the combination platform. Employing daily closing values of the Istanbul Stock Exchange from 01/10/1996 to 11/07/2006, Kupiec and Christoffersen tests are performed as the back-testing mechanisms for forecast comparison of the models. Empirical findings show that the fat tails are more properly captured by the combination of GARCH with skewed Student's t distribution and the Extreme Value Theory Hill estimator. Modeling return volatility in emerging markets needs “intelligent” combinations of Value-at-Risk models to capture the extreme movements in the markets, rather than individual model forecasts.
    Keywords: Forecast combination; Artificial neural networks; GARCH models; Extreme value theory; Christoffersen test
    JEL: G0 C52 C32
    Date: 2007–02–01
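The Kupiec back-test used here is a likelihood-ratio test of unconditional coverage: it compares the observed VaR exception rate with the nominal tail probability. A minimal sketch follows; the day counts in the example are invented, not taken from the paper.

```python
import math

def kupiec_lr(n_obs, n_exceptions, p):
    """Kupiec unconditional-coverage LR statistic, asymptotically
    chi-square with one degree of freedom under correct coverage."""
    x, n = n_exceptions, n_obs

    def loglik(q):
        # binomial log-likelihood, guarding the x = 0 and x = n edges
        ll = 0.0
        if x < n:
            ll += (n - x) * math.log(1 - q)
        if x > 0:
            ll += x * math.log(q)
        return ll

    return -2 * (loglik(p) - loglik(x / n))

# invented example: 8 exceptions in 250 days at 99% VaR (2.5 expected)
lr_stat = kupiec_lr(250, 8, 0.01)
reject = lr_stat > 3.84  # 5% critical value of chi-square(1)
```

The Christoffersen test extends this by also testing whether the exceptions are independent over time rather than clustered.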
  16. By: Cifter, Atilla; Ozun, Alper
    Abstract: The purpose of this study is to test the predictive performance of the asymmetric normal mixture GARCH (NMAGARCH) model against other GARCH models, based on Kupiec and Christoffersen tests, for the Turkish equity market. The empirical results show that NMAGARCH performs better in the out-of-sample Christoffersen test at the 99% confidence interval, whereas GARCH with normal and Student's t distributions performs better in the out-of-sample Christoffersen and Kupiec tests at the 95% confidence interval. These results show that no model, including NMAGARCH, outperforms the others in all cases defined by trading position or confidence interval, and that the volatility model should therefore be chosen according to the confidence interval and the trading position. Moreover, NMAGARCH increases predictive performance for the higher confidence interval, as Basel requires.
    Keywords: GARCH; Asymmetric normal mixture GARCH; Kupiec test; Christoffersen test; Emerging markets
    JEL: G00 C52 C32
    Date: 2007–01–01
  17. By: Chin Nam Low; Heather Anderson; Ralph Snyder
    Abstract: This paper considers Beveridge-Nelson decomposition in a context where the permanent and transitory components both follow a Markov switching process. Our approach incorporates Markov switching into a single source of error state-space framework, allowing for business cycle asymmetries and regime switches in the long-run multiplier.
    JEL: C22 C51 E32
    Date: 2006–07
  18. By: Roland Döhrn
    Abstract: This paper addresses the question of whether forecasters could have produced better forecasts by using the available information more efficiently (informational efficiency of forecasts). It tests whether forecast errors co-vary with indicators such as survey results, monetary data, business cycle indicators, or financial data. Because of the short sampling period and data problems, a nonparametric ranked sign test is applied. The analysis is carried out for GDP and its main components. The study differentiates between two types of errors: a type I error occurs when forecasters neglect the information provided by an indicator, while a type II error labels a situation in which forecasters have given too much weight to an indicator. In a number of cases forecast errors and the indicators are correlated, though mostly at a rather low level of significance. In most cases type I errors have been found. Additional tests reveal that there is little evidence of institution-specific or forecast-horizon-specific effects. In many cases, co-variations found for GDP are not reflected in any of the expenditure-side components, and vice versa.
    Keywords: Short term forecast, Forecast evaluation, informational efficiency
    JEL: E37 C53 C42
    Date: 2006–10
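The informational-efficiency idea in this paper can be illustrated with a plain sign test (a simplification; the paper applies a ranked sign test): under efficiency, the sign of the forecast error should be unrelated to the sign of the indicator's signal. The function name and example data below are invented.

```python
import math

def sign_test(errors, signals):
    """Two-sided sign test of informational efficiency (a simplified
    sketch of the paper's ranked sign test). Under the null the sign of
    the forecast error is unrelated to the sign of the indicator."""
    pairs = [(e, s) for e, s in zip(errors, signals) if e * s != 0]
    agree = sum(1 for e, s in pairs if e * s > 0)
    n = len(pairs)
    # normal approximation to the Binomial(n, 1/2) agreement count
    z = (agree - n / 2) / math.sqrt(n / 4)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# invented example: GDP forecast errors that mostly share the sign of a
# hypothetical survey signal, pointing to neglected information
errors = [0.5, 0.2, 0.8, -0.1, 0.4, 0.3, 0.6, 0.2, 0.5, 0.7] * 3
signals = [1.0] * 30
z_stat, p_value = sign_test(errors, signals)  # small p: type I error
```

A significant positive association here corresponds to the paper's type I error: the indicator carried information that the forecasters ignored.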

This nep-ets issue is ©2007 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found on the NEP homepage. For comments, please write to the director of NEP, Marco Novarese. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.