
on Econometric Time Series 
By:  Hilde C. Bjørnland (University of Oslo and Norges Bank (Central Bank of Norway)); Leif Brubakk (Norges Bank (Central Bank of Norway)); Anne Sofie Jore (Norges Bank (Central Bank of Norway)) 
Abstract:  The output gap (measuring the deviation of output from its potential) is a crucial concept in the monetary policy framework, indicating demand pressure that generates inflation. The output gap is also an important variable in itself, as a measure of economic fluctuations. However, its definition and estimation raise a number of theoretical and empirical questions. This paper evaluates a series of univariate and multivariate methods for extracting the output gap, and compares their value added in predicting inflation. The multivariate measures of the output gap have by far the best predictive power. This is particularly interesting, as they use information from data that are not revised in real time. We therefore compare the predictive power of alternative indicators that are less revised in real time, such as the unemployment rate and other business cycle indicators. Some of the alternative indicators do as well as, or better than, the multivariate output gaps in predicting inflation. As uncertainties are particularly pronounced at the end of the calculation periods, assessments of pressures in the economy based on the uncertain output gap could benefit from being supplemented with alternative indicators that are less revised in real time. 
Keywords:  Output gap, real time indicators, forecasting, Phillips curve 
JEL:  C32 E31 E32 E37 
Date:  2006–03–17 
URL:  http://d.repec.org/n?u=RePEc:bno:worpap:2006_02&r=ets 
By:  Luc, BAUWENS (UNIVERSITE CATHOLIQUE DE LOUVAIN, Center for Operations Research and Econometrics (CORE)); J.V.K., ROMBOUTS 
Abstract:  We estimate by Bayesian inference the mixed conditional heteroskedasticity model of Haas, Mittnik and Paolella (2004a). We construct a Gibbs sampler algorithm to compute posterior and predictive densities. The number of mixture components is selected by the marginal likelihood criterion. We apply the model to the S&P 500 daily returns. 
Keywords:  Finite mixture; ML estimation; Bayesian inference; Value at Risk 
JEL:  C11 C15 C32 
Date:  2005–12–01 
URL:  http://d.repec.org/n?u=RePEc:ctl:louvec:2005058&r=ets 
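For intuition about the model being estimated, here is a minimal simulation of a two-component mixed-normal GARCH(1,1) in the spirit of Haas, Mittnik and Paolella (2004a): each component has its own variance recursion, all driven by the common return shock. Parameter values are illustrative, not estimates, and the Gibbs sampler itself (which would augment the data with component indicators) is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

pi = np.array([0.8, 0.2])            # mixture weights
omega = np.array([0.05, 0.30])       # per-component GARCH parameters
alpha = np.array([0.05, 0.15])
beta = np.array([0.90, 0.80])

T = 2000
h = np.array([1.0, 2.0])             # initial component variances
eps = np.zeros(T)
for t in range(T):
    k = rng.choice(2, p=pi)          # draw the active component
    eps[t] = np.sqrt(h[k]) * rng.standard_normal()
    h = omega + alpha * eps[t] ** 2 + beta * h   # update BOTH recursions
```

Note that both component variances are updated every period with the same realized shock; only the draw of the indicator decides which one generates the return.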
By:  Luc, BAUWENS (UNIVERSITE CATHOLIQUE DE LOUVAIN, Center for Operations Research and Econometrics (CORE)); Arie, PREMINGER (UNIVERSITE CATHOLIQUE DE LOUVAIN, Center for Operations Research and Econometrics (CORE)); Jeroen, ROMBOUTS 
Abstract:  We develop univariate regime-switching GARCH (RS-GARCH) models wherein the conditional variance switches in time from one GARCH process to another. The switching is governed by a time-varying probability, specified as a function of past information. We provide sufficient conditions for stationarity and existence of moments. Because of path dependence, maximum likelihood estimation is infeasible. By enlarging the parameter space to include the state variables, Bayesian estimation using a Gibbs sampling algorithm is feasible. We apply this model to the NASDAQ daily returns series. 
Keywords:  GARCH; regime switching; Bayesian inference 
JEL:  C11 C22 C52 
Date:  2006–02–20 
URL:  http://d.repec.org/n?u=RePEc:ctl:louvec:2006006&r=ets 
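A minimal simulation of the kind of process the paper studies: two GARCH regimes, with a switching probability that depends on past information (here the last absolute return, through a logistic link). All parameters are invented for the sketch; the paper's Bayesian estimation step is not attempted.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Regime-specific GARCH(1,1) parameters (illustrative).
omega = np.array([0.02, 0.20])
alpha = np.array([0.03, 0.15])
beta = np.array([0.95, 0.75])

T = 1500
eps = np.zeros(T)
s, h = 0, 1.0
for t in range(1, T):
    # time-varying switching probability: rises with the last shock
    p_switch = sigmoid(-2.0 + 1.5 * abs(eps[t - 1]))
    if rng.random() < p_switch:
        s = 1 - s
    h = omega[s] + alpha[s] * eps[t - 1] ** 2 + beta[s] * h
    eps[t] = np.sqrt(h) * rng.standard_normal()
```

The path dependence is visible in the recursion: h depends on the whole regime history, so the likelihood sums over all 2^T regime paths, which is why the paper resorts to Gibbs sampling with the states included in the parameter space.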
By:  Luc, BAUWENS (UNIVERSITE CATHOLIQUE DE LOUVAIN, Center for Operations Research and Econometrics (CORE)); C.M., HAFNER; J.V.K., ROMBOUTS 
Abstract:  We propose a new multivariate volatility model where the conditional distribution of a vector time series is given by a mixture of multivariate normal distributions. Each of these distributions is allowed to have a time-varying covariance matrix. The process can be globally covariance-stationary even though some components are not covariance-stationary. We derive some theoretical properties of the model, such as the unconditional covariance matrix and the autocorrelations of squared returns. The complexity of the model requires a powerful estimation algorithm. In a simulation study we compare maximum likelihood estimation with the EM algorithm and Bayesian estimation with a Gibbs sampler. Finally, we apply the model to daily U.S. stock returns. 
Keywords:  Multivariate volatility; Finite mixture; EM algorithm; Bayesian inference 
JEL:  C11 C22 C52 
Date:  2006–02–20 
URL:  http://d.repec.org/n?u=RePEc:ctl:louvec:2006007&r=ets 
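One of the derived quantities, the unconditional covariance matrix, is easy to verify numerically in the static case: for a finite mixture of normals, Cov(y) = sum_k pi_k (Sigma_k + mu_k mu_k') - mu_bar mu_bar'. The sketch checks this identity by Monte Carlo with made-up parameters (the paper derives the analogous result for the dynamic model).

```python
import numpy as np

rng = np.random.default_rng(2)

pi = np.array([0.7, 0.3])                       # mixture weights
mu = np.array([[0.1, 0.0], [-0.2, 0.1]])        # component means
Sig = np.array([[[1.0, 0.3], [0.3, 1.0]],       # component covariances
                [[4.0, -0.5], [-0.5, 2.0]]])

# Analytic unconditional covariance of the mixture.
mu_bar = pi @ mu
cov = sum(p * (S + np.outer(m, m)) for p, m, S in zip(pi, mu, Sig))
cov -= np.outer(mu_bar, mu_bar)

# Monte Carlo check: draw from the mixture, compare sample covariance.
n = 200_000
ks = rng.choice(2, size=n, p=pi)
y = np.empty((n, 2))
for j in range(2):
    idx = ks == j
    y[idx] = rng.multivariate_normal(mu[j], Sig[j], size=idx.sum())
sample_cov = np.cov(y, rowvar=False)
```

The mixture covariance is a weighted average of the component covariances plus a between-means term, which is why the mixture can be covariance-stationary overall even when individual components are not.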
By:  Pesavento, Elena; Rossi, Barbara 
Abstract:  This paper is a comprehensive comparison of existing methods for constructing confidence bands for univariate impulse response functions in the presence of high persistence. Monte Carlo results show that Kilian (1998a), Wright (2000), Gospodinov (2004) and Pesavento and Rossi (2005) have favorable coverage properties, although they differ in terms of robustness at various horizons, median unbiasedness, and reliability in the possible presence of a unit or mildly explosive root. On the other hand, methods like Runkle’s (1987) bootstrap, Andrews and Chen (1994), and regressions in levels or first differences (even when based on pretests) may not have accurate coverage properties. The paper makes recommendations as to the appropriateness of each method in empirical work. 
Keywords:  Local to unity asymptotics, persistence, impulse response functions 
JEL:  C1 C2 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:duk:dukeec:0603&r=ets 
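Of the compared methods, the plain residual bootstrap (Runkle, 1987) is the simplest to sketch for a univariate AR(1); the paper finds its coverage can be inaccurate under high persistence, which this setup makes easy to explore. Sample size, horizon, and percentiles are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a persistent AR(1).
T, rho_true = 200, 0.9
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho_true * y[t - 1] + rng.standard_normal()

def ols_ar1(x):
    ylag, ycur = x[:-1], x[1:]
    return (ylag @ ycur) / (ylag @ ylag)

rho_hat = ols_ar1(y)
resid = y[1:] - rho_hat * y[:-1]

# Residual bootstrap: resample residuals, rebuild series, re-estimate,
# and collect the implied impulse responses rho^h.
H, B = 12, 500
irfs = np.empty((B, H + 1))
for b in range(B):
    e = rng.choice(resid, size=T - 1, replace=True)
    yb = np.zeros(T)
    for t in range(1, T):
        yb[t] = rho_hat * yb[t - 1] + e[t - 1]
    rb = ols_ar1(yb)
    irfs[b] = rb ** np.arange(H + 1)

lo, hi = np.percentile(irfs, [5, 95], axis=0)   # pointwise 90% band
```

Because rho_hat is biased downward in small samples and the band is centered on it, coverage deteriorates as the root approaches unity, which is the distortion the bias-corrected and local-to-unity methods in the paper address.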
By:  Ekkehart Schlicht (University of Munich and IZA Bonn); Johannes Ludsteck (Institute for Employment Research (IAB)) 
Abstract:  This paper describes an estimator for a standard state-space model with coefficients generated by a random walk that is statistically superior to the Kalman filter as applied to this particular class of models. Two closely related estimators for the variances are introduced: a maximum likelihood estimator and a moments estimator built on the idea that certain moments are equated to their expectations. These estimators perform quite similarly in many cases. In some cases, however, the moments estimator is preferable both to the proposed likelihood estimator and to the Kalman filter as implemented in the program package EViews. 
Keywords:  time-varying coefficients, adaptive estimation, Kalman filter, state-space 
JEL:  C2 C22 C51 C52 
Date:  2006–03 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp2031&r=ets 
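The benchmark being improved upon is the Kalman filter applied to a regression with a random-walk coefficient. A scalar sketch with the variances treated as known (the variance estimation step, where the paper's contribution lies, is omitted; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Model: y_t = x_t * b_t + e_t,  b_t = b_{t-1} + v_t.
T, sig_e2, sig_v2 = 300, 1.0, 0.01
x = rng.standard_normal(T) + 1.0
b = np.cumsum(np.sqrt(sig_v2) * rng.standard_normal(T))   # true coefficient path
y = x * b + np.sqrt(sig_e2) * rng.standard_normal(T)

b_hat, P = 0.0, 10.0            # diffuse-ish prior on the coefficient
b_filt = np.empty(T)
for t in range(T):
    P = P + sig_v2               # predict: coefficient drifts
    S = x[t] ** 2 * P + sig_e2   # innovation variance
    K = P * x[t] / S             # Kalman gain
    b_hat = b_hat + K * (y[t] - x[t] * b_hat)   # update
    P = (1.0 - K * x[t]) * P
    b_filt[t] = b_hat
```

With sig_e2 and sig_v2 unknown, they must themselves be estimated, which is exactly where the paper's maximum likelihood and moments estimators come in.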
By:  Crescenzio Gallo; Giancarlo De Stasio; Cristina Di Letizia 
Abstract:  The study of Artificial Neural Networks grew out of early attempts to translate the principles of biological information processing into mathematical models. An Artificial Neural Network aims to generate, as quickly as possible, an implicit, predictive model of the evolution of a system. In particular, it derives from experience the ability to recognize certain behaviours or situations and to “suggest” how to take them into account. This work illustrates an approach to using Artificial Neural Networks for financial modelling; we explore the structural differences (and implications) between one- and multi-agent models and between one- and multi-population models. In one-population models, ANNs serve as forecasting devices for wealth-maximizing agents (agents make decisions so as to maximize utility, using non-linear models for forecasting), whereas in multi-population models agents do not follow predetermined rules but tend to create their own behavioural rules as market data are collected. It is particularly important to analyze the differences between one-agent and one-population models: in building a one-population model, market equilibrium can be determined endogenously, which is not possible in a one-agent model, where all environmental characteristics are taken as given and beyond the control of the single agent. 
Keywords:  artificial neural network, financial modelling, population model, market equilibrium 
JEL:  C53 C69 C90 D58 
Date:  2006–01 
URL:  http://d.repec.org/n?u=RePEc:ufg:qdsems:022006&r=ets 
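In the one-population setting described above, the ANN's role is that of a one-step-ahead forecasting device. A minimal one-hidden-layer network trained by full-batch gradient descent on a simulated return series illustrates that role; the architecture, data, and learning rate are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(8)

# Simulated "return" series with some linear predictability (AR(2)).
T = 500
r = np.zeros(T)
for t in range(2, T):
    r[t] = 0.5 * r[t - 1] - 0.3 * r[t - 2] + 0.1 * rng.standard_normal()

X = np.column_stack([r[1:-1], r[:-2]])   # inputs: lags 1 and 2
y = r[2:]                                # target: next-period return
n, L, H = len(y), 2, 8                   # obs, input lags, hidden units

W1 = 0.1 * rng.standard_normal((L, H)); b1 = np.zeros(H)
w2 = 0.1 * rng.standard_normal(H);      b2 = 0.0

def forecast(X):
    return np.tanh(X @ W1 + b1) @ w2 + b2

mse_start = np.mean((forecast(X) - y) ** 2)
lr = 0.5
for _ in range(500):                     # full-batch gradient descent
    h = np.tanh(X @ W1 + b1)
    pred = h @ w2 + b2
    g = 2.0 * (pred - y) / n             # dMSE/dpred
    dW2, db2 = h.T @ g, g.sum()
    dz = np.outer(g, w2) * (1.0 - h ** 2)
    dW1, db1 = X.T @ dz, dz.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    w2 -= lr * dW2; b2 -= lr * db2
mse_end = np.mean((forecast(X) - y) ** 2)
```

Training error falls relative to initialization; in the multi-population models discussed above, by contrast, the mapping from market data to behaviour is not fixed in advance in this way.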
By:  Juan Carlos Escanciano 
Abstract:  This article proposes a general class of joint diagnostic tests for parametric conditional mean and variance models of possibly non-linear and/or non-Markovian time series sequences. The new tests are based on a generalized spectral approach and, contrary to existing procedures, they require neither choosing a lag order that depends on the sample size nor smoothing the data. Moreover, they are robust to higher-order dependence of unknown form. It turns out that the asymptotic null distributions of the new tests depend on the data-generating process, so a bootstrap procedure is proposed and theoretically justified. A simulation study compares the finite-sample performance of the proposed and competing tests and shows that our tests can play a valuable role in time series modelling. An application to the S&P 500 highlights the merits of our approach. 
JEL:  C12 C14 C52 
URL:  http://d.repec.org/n?u=RePEc:una:unccee:wpwp0206&r=ets 
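For contrast with the paper's approach, the classical diagnostic it improves upon needs a user-chosen lag order: a Ljung-Box portmanteau test on squared standardized residuals. The sketch below is that classical test, not the generalized spectral test itself, and it uses i.i.d. noise as a stand-in for correctly standardized residuals.

```python
import numpy as np

rng = np.random.default_rng(5)

def ljung_box(z, m):
    """Ljung-Box Q statistic at lag m for series z (mean removed)."""
    z = z - z.mean()
    n = len(z)
    denom = z @ z
    q = 0.0
    for k in range(1, m + 1):
        r_k = (z[:-k] @ z[k:]) / denom   # lag-k autocorrelation
        q += r_k ** 2 / (n - k)
    return n * (n + 2) * q

# Under a correct model, squared standardized residuals are serially
# uncorrelated, and Q at lag m is approximately chi-square(m).
z = rng.standard_normal(1000)            # i.i.d. stand-in for residuals
q10 = ljung_box(z ** 2, 10)
```

The choice m = 10 here is exactly the kind of arbitrary, sample-size-dependent tuning that the paper's lag-free spectral tests are designed to avoid.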
By:  David F. Hendry (Department of Economics, Oxford University, Manor Road Building, Manor Road, Oxford, OX1 3UQ, United Kingdom); Kirstin Hubrich (European Central Bank, Kaiserstrasse 29, Postfach 16 03 19, 60066 Frankfurt am Main, Germany.) 
Abstract:  We suggest an alternative use of disaggregate information to forecast an aggregate variable of interest: including disaggregate information, or disaggregate variables, in the aggregate model, as opposed to first forecasting the disaggregate variables separately and then aggregating those forecasts, or, alternatively, using only lagged aggregate information in forecasting the aggregate. We show theoretically that the first method of forecasting the aggregate should outperform the alternative methods in population. We investigate whether this theoretical prediction can explain our empirical findings, and analyse why forecasting the aggregate using information on its disaggregate components improves the accuracy of aggregate forecasts of euro area and US inflation in some situations, but not in others. 
Keywords:  Disaggregate information; predictability; forecast model selection; VAR; factor models 
JEL:  C51 C53 E31 
Date:  2006–02 
URL:  http://d.repec.org/n?u=RePEc:ecb:ecbwps:20060589&r=ets 
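Two of the compared strategies, lagged disaggregates in the aggregate model versus lagged aggregate only, can be illustrated with a toy DGP in which the two components have different persistence; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

# Two disaggregate components with different persistence;
# the aggregate is their sum.
T = 2000
x1 = np.zeros(T); x2 = np.zeros(T)
for t in range(1, T):
    x1[t] = 0.9 * x1[t - 1] + rng.standard_normal()
    x2[t] = 0.2 * x2[t - 1] + rng.standard_normal()
agg = x1 + x2

def one_step_mse(X, y, split):
    """Fit OLS on the first `split` rows, report test MSE on the rest."""
    b = np.linalg.lstsq(X[:split], y[:split], rcond=None)[0]
    e = y[split:] - X[split:] @ b
    return np.mean(e ** 2)

ylead = agg[1:]                                        # next-period aggregate
X_agg = np.column_stack([np.ones(T - 1), agg[:-1]])    # lagged aggregate only
X_dis = np.column_stack([np.ones(T - 1), x1[:-1], x2[:-1]])  # lagged disaggregates
mse_agg = one_step_mse(X_agg, ylead, 1500)
mse_dis = one_step_mse(X_dis, ylead, 1500)
```

Because the disaggregate regression matches the true dynamics while the single aggregate lag mixes two different persistence levels, mse_dis comes in below mse_agg here, in line with the paper's population result.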
By:  Anindya Banerjee (European University Institute, Department of Economics, Villa San Paolo, Via della Piazzuola 43, 50133 Florence, Italy.); Josep Lluís (University of Barcelona, Department of Econometrics, Statistics and Spanish Economy, Av. Diagonal 690, 08034 Barcelona, Spain.) 
Abstract:  The power of standard panel cointegration statistics may be affected by misspecification errors if proper account is not taken of the presence of structural breaks in the data. We propose modifications that allow for one structural break when testing the null hypothesis of no cointegration and that retain good properties in terms of empirical size and power. Response surfaces to approximate the finite-sample moments required to implement the statistics are provided. Since panel cointegration statistics rely on the assumption of cross-section independence, a generalisation of the tests to the common-factor framework is carried out in order to allow for dependence among the units of the panel. 
Keywords:  Panel cointegration; structural break; common factors; cross-section dependence 
JEL:  C12 C22 
Date:  2006–02 
URL:  http://d.repec.org/n?u=RePEc:ecb:ecbwps:20060591&r=ets 
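A single-unit sketch of the underlying idea, namely that neglecting a level break distorts residual-based cointegration tests, using a hand-rolled Dickey-Fuller t-statistic. The paper's actual contribution (panel statistics, response surfaces, common factors, unknown break dates) is not reproduced, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Cointegrated pair with a level shift in the relation at t = 150.
T, brk = 300, 150
x = np.cumsum(rng.standard_normal(T))               # I(1) regressor
d = (np.arange(T) >= brk).astype(float)             # level-shift dummy
y = 1.0 + 2.0 * x + 3.0 * d + rng.standard_normal(T)

def df_tstat(u):
    """Dickey-Fuller t-statistic for a unit root in residuals u."""
    du, ulag = np.diff(u), u[:-1]
    rho = (ulag @ du) / (ulag @ ulag)
    s2 = np.mean((du - rho * ulag) ** 2)
    return rho / np.sqrt(s2 / (ulag @ ulag))

def resid(X):
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    return y - X @ b

X_no = np.column_stack([np.ones(T), x])             # break ignored
X_br = np.column_stack([np.ones(T), x, d])          # break modelled
t_no = df_tstat(resid(X_no))    # typically less negative: lost power
t_br = df_tstat(resid(X_br))    # strongly negative: cointegration found
```

When the break is ignored, the level shift is left in the residuals, making them look more persistent and pushing the test toward the (false) no-cointegration conclusion, which is the size/power problem the paper's modified statistics address.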