New Economics Papers on Forecasting
By: | Katja Drechsel (University of Osnabrück, International Economic Policy, Rolandstrasse 8, D-49069 Osnabrück, Germany); Laurent Maurin (Corresponding author: European Central Bank, Kaiserstrasse 29, D-60311 Frankfurt am Main, Germany) |
Abstract: | Euro area GDP and its components are nowcast and forecast one quarter ahead. Based on a dataset of 163 series comprising the relevant monthly indicators, a simple bridge equation with one explanatory variable is estimated for each series. The individual forecasts generated by each equation are then pooled using six weighting schemes, including Bayesian ones. To take into consideration the release calendar of each indicator, six forecasts are compiled independently during the quarter, each based on a different information set: different indicators, different individual equations and, finally, different weights to aggregate information. The information content of the various blocks of information at different points in time is then discussed for each GDP component. It appears that taking into account the information flow results in significant changes in the weight allocated to each block of information, especially when the first month of hard data becomes available. This conclusion, reached for all the components and most of the weighting schemes, supports and extends the findings of Giannone, Reichlin and Small (2006) and Banbura and Ruenstler (2007). An out-of-sample forecast comparison exercise is also carried out for each component and for GDP directly. The forecast performance is found to vary widely across components. Two weighting schemes are found to outperform the equal weighting scheme in almost all cases. One quarter ahead, the direct forecast of GDP is found to outperform the bottom-up approach; however, the nowcast resulting in the lowest forecast errors is derived from the bottom-up approach. JEL Classification: C22, C53, E17. (A toy sketch of the bridge-equation pooling step follows this entry.) |
Keywords: | Large dataset, forecast pooling, weighting scheme, GDP components, out-of-sample forecast performance, bottom-up vs. direct forecast. |
Date: | 2008–08 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20080925&r=for |
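The following is a minimal, self-contained sketch of the bridge-equation pooling idea described above, not the authors' code: one single-indicator OLS equation per monthly indicator, with the individual forecasts combined under an equal-weight and an inverse-MSE weighting scheme. All data, the number of indicators, and the estimation setup are made up for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    T, K = 60, 5                             # quarters, number of hypothetical indicators
    gdp = rng.normal(0.5, 0.4, T)            # quarterly GDP growth, toy data
    ind = 0.5 * gdp[:, None] + rng.normal(size=(T, K))  # quarterly-aggregated indicators

    forecasts, mses = [], []
    for j in range(K):
        # one bridge equation per indicator: gdp_t = a + b * ind_t + error
        X = np.column_stack([np.ones(T - 1), ind[:-1, j]])   # estimate on all but the last quarter
        beta, *_ = np.linalg.lstsq(X, gdp[:-1], rcond=None)
        resid = gdp[:-1] - X @ beta
        mses.append(np.mean(resid ** 2))                     # in-sample MSE used for weighting
        forecasts.append(beta[0] + beta[1] * ind[-1, j])     # nowcast of the final quarter

    forecasts, mses = np.array(forecasts), np.array(mses)
    w_eq = np.full(K, 1.0 / K)                               # equal weights
    w_mse = (1.0 / mses) / (1.0 / mses).sum()                # inverse-MSE weights
    print("equal-weight pooled nowcast:", forecasts @ w_eq)
    print("inverse-MSE pooled nowcast :", forecasts @ w_mse)
    print("realised value             :", gdp[-1])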
By: | Antonello D’Agostino (Central Bank); Karl Whelan (University College Dublin) |
Abstract: | Using data from the period 1970-1991, Romer and Romer (2000) showed that Federal Reserve forecasts of inflation and output were superior to those provided by commercial forecasters. In this paper, we show that this superior forecasting performance deteriorated after 1991. Over the decade 1992-2001, the superior forecast accuracy of the Fed held only over a very short time horizon and was limited to its forecasts of inflation. In addition, the performance of both the Fed and the commercial forecasters in predicting inflation and output, relative to that of “naive” benchmark models, dropped remarkably during this period. (A toy sketch of this kind of benchmark comparison follows this entry.) |
Date: | 2007–12–22 |
URL: | http://d.repec.org/n?u=RePEc:ucn:wpaper:200722&r=for |
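A hedged illustration of the kind of accuracy comparison the paper performs: root-mean-squared errors of two forecast sets measured relative to a "naive" no-change benchmark. The series below are simulated stand-ins, not the Fed or commercial forecast data.

    import numpy as np

    rng = np.random.default_rng(1)
    actual = np.cumsum(rng.normal(0, 0.3, 40)) + 3.0   # e.g. an inflation series (toy)
    fed = actual + rng.normal(0, 0.20, 40)             # hypothetical "Fed" forecasts
    commercial = actual + rng.normal(0, 0.25, 40)      # hypothetical commercial forecasts
    naive = np.r_[actual[0], actual[:-1]]              # no-change benchmark forecast

    def rmse(f):
        return np.sqrt(np.mean((actual - f) ** 2))

    for name, f in [("Fed", fed), ("commercial", commercial), ("naive", naive)]:
        print(f"{name:10s} RMSE: {rmse(f):.3f}  relative to naive: {rmse(f) / rmse(naive):.2f}")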
By: | Andersson, Jonas (Dept. of Finance and Management Science, Norwegian School of Economics and Business Administration); Karlis, Dimitris (Department of Statistics, Athens University of Economics and Business) |
Abstract: | Time series models for count data have attracted increasing interest in recent years. The existing literature refers to the case of fully observed data. In the present paper, methods for estimating the parameters of the first-order integer-valued autoregressive (INAR(1)) model in the presence of missing data are proposed. The first method maximizes a conditional likelihood constructed from the observed data, based on the k-step-ahead conditional distributions, to account for the gaps in the data. The second approach is based on an iterative scheme in which missing values are imputed in order to update the estimated parameters. The first method is useful when the predictive distributions have simple forms. We derive this approach in full detail for the case where the innovations follow a finite mixture of Poisson distributions. The second method is applicable when closed-form expressions for the conditional likelihood are unavailable or hard to derive. Simulation results and comparisons of the methods are reported. The proposed methods are applied to a data set concerning syndromic surveillance during the Athens 2004 Olympic Games. (A toy sketch of the INAR(1) setting follows this entry.) |
Keywords: | Imputation; Markov Chain EM algorithm; mixed Poisson; discrete valued time series |
JEL: | C32 |
Date: | 2008–08–13 |
URL: | http://d.repec.org/n?u=RePEc:hhs:nhhfms:2008_014&r=for |
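A simplified sketch of the INAR(1) setting, assuming plain Poisson innovations (a special case of the paper's finite Poisson mixture): binomial-thinning simulation, imputation of gaps by the k-step-ahead conditional mean, and conditional-least-squares estimation. This is a stand-in for the paper's likelihood-based procedures, not a reproduction of them.

    import numpy as np

    rng = np.random.default_rng(2)
    alpha, lam, T = 0.6, 2.0, 500

    # simulate X_t = alpha o X_{t-1} + e_t, with binomial thinning "o"
    # and e_t ~ Poisson(lam)
    x = np.empty(T, dtype=int)
    x[0] = rng.poisson(lam / (1 - alpha))          # start near the stationary mean
    for t in range(1, T):
        x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)

    miss = rng.random(T) < 0.10                    # drop roughly 10% of observations
    miss[0] = False                                # keep the first point observed
    obs = np.where(miss, np.nan, x.astype(float))

    # fill each gap with the k-step-ahead conditional mean
    # E[X_{t+k} | X_t] = a^k X_t + lam (1 - a^k) / (1 - a), at crude pilot values
    a0 = 0.5
    l0 = 0.5 * np.nanmean(obs)
    filled = obs.copy()
    last, k = filled[0], 0
    for t in range(1, T):
        if np.isnan(filled[t]):
            k += 1
            filled[t] = a0 ** k * last + l0 * (1 - a0 ** k) / (1 - a0)
        else:
            last, k = filled[t], 0

    # conditional least squares on the completed series:
    # E[X_t | X_{t-1}] = lam + alpha * X_{t-1}
    X = np.column_stack([np.ones(T - 1), filled[:-1]])
    beta, *_ = np.linalg.lstsq(X, filled[1:], rcond=None)
    print("lambda_hat:", beta[0], " alpha_hat:", beta[1])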
By: | Ralf Becker; Adam Clements; Andrew McClelland |
Abstract: | Much research has investigated the differences between option implied volatilities and econometric model-based forecasts in terms of forecast accuracy and relative informational content. Implied volatility is a market-determined forecast, in contrast to model-based forecasts that employ some degree of smoothing to generate forecasts. Implied volatility therefore has the potential to reflect information that a model-based forecast cannot. Specifically, this paper considers two issues relating to the informational content of the S&P 500 VIX implied volatility index: first, whether it subsumes information on how historical jump activity contributed to price volatility; and second, whether the VIX reflects any incremental information relative to model-based forecasts about future jumps. It is found that the VIX index both subsumes information relating to past jump contributions to volatility and reflects incremental information pertaining to future jump activity, relative to model-based forecasts. This issue has not been examined previously in the literature, and the results expand our understanding of how option markets form their volatility forecasts. (A toy sketch of a jump decomposition follows this entry.) |
Keywords: | Implied volatility, VIX, volatility forecasts, informational efficiency, jumps |
JEL: | C12 C22 G00 G14 |
Date: | 2008–03–17 |
URL: | http://d.repec.org/n?u=RePEc:qut:auncer:2008-13&r=for |
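A hedged sketch of the jump decomposition that underlies tests of this kind: realized variance (RV) split into a continuous component estimated by bipower variation (BV) and a jump component max(RV - BV, 0), followed by a toy regression of an implied-variance proxy on the two parts. The returns and the implied-variance series are simulated, and the regression is illustrative, not the paper's specification.

    import numpy as np

    rng = np.random.default_rng(3)
    days, m = 250, 78                            # trading days, intraday returns per day
    r = rng.normal(0.0, 0.001, (days, m))        # diffusive intraday returns
    jump_day = rng.random(days) < 0.05
    r[jump_day, 0] += rng.normal(0.0, 0.01, jump_day.sum())  # add occasional jumps

    rv = (r ** 2).sum(axis=1)                                # realized variance
    bv = (np.pi / 2) * (np.abs(r[:, 1:]) * np.abs(r[:, :-1])).sum(axis=1)  # bipower variation
    jump = np.maximum(rv - bv, 0.0)                          # jump contribution
    cont = rv - jump                                         # continuous contribution

    # toy implied-variance proxy loading on both parts, plus noise
    vix2 = 1.2 * cont + 2.0 * jump + rng.normal(0.0, rv.std(), days)

    X = np.column_stack([np.ones(days), cont, jump])
    beta, *_ = np.linalg.lstsq(X, vix2, rcond=None)
    print("loading on continuous part:", beta[1])
    print("loading on jump part      :", beta[2])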
By: | John M Maheu; Thomas H McCurdy |
Abstract: | Many finance questions require a full characterization of the distribution of returns. We propose a bivariate model of returns and realized volatility (RV), and explore which features of that time-series model contribute to superior density forecasts over horizons of 1 to 60 days out of sample. This term structure of density forecasts is used to investigate the importance of: the intraday information embodied in the daily RV estimates; the functional form for log(RV) dynamics; the timing of information availability; and the assumed distributions of both return and log(RV) innovations. We find that a joint model of returns and volatility that features two components for log(RV) provides a good fit to S&P 500 and IBM data, and is a significant improvement over an EGARCH model estimated from daily returns. (A toy simulation of a two-component log(RV) density forecast follows this entry.) |
Keywords: | RV, multiperiod, out-of-sample, term structure of density forecasts, observable SV |
JEL: | C1 C50 C32 G1 |
Date: | 2008–08–06 |
URL: | http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-324&r=for |
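A toy simulation, with made-up parameters, of the paper's central idea: log(RV) driven by two autoregressive components of different persistence, with daily returns drawn using the implied variance, so that simulated cumulative returns trace out a density forecast at horizons up to 60 days. This is not the authors' estimated model.

    import numpy as np

    rng = np.random.default_rng(4)
    phi1, phi2 = 0.95, 0.5                 # persistent and transitory log(RV) components
    sig1, sig2 = 0.10, 0.30                # component innovation volatilities (made up)
    mu = np.log(1e-4)                      # unconditional log-variance level (made up)
    horizon, n_paths = 60, 10_000

    c1 = np.zeros(n_paths)                 # component states at the forecast origin
    c2 = np.zeros(n_paths)
    cum_ret = np.zeros(n_paths)
    for t in range(horizon):
        c1 = phi1 * c1 + rng.normal(0.0, sig1, n_paths)   # slow-moving component
        c2 = phi2 * c2 + rng.normal(0.0, sig2, n_paths)   # fast-moving component
        rv = np.exp(mu + c1 + c2)                         # realized variance level
        cum_ret += rng.normal(0.0, np.sqrt(rv))           # daily return with variance RV

    # simulated 60-day-ahead return density: report a few quantiles
    print("5% / 50% / 95%:", np.percentile(cum_ret, [5, 50, 95]))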