on Forecasting
By: | Ralph D. Snyder; Adrian Beaumont |
Abstract: | This paper focuses on non-stationary time series formed from small non-negative integer values which may contain many zeros and may be over-dispersed. It describes a study comparing various suitable adaptations of the simple exponential smoothing method of forecasting on a database of demand series for slow-moving car parts. The methods considered include simple exponential smoothing with Poisson measurements, a finite-sample version of simple exponential smoothing with negative binomial measurements, and the Croston method of forecasting. In the case of the Croston method, a maximum likelihood approach to estimating key quantities, such as the smoothing parameter, is proposed for the first time. The results from the study indicate that the Croston method does not forecast, on average, as well as the other two methods. It is also confirmed that a common fixed smoothing constant across all the car parts works better than maximum likelihood approaches. |
Keywords: | Count time series; forecasting; exponential smoothing; Poisson distribution; negative binomial distribution; Croston method. |
Date: | 2007–12 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2007-15&r=for |
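For readers unfamiliar with the methods being compared, the sketch below shows the basic recursions of simple exponential smoothing and of Croston's method on an intermittent demand series. It is a minimal illustration only: the function names and the fixed smoothing parameter are ours, and it reproduces neither the Poisson/negative binomial measurement models nor the maximum likelihood estimation studied in the paper.

```python
import numpy as np

def simple_exponential_smoothing(y, alpha, level0):
    """One-step-ahead forecasts from simple exponential smoothing."""
    level = level0
    forecasts = []
    for obs in y:
        forecasts.append(level)                 # forecast made before seeing obs
        level = level + alpha * (obs - level)   # update the smoothed level
    return np.array(forecasts)

def croston(y, alpha, size0=1.0, interval0=1.0):
    """Croston's method: smooth non-zero demand sizes and the intervals
    between demands separately; the demand-rate forecast is their ratio."""
    size, interval = size0, interval0
    periods_since_demand = 1
    forecasts = []
    for obs in y:
        forecasts.append(size / interval)
        if obs > 0:
            size = size + alpha * (obs - size)
            interval = interval + alpha * (periods_since_demand - interval)
            periods_since_demand = 1
        else:
            periods_since_demand += 1
    return np.array(forecasts)

demand = np.array([0, 0, 2, 0, 1, 0, 0, 3, 0, 1])
print(simple_exponential_smoothing(demand, alpha=0.1, level0=demand.mean()))
print(croston(demand, alpha=0.1))
```

The contrast drawn in the study is essentially between updating a single smoothed level of the counts (with Poisson or negative binomial measurement distributions) and Croston's separate smoothing of demand sizes and inter-demand intervals.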
By: | Turgut Kisinbay |
Abstract: | The paper proposes an algorithm that uses forecast encompassing tests for combining forecasts. The algorithm excludes a forecast from the combination if it is encompassed by another forecast. To assess the usefulness of this approach, an extensive empirical analysis is undertaken using a U.S. macroeconomic data set. The results are encouraging, as the algorithm's forecasts outperform benchmark model forecasts, in a mean square error (MSE) sense, in a majority of cases. |
Keywords: | Forecasting models, Economic forecasting |
Date: | 2007–11–21 |
URL: | http://d.repec.org/n?u=RePEc:imf:imfwpa:07/264&r=for |
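A rough sketch of how an encompassing-based exclusion rule could be implemented appears below. The particular encompassing test (regressing one model's forecast errors on the difference between the competing forecasts) and the equal-weight averaging of the surviving forecasts are our simplifying assumptions; the paper's algorithm may differ in both respects.

```python
import numpy as np
from scipy import stats

def encompasses(errors_i, f_i, f_j, alpha=0.05):
    """Test whether forecast i encompasses forecast j: regress i's errors on
    (f_j - f_i); an insignificant slope suggests j adds no information."""
    slope, _, _, p_value, _ = stats.linregress(f_j - f_i, errors_i)
    return p_value > alpha

def combine_by_encompassing(y, forecasts, alpha=0.05):
    """Drop forecasts encompassed by another model; average the rest."""
    errors = y[:, None] - forecasts
    # let models with lower in-sample MSE try to encompass the others first
    order = list(np.argsort((errors ** 2).mean(axis=0)))
    kept = list(order)
    for i in order:
        for j in order:
            if i != j and i in kept and j in kept:
                if encompasses(errors[:, i], forecasts[:, i], forecasts[:, j], alpha):
                    kept.remove(j)
    return forecasts[:, kept].mean(axis=1), kept

rng = np.random.default_rng(0)
y = rng.normal(size=200)
forecasts = np.column_stack([y + rng.normal(scale=s, size=200) for s in (0.5, 0.6, 2.0)])
combined, kept = combine_by_encompassing(y, forecasts)
print("models retained in the combination:", kept)
```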
By: | Carlo Altavilla (University of Naples “Parthenope”, Via Medina, 40 - 80133 Naples, Italy.); Matteo Ciccarelli (Corresponding author: European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.) |
Abstract: | This paper explores the role of model and vintage combination in forecasting, with a novel approach that exploits the information contained in the revision history of a given variable. We analyse the forecast performance of eleven widely used models to predict inflation and GDP growth, along the three dimensions of accuracy, uncertainty and stability, using the real-time data set for macroeconomists developed at the Federal Reserve Bank of Philadelphia. Instead of following the common practice of investigating only the relationship between first available and fully revised data, we analyse the entire revision history for each variable and extract a signal from the entire distribution of vintages of a given variable to improve forecast accuracy and precision. The novelty of our study lies in the interpretation of the vintages of a real-time database as related realizations or units of a panel data set. The results suggest that imposing appropriate weights on competing forecasting models of inflation and output growth, reflecting the relative ability of each model over different sub-sample periods, substantially increases forecast performance. More interestingly, our results indicate that augmenting the information set with a signal extracted from all available vintages of a time series consistently leads to a substantial improvement in forecast accuracy, precision and stability. JEL Classification: C32, C33, C53. |
Keywords: | Real-time data, forecast combination, data and model uncertainty. |
Date: | 2007–12 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20070846&r=for |
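As a point of reference for the weighting idea, the snippet below computes combination weights proportional to the inverse of each model's recent mean squared forecast error, a common performance-based scheme. It is only a stand-in: the weights in the paper reflect relative ability over sub-sample periods and exploit the full panel of data vintages, which this toy example does not.

```python
import numpy as np

def inverse_mse_weights(errors, window=20):
    """Weights proportional to the inverse of each model's MSE over the
    most recent `window` forecast errors (rows: time, columns: models)."""
    inv_mse = 1.0 / (errors[-window:] ** 2).mean(axis=0)
    return inv_mse / inv_mse.sum()

rng = np.random.default_rng(1)
y = rng.normal(size=120)
forecasts = np.column_stack([y + rng.normal(scale=s, size=120) for s in (0.4, 0.8, 1.5)])
weights = inverse_mse_weights(y[:, None] - forecasts)
combined_latest = forecasts[-1] @ weights   # weighted combination of the latest forecasts
print(weights, combined_latest)
```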
By: | Abdou Kâ Diongue (Université Gaston Berger de Saint-Louis); Dominique Guégan (Centre d'Economie de la Sorbonne); Bertrand Vignal (EDF) |
Abstract: | In this article, we investigate conditional mean and variance forecasts using a dynamic model following a k-factor GIGARCH process. We are particularly interested in calculating the conditional variance of the prediction error. We apply this method to electricity prices and evaluate spot price forecasts up to one month ahead. We conclude that the k-factor GIGARCH process is a suitable tool to forecast spot prices, using the classical RMSE criterion. |
Keywords: | Conditional mean, conditional variance, forecast, electricity prices, GIGARCH process. |
JEL: | C53 |
Date: | 2007–11 |
URL: | http://d.repec.org/n?u=RePEc:mse:cesdoc:b07058&r=for |
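The forecast comparison relies on the RMSE criterion; for completeness, its standard definition over an evaluation window of H forecasts is (in our notation, not the paper's):

```latex
\mathrm{RMSE} = \sqrt{\frac{1}{H}\sum_{h=1}^{H}\bigl(\hat{P}_{t+h} - P_{t+h}\bigr)^{2}}
```

where $P_{t+h}$ is the realized spot price $h$ periods ahead and $\hat{P}_{t+h}$ the corresponding forecast.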
By: | D'Agostino, Antonello; Giannone, Domenico; Surico, Paolo |
Abstract: | The ability of popular statistical methods, the Federal Reserve Greenbook and the Survey of Professional Forecasters to improve upon the forecasts of inflation and real activity from naive models has declined significantly during the most recent period of greater macroeconomic stability. The decline in the predictability of inflation is associated with a breakdown in the predictive power of real activity, especially in the housing sector. The decline in the predictability of real activity is associated with a breakdown in the predictive power of the term spread. |
Keywords: | Fed Greenbook; forecasting models; predictability; Survey of Professional Forecasters
JEL: | C22 C53 E37 E47 |
Date: | 2007–12 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:6594&r=for |
By: | Berger, Helge (Department of Economics, Free University Berlin); Österholm, Pär (Department of Economics) |
Abstract: | We use a mean-adjusted Bayesian VAR model as an out-of-sample forecasting tool to test whether money growth Granger-causes inflation in the euro area. Based on data from 1970 to 2006 and forecasting horizons of up to 12 quarters, there is surprisingly strong evidence that including money improves forecasting accuracy. The results are very robust with regard to alternative treatments of priors and sample periods. That said, there is also reason not to overemphasize the role of money. The predictive power of money growth for inflation is substantially lower in more recent sample periods compared to the 1970s and 1980s. This cautions against using money-based inflation models anchored in very long samples for policy advice. |
Keywords: | Granger Causality; Monetary Aggregates; Monetary Policy; European Central Bank |
JEL: | E47 E52 E58 |
Date: | 2007–12–17 |
URL: | http://d.repec.org/n?u=RePEc:hhs:uunewp:2007_030&r=for |
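The out-of-sample notion of Granger causality used here amounts to asking whether adding money growth lowers recursive forecast errors for inflation. The sketch below illustrates that comparison with simple OLS autoregressions on simulated data; it is not the paper's mean-adjusted Bayesian VAR, and all variable names and lag choices are illustrative.

```python
import numpy as np

def recursive_forecast_errors(infl, money, lags=4, start=40, use_money=True):
    """One-step-ahead recursive forecast errors for inflation from an AR
    model, optionally augmented with lagged money growth (OLS stand-in
    for a mean-adjusted Bayesian VAR)."""
    errors = []
    for t in range(start, len(infl)):
        rows, targets = [], []
        for s in range(lags, t):
            row = list(infl[s - lags:s]) + (list(money[s - lags:s]) if use_money else [])
            rows.append([1.0] + row)
            targets.append(infl[s])
        beta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
        x_new = [1.0] + list(infl[t - lags:t]) + (list(money[t - lags:t]) if use_money else [])
        errors.append(infl[t] - np.array(x_new) @ beta)
    return np.array(errors)

rng = np.random.default_rng(2)
money = rng.normal(size=160).cumsum() * 0.01          # simulated money growth
infl = 0.3 * np.roll(money, 1) + rng.normal(scale=0.5, size=160)
mse_with = (recursive_forecast_errors(infl, money, use_money=True) ** 2).mean()
mse_without = (recursive_forecast_errors(infl, money, use_money=False) ** 2).mean()
print("out-of-sample MSE with money:", mse_with, "without money:", mse_without)
```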
By: | Caio Almeida; Romeu Gomes; André Leite; José Vicente |
Abstract: | In this paper, we analyze the importance of curvature term structure movements for forecasts of interest rate means. An extension of the exponential three-factor Diebold and Li (2006) model is proposed, in which a fourth factor captures a second type of curvature. The new factor increases the model's ability to generate more volatile and non-linear yield curves, leading to a significant improvement in forecasting ability, especially for short-term maturities. A forecasting experiment adopting Brazilian term structure data on Interbank Deposits (IDs) generates statistically significantly lower bias and Root Mean Square Errors (RMSE) for the double-curvature model, for most examined maturities, under three different forecasting horizons. Consistent with recent empirical analyses of bond risk premia, the second curvature, despite explaining only a small portion of interest rate variability, changes the structure of the model's risk premium, leading to better predictions of bond excess returns. |
Date: | 2007–12 |
URL: | http://d.repec.org/n?u=RePEc:bcb:wpaper:155&r=for |
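For orientation, the three-factor Diebold and Li (2006) model writes the yield at maturity $\tau$ as a combination of level, slope and curvature factors; one natural way to add the second curvature factor described in the abstract is to repeat the curvature loading with a different decay parameter. The exact specification in the paper may differ; the form below is only a sketch of the idea:

```latex
y_t(\tau) = \beta_{1t}
          + \beta_{2t}\,\frac{1 - e^{-\lambda_1 \tau}}{\lambda_1 \tau}
          + \beta_{3t}\left(\frac{1 - e^{-\lambda_1 \tau}}{\lambda_1 \tau} - e^{-\lambda_1 \tau}\right)
          + \beta_{4t}\left(\frac{1 - e^{-\lambda_2 \tau}}{\lambda_2 \tau} - e^{-\lambda_2 \tau}\right)
```

with $\lambda_1 \neq \lambda_2$, so that the two curvature loadings peak at different maturities.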
By: | Tobias Knedlik; Rolf Scheufele |
Abstract: | In this paper we test the ability of three of the most popular methods to forecast the South African currency crisis of June 2006. In particular, we are interested in the out-of-sample performance of these methods. Thus, we choose the latest crisis to conduct an out-of-sample experiment. In sum, the signals approach was not able to forecast the out-of-sample crisis correctly; the probit approach was able to predict the crisis, but only with models that were based on raw data. Employing a Markov-regime-switching approach also allows the out-of-sample crisis to be predicted. The answer to the question of which method performed best in forecasting the June 2006 currency crisis is: the Markov-switching approach, since it called most of the pre-crisis periods correctly. However, this “victory” is not clear-cut. In-sample, the probit models perform remarkably well, and they are also able to detect, at least to some extent, out-of-sample currency crises before their occurrence. It can therefore not be recommended to focus on one approach only when evaluating the risk of currency crises. |
Date: | 2007–12 |
URL: | http://d.repec.org/n?u=RePEc:iwh:dispap:17-07&r=for |
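Of the three early-warning methods compared, the probit approach is the most compact to illustrate. The sketch below fits a probit crisis model on simulated data with statsmodels; the predictors, the crisis indicator and the warning threshold are all invented for illustration and are not the paper's specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200
# illustrative predictors: reserve growth and exchange-market pressure
reserves_growth = rng.normal(size=n)
pressure = rng.normal(size=n)
latent = -1.5 - 0.8 * reserves_growth + 0.6 * pressure + rng.normal(size=n)
crisis = (latent > 0).astype(int)           # 1 in (pre-)crisis periods, 0 otherwise

X = sm.add_constant(np.column_stack([reserves_growth, pressure]))
probit = sm.Probit(crisis, X).fit(disp=0)
crisis_prob = probit.predict(X)             # fitted crisis probabilities
warning = crisis_prob > 0.25                # flag a warning above a chosen cut-off
print(probit.params)
print("share of periods flagged:", warning.mean())
```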
By: | Michael Woodford |
Abstract: | Forecast targeting is an innovation in central banking that represents an important step toward more rule-based policymaking, even if it is not an attempt to follow a policy rule of any of the types that have received primary attention in the theoretical literature on optimal monetary policy. This paper discusses the extent to which forecast targeting can be considered an example of a policy rule, and the conditions under which it would represent a desirable rule, with a view to suggesting improvements in the approaches currently used by forecast-targeting central banks. Particular attention is given to the intertemporal consistency of forecast-targeting procedures, the assumptions about future policy that should be used in constructing the forecasts used in such procedures, the horizon with which the target criterion should be concerned, the relevance of forecasts other than the inflation forecast, and the degree of robustness of a desirable target criterion for monetary policy to changing circumstances. |
JEL: | E52 E58 |
Date: | 2007–12 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:13716&r=for |
By: | Juan Angel García (Capital Markets and Financial Structure Division, European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.); Andrés Manzanares (Risk Management Division, European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.) |
Abstract: | Using data from the ECB's Survey of Professional Forecasters, we investigate the reporting practices of survey participants by comparing their point predictions and the mean/median/mode of their probability forecasts. We find that the individual point predictions, on average, tend to be biased towards favourable outcomes: they suggest growth rates that are too high and inflation rates that are too low. Most importantly, for each survey round, the aggregate survey results based on the average of the individual point predictions are also biased. These findings cast doubt on combined survey measures that average individual point predictions. Survey results based on probability forecasts are more reliable. JEL Classification: C42, E31, E47. |
Keywords: | Point estimates, subjective probability distributions, Survey of Professional Forecasters (SPF), survey methods. |
Date: | 2007–12 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20070836&r=for |
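The comparison in the paper is between each respondent's point prediction and the mean, median and mode implied by the same respondent's histogram-type probability forecast. The snippet below shows how those implied statistics can be computed from an SPF-style histogram; the bins, probabilities and point prediction are made up for illustration.

```python
import numpy as np

# illustrative histogram forecast: probability mass over inflation bins (in %)
bin_edges = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
probs = np.array([0.05, 0.10, 0.30, 0.35, 0.15, 0.05])
midpoints = (bin_edges[:-1] + bin_edges[1:]) / 2

hist_mean = probs @ midpoints
hist_mode = midpoints[probs.argmax()]
hist_median = midpoints[np.searchsorted(probs.cumsum(), 0.5)]   # bin containing the median

point_prediction = 1.5    # the same respondent's reported point forecast
print(hist_mean, hist_median, hist_mode)
print("point prediction minus histogram mean:", point_prediction - hist_mean)
```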