New Economics Papers on Forecasting
By: | Chris Bloor; Troy Matheson (Reserve Bank of New Zealand) |
Abstract: | We develop a large Bayesian VAR (BVAR) model of the New Zealand economy that incorporates the conditional forecasting techniques of Waggoner and Zha (1999). We examine real-time forecasting performance as the size of the model increases, using an unbalanced data panel. In a real-time out-of-sample forecasting exercise, we find that our BVAR methodology outperforms univariate and VAR benchmarks, and produces forecast accuracy comparable to that of the judgementally-adjusted forecasts produced internally at the Reserve Bank of New Zealand. We analyse forecast performance and find that, while there are trade-offs across different variables, a 35-variable BVAR generally performs better than 8-, 13-, or 50-variable specifications for our dataset. Finally, we demonstrate techniques for imposing judgement and for forming a semi-structural interpretation of the BVAR forecasts. |
JEL: | C11 C13 C53 |
Date: | 2009–04 |
URL: | http://d.repec.org/n?u=RePEc:nzb:nzbdps:2009/02&r=for |
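The conditional-forecasting idea of Waggoner and Zha (1999) used in the abstract above can be illustrated with a minimal numpy sketch: in a Gaussian VAR, stack the forecast path and condition on an assumed future path for one variable via standard multivariate-normal conditioning. The coefficient matrix A, covariance Sigma, initial point y0, and the conditioning path are all made-up assumptions, and the paper's full Bayesian simulation machinery is omitted.

```python
import numpy as np

# Illustrative VAR(1): y_t = A y_{t-1} + e_t,  e_t ~ N(0, Sigma).
# A, Sigma, y0, and the conditioning path are assumptions for this sketch.
A = np.array([[0.7, 0.1],
              [0.2, 0.5]])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])
y0 = np.array([1.0, 0.5])        # last observed data point
H, n = 4, 2                      # horizon and number of variables

# Unconditional mean path: E[y_{t+h}] = A^h y0.
mu = np.vstack([np.linalg.matrix_power(A, h) @ y0
                for h in range(1, H + 1)]).ravel()

# Covariance of the stacked path: shared shocks give
# Cov(y_{t+h}, y_{t+k}) = sum_s A^{h-s} Sigma (A^{k-s})',  s = 1..min(h,k).
V = np.zeros((n * H, n * H))
for h in range(1, H + 1):
    for k in range(1, H + 1):
        V[(h-1)*n:h*n, (k-1)*n:k*n] = sum(
            np.linalg.matrix_power(A, h - s) @ Sigma
            @ np.linalg.matrix_power(A, k - s).T
            for s in range(1, min(h, k) + 1))

# Condition on variable 0 following an assumed path over the horizon.
idx = [h * n for h in range(H)]                  # positions of variable 0
rest = [j for j in range(n * H) if j not in idx]
path = np.array([1.2, 1.1, 1.0, 0.9])

cond = mu[rest] + V[np.ix_(rest, idx)] @ np.linalg.solve(
    V[np.ix_(idx, idx)], path - mu[idx])
print("conditional forecast of the unconstrained variable:", cond)
```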
By: | H.O. Stekler (Department of Economics, George Washington University); Kazuta Sakamoto (Department of Economics, George Washington University) |
Abstract: | Forecasts for the current year that are made sometime during the current year are not true annual forecasts, because they include already-known information for the early part of the year. The current methodology for evaluating these "forecasts" does not take the known information into account. This paper presents a methodology for calculating an implicit forecast for the latter part of a year conditional on the known information. We then apply the procedure to Japanese forecasts for 1988-2003 and analyze some of the characteristics of those predictions. |
Length: | 24 pages |
Keywords: | Forecasting, Japanese forecasts, evaluation techniques |
JEL: | E37 |
Date: | 2008–07 |
URL: | http://d.repec.org/n?u=RePEc:gwc:wpaper:2008-005&r=for |
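The implicit-forecast calculation described in the abstract above can be sketched with simple arithmetic. Assuming the annual figure is the average of four quarterly levels (a simplification; the paper's procedure is more general), an "annual" forecast made after two quarters are known implies a forecast for the second half:

```python
# Back out the implicit second-half forecast from an annual forecast made
# mid-year. All numbers are hypothetical; the annual value is taken to be
# the average of the four quarterly levels.
prev_annual_avg = 100.0        # last year's annual average level
g_forecast = 0.02              # the forecaster's "annual" growth forecast
q1, q2 = 101.0, 101.5          # already-known first-half quarters

annual_target = prev_annual_avg * (1 + g_forecast)
implied_h2_avg = (4 * annual_target - (q1 + q2)) / 2      # average of Q3, Q4
implied_h2_growth = implied_h2_avg / ((q1 + q2) / 2) - 1  # vs. known H1
print(f"implicit H2 level: {implied_h2_avg:.2f}, "
      f"H2-over-H1 growth: {implied_h2_growth:.2%}")
```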
By: | Gonzalo Fernández-de-Córdoba (Universidad de Salamanca); José L. Torres (Universidad de Málaga) |
Abstract: | During the past ten years, Dynamic Stochastic General Equilibrium (DSGE) models have become an important tool in quantitative macroeconomics. However, DSGE models were not considered forecasting tools until very recently. The objective of this paper is twofold. First, we compare the forecasting ability of a canonical DSGE model for the Spanish economy with that of other standard econometric techniques. More precisely, we compare out-of-sample forecasts from different estimation methods of the DSGE model to the forecasts produced by a VAR and a Bayesian VAR. Second, we propose a new method for combining DSGE and VAR models (the augmented VAR-DSGE) that expands the variable space in which the VAR operates with artificial series obtained from a DSGE model. The results indicate that the out-of-sample forecasting performance of the proposed method outperforms all the alternatives considered. |
Keywords: | DSGE models, forecasting, VAR, BVAR |
JEL: | C53 E32 E37 |
Date: | 2009–05 |
URL: | http://d.repec.org/n?u=RePEc:mal:wpaper:2009-1&r=for |
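A minimal sketch of the augmentation idea from the abstract above, using statsmodels: append artificial series to the observed data and fit a VAR on the expanded variable space. Here the "DSGE" series is just a simulated AR(1) stand-in; in the paper it would come from the solved and estimated DSGE model.

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
T = 200
y = rng.standard_normal((T, 2))          # stand-in for the observed data
# Artificial series: in the paper these come from the solved DSGE model;
# here an AR(1) simulation is a placeholder assumption.
z = np.zeros(T)
for t in range(1, T):
    z[t] = 0.8 * z[t-1] + 0.1 * rng.standard_normal()

augmented = np.column_stack([y, z])      # expand the VAR's variable space
res = VAR(augmented).fit(maxlags=2)
fcst = res.forecast(augmented[-res.k_ar:], steps=4)
print(fcst[:, :2])                       # forecasts of the observed variables
```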
By: | Edward N. Gamber (Department of Economics and Business, Lafayette College); Tara M. Sinclair (Department of Economics, George Washington University); H.O. Stekler (Department of Economics, George Washington University); Elizabeth Reid (Department of Economics, George Washington University) |
Abstract: | This paper presents a new methodology to evaluate the impact of forecast errors on policy. We apply this methodology to the Federal Reserve forecasts of U.S. real output growth and the inflation rate using the Taylor (1993) monetary policy rule. Our results suggest it is possible to calculate policy forecast errors using joint predictions for a number of variables. These policy forecast errors have a direct interpretation for the impact of forecasts on policy. In the case of the Federal Reserve, we find that, on average, Fed policy based on the Taylor rule was approximately a full percentage point away from the intended target because of errors in forecasting growth and inflation. |
Keywords: | Forecast Evaluation, Federal Reserve Forecasts, Monetary Policy |
JEL: | C53 E37 E52 E58 |
Date: | 2008–04 |
URL: | http://d.repec.org/n?u=RePEc:gwc:wpaper:2008-002&r=for |
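A minimal sketch of the policy forecast error implied by the Taylor (1993) rule discussed above. The paper works with Federal Reserve forecasts of real output growth and inflation; this sketch uses the textbook output-gap form of the rule, and all numerical inputs are hypothetical.

```python
# Taylor (1993) rule: i = r* + pi + 0.5*(pi - pi*) + 0.5*gap,
# with r* = pi* = 2. All numerical inputs below are hypothetical.
def taylor_rate(inflation, output_gap, r_star=2.0, pi_star=2.0):
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

rate_from_forecasts = taylor_rate(inflation=2.5, output_gap=1.0)
rate_from_actuals = taylor_rate(inflation=3.2, output_gap=0.2)
policy_error = rate_from_forecasts - rate_from_actuals
print(f"policy forecast error: {policy_error:+.2f} percentage points")
```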
By: | Rangan Gupta (Department of Economics, University of Pretoria); Alain Kabundi (Department of Economics and Econometrics, University of Johannesburg); Stephen M. Miller (College of Business, University of Nevada, Las Vegas) |
Abstract: | We implement several Bayesian and classical models to forecast housing prices in 20 US states. In addition to standard vector autoregressive (VAR) and Bayesian vector autoregressive (BVAR) models, we also include the information content of 308 additional quarterly series in some models. Several approaches exist for incorporating information from a large number of series. We consider two: extracting common factors (principal components) in Factor-Augmented Vector Autoregressive (FAVAR) or Factor-Augmented Bayesian Vector Autoregressive (FABVAR) models, and Bayesian shrinkage in large-scale Bayesian Vector Autoregressive (LBVAR) models. In addition, we introduce spatial or causality priors to augment the forecasting models. Using 1976:Q1 to 1994:Q4 as the in-sample period and 1995:Q1 to 2003:Q4 as the out-of-sample horizon, we compare the forecast performance of the alternative models. Based on the average root mean squared error (RMSE) of the one-, two-, three-, and four-quarters-ahead forecasts, we find that one of the factor-augmented models generally outperforms the large-scale models in the 20 US states examined in this paper. |
Keywords: | Housing prices, Forecasting, Factor Augmented Models, Large-Scale BVAR models |
JEL: | C32 R31 |
Date: | 2009–05 |
URL: | http://d.repec.org/n?u=RePEc:pre:wpaper:200912&r=for |
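The factor-augmented approach above can be sketched in a few lines: extract principal components from a large standardized panel, then put the target series and the factors into a small VAR. The panel, the house-price series, and the choice of two factors are all illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
T, N = 120, 50                        # stand-in panel of quarterly indicators
panel = rng.standard_normal((T, N))
house_price = rng.standard_normal(T)  # stand-in for a state house-price series

# Principal components of the standardized panel (the common factors).
X = (panel - panel.mean(axis=0)) / panel.std(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
factors = X @ Vt[:2].T                # keep two factors (an assumption)

# FAVAR: the target variable plus the factors in a small VAR.
data = np.column_stack([house_price, factors])
res = VAR(data).fit(maxlags=1)
fcst = res.forecast(data[-res.k_ar:], steps=4)
print(fcst[:, 0])                     # one- to four-quarters-ahead forecasts
```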
By: | Gloria González-Rivera (Department of Economics, University of California Riverside); Tae-Hwy Lee (Department of Economics, University of California Riverside) |
Date: | 2007–09 |
URL: | http://d.repec.org/n?u=RePEc:ucr:wpaper:200803&r=for |
By: | Edward N. Gamber (Department of Economics and Business, Lafayette College); Julie K. Smith (Department of Economics and Business, Lafayette College) |
Abstract: | We examine the relative improvement in forecasting accuracy for inflation of the Federal Reserve (Greenbook forecasts) and private-sector forecasters (the Survey of Professional Forecasters and Blue Chip Economic Indicators). Previous research by Romer and Romer (2000) and Sims (2002) shows that the Fed is more accurate than the private sector at forecasting inflation. In a separate line of research, Atkeson and Ohanian (2001) and Stock and Watson (2007) document changes in the forecastability of inflation since the Great Moderation. These works suggest that the reduced inflation variability associated with the Great Moderation was mostly due to a decline in the variability of the predictable component of inflation. We hypothesize that this decline has evened the playing field between the Fed and the private sector and has therefore led to a narrowing, if not disappearance, of the Fed's relative forecasting advantage. We find that the Fed's forecast errors remain significantly smaller than the private sector's, but that the gap has narrowed considerably since the mid-1980s, especially after 1994. |
Keywords: | forecasting inflation, Survey of Professional Forecasters, Blue Chip forecasts, Greenbook forecasts, naïve forecasts |
JEL: | E37 |
Date: | 2007–11 |
URL: | http://d.repec.org/n?u=RePEc:gwc:wpaper:2007-002&r=for |
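A comparison of this kind reduces to RMSE ratios over subsamples. A small sketch with simulated errors (the paper uses actual Greenbook, SPF, and Blue Chip errors) shows the mechanics:

```python
import numpy as np
import pandas as pd

# Hypothetical forecast errors; the paper uses actual Greenbook and
# private-sector (SPF, Blue Chip) errors.
rng = np.random.default_rng(2)
idx = pd.period_range("1980Q1", "2000Q4", freq="Q")
fed_err = pd.Series(rng.normal(0, 0.8, len(idx)), index=idx)
private_err = pd.Series(rng.normal(0, 1.0, len(idx)), index=idx)

def rmse(e):
    return float(np.sqrt((e ** 2).mean()))

for label, sl in [("pre-1994", slice(None, "1993Q4")),
                  ("post-1994", slice("1994Q1", None))]:
    ratio = rmse(fed_err.loc[sl]) / rmse(private_err.loc[sl])
    print(f"{label}: Fed/private RMSE ratio = {ratio:.2f}")
```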
By: | Huiyu Huang (PanAgora Asset Management); Tae-Hwy Lee (Department of Economics, University of California Riverside) |
Abstract: | When the objective is to forecast a variable of interest and many explanatory variables are available, one can often improve the forecast by carefully integrating them. There are generally two directions one can take: combination of forecasts (CF) or combination of information (CI). CF combines forecasts generated from simple models, each incorporating a part of the whole information set, while CI brings the entire information set into one super model to generate an ultimate forecast. Through linear regression analysis and simulation, we show the relative merits of each, particularly the circumstances in which forecasts by CF can be superior to forecasts by CI, both when the CI model is correctly specified and when it is misspecified, and we shed some light on the success of equally weighted CF. In our empirical application on predicting the equity premium at monthly, quarterly, and annual horizons, we compare CF forecasts (with various weighting schemes) to CI forecasts (with a principal component approach mitigating the problem of parameter proliferation). We find that CF with (close to) equal weights is generally the best and dominates all CI schemes, while also performing substantially better than the historical mean. |
Keywords: | Equally weighted combination of forecasts, Equity premium, Factor models, Forecast combination, Forecast combination puzzle, Information sets, Many predictors, Principal components, Shrinkage |
JEL: | C3 C5 G0 |
Date: | 2006–03 |
URL: | http://d.repec.org/n?u=RePEc:ucr:wpaper:200806&r=for |
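The CF/CI distinction above is easy to make concrete. In the sketch below, CF fits one bivariate regression per predictor and averages the resulting forecasts with equal weights, while CI fits a single regression on all predictors; the data-generating process is a made-up assumption.

```python
import numpy as np

rng = np.random.default_rng(3)
T, N = 200, 20                       # sample size and number of predictors
X = rng.standard_normal((T, N))
beta = rng.normal(0, 0.2, N)
y = X @ beta + rng.standard_normal(T)
X_in, y_in, x_new = X[:-1], y[:-1], X[-1]

# CF: one small model per predictor, then equally weight the forecasts.
fcsts = []
for j in range(N):
    Z = np.column_stack([np.ones(T - 1), X_in[:, j]])
    b = np.linalg.lstsq(Z, y_in, rcond=None)[0]
    fcsts.append(b[0] + b[1] * x_new[j])
cf = np.mean(fcsts)

# CI: all predictors in one "super model".
Z = np.column_stack([np.ones(T - 1), X_in])
b = np.linalg.lstsq(Z, y_in, rcond=None)[0]
ci = b[0] + x_new @ b[1:]
print(f"CF forecast: {cf:.3f}, CI forecast: {ci:.3f}, actual: {y[-1]:.3f}")
```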
By: | Tara M. Sinclair (Department of Economics, The George Washington University); Fred Joutz (Department of Economics, The George Washington University); Herman O. Stekler (Department of Economics, The George Washington University) |
Abstract: | This paper reconciles contradictory findings obtained from forecast evaluations: the existence of systematic errors and the failure to reject rationality in the presence of such errors. Systematic errors in one economic state may offset the opposite types of errors in the other state such that the null of rationality is not rejected. A modified test applied to the Fed forecasts shows that the forecasts were ex post biased. |
Keywords: | Greenbook Forecasts, forecast evaluation, systematic errors |
JEL: | C53 E37 E52 E58 |
Date: | 2008–08 |
URL: | http://d.repec.org/n?u=RePEc:gwc:wpaper:2008-010&r=for |
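The offsetting-errors point above can be illustrated directly: errors with opposite biases in two economic states can look unbiased when pooled. The sketch below uses simulated errors and plain t-tests rather than the paper's modified test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Hypothetical errors: positive bias in expansions, negative in contractions.
exp_err = rng.normal(+0.5, 1.0, 80)
con_err = rng.normal(-0.5, 1.0, 80)
pooled = np.concatenate([exp_err, con_err])

# The pooled test typically fails to reject a zero mean error even though
# each state's errors are biased.
print("pooled:       p = %.3f" % stats.ttest_1samp(pooled, 0).pvalue)
print("expansions:   p = %.3f" % stats.ttest_1samp(exp_err, 0).pvalue)
print("contractions: p = %.3f" % stats.ttest_1samp(con_err, 0).pvalue)
```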
By: | Herman O. Stekler (Department of Economics, The George Washington University) |
Abstract: | Fildes and Stekler's (2002) survey of the state of knowledge about the quality of economic forecasts focused primarily on US and UK data. This paper draws on some of their findings, but it does not examine any additional US forecasts. The purpose is to determine whether their results are robust by examining predictions made for other countries. The focus is on (1) directional errors, (2) the magnitude of the errors made in estimating growth and inflation, (3) whether there were biases and systematic errors, (4) the sources of the errors, and (5) whether there has been an improvement in forecasting ability. |
Keywords: | G7 forecasts, evaluation techniques |
JEL: | E37 |
Date: | 2008–08 |
URL: | http://d.repec.org/n?u=RePEc:gwc:wpaper:2008-9&r=for |
By: | Julien Chevallier; Benoît Sévi |
Abstract: | The recent implementation of the EU Emissions Trading Scheme (EU ETS) in January 2005 created new financial risks for emitting firms. To deal with these risks, options have been traded since October 2006. Because the EU ETS is a new market, the relevant underlying model for option pricing remains a controversial issue. This article improves our understanding of this issue by characterizing the conditional and unconditional distributions of the realized volatility of the 2008 futures contract on the European Climate Exchange (ECX), which is valid during Phase II (2008-2012) of the EU ETS. Realized volatility measures from naive, kernel-based and subsampling estimators are used to obtain inferences about the distributional and dynamic properties of ECX emissions futures volatility. The distribution of the daily realized volatility in logarithmic form is shown to be close to normal. The mixture-of-distributions hypothesis is strongly rejected, as returns standardized using daily measures of volatility clearly depart from normality. A simplified HAR-RV model (Corsi, 2009) with only a weekly component, which reproduces the long memory properties of the series, is then used to model the volatility dynamics. Finally, the predictive accuracy of the HAR-RV model is tested against GARCH specifications using one-step-ahead forecasts, confirming the superior forecasting ability of the HAR-RV model. Our conclusions indicate that (i) the standard Brownian motion is not an adequate tool for option pricing in the EU ETS, and (ii) a jump component should be included in the stochastic process to price options, thus providing more efficient tools for risk-management activities. |
Keywords: | CO2 price, realized volatility, HAR-RV, GARCH, futures trading, emissions markets, EU ETS, intraday data, forecasting |
JEL: | C5 G1 Q4 |
Date: | 2009 |
URL: | http://d.repec.org/n?u=RePEc:mop:credwp:09.05.84&r=for |
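The simplified HAR-RV specification with only a weekly component amounts to regressing next-day (log) realized volatility on the average of the previous five daily observations. A sketch with a simulated volatility series standing in for the ECX data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
T = 500
log_rv = np.zeros(T)               # stand-in daily log realized volatility
for t in range(1, T):
    log_rv[t] = 0.6 * log_rv[t-1] + 0.3 * rng.standard_normal()

# Weekly component: average of the previous five daily observations.
rv_week = np.array([log_rv[t-5:t].mean() for t in range(5, T)])
y = log_rv[5:]
res = sm.OLS(y, sm.add_constant(rv_week)).fit()

# One-step-ahead forecast from the latest weekly average.
fcst = res.params[0] + res.params[1] * log_rv[-5:].mean()
print(f"one-step-ahead log RV forecast: {fcst:.3f}")
```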
By: | Sandra Eickmeier; Tim Ng (Reserve Bank of New Zealand) |
Abstract: | This paper examines the relationship between wages and consumer prices in New Zealand over the last 15 years. Reflecting the open nature of the New Zealand economy, the headline CPI is disaggregated into non-tradable and tradable prices. We find joint causality between wages and disaggregated inflation. An increase in wage inflation forecasts an increase in non-tradable inflation. However, it is tradable inflation that drives wage inflation. While exogenous shocks to wages do not help to forecast inflation, the leading relationship from wages to non-tradable inflation implies that monitoring wages may prove useful for projecting the impact of other shocks on future inflation. |
JEL: | C53 F47 C33 |
Date: | 2009–05 |
URL: | http://d.repec.org/n?u=RePEc:nzb:nzbdps:2009/04&r=for |
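The leading relationship described above is the kind of pattern a Granger-causality test picks up. A sketch with simulated series in which wages lead non-tradable inflation by construction (all dynamics here are assumptions, not the paper's estimates):

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(6)
T = 160
wage_inf = np.zeros(T)
nontrad_inf = np.zeros(T)
for t in range(1, T):
    wage_inf[t] = 0.5 * wage_inf[t-1] + 0.2 * rng.standard_normal()
    # Wages lead non-tradable inflation by construction in this simulation.
    nontrad_inf[t] = (0.4 * nontrad_inf[t-1] + 0.3 * wage_inf[t-1]
                      + 0.2 * rng.standard_normal())

# Test whether the second column (wages) helps forecast the first
# (non-tradable inflation); prints F-tests for each lag order.
data = np.column_stack([nontrad_inf, wage_inf])
grangercausalitytests(data, maxlag=4)
```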
By: | Gustavo Sánchez Bizot (StataCorp) |
Abstract: | I discuss two applications of the vec commands in this presentation. First, I use the cointegrating VAR approach discussed in Garratt et al. (2006, Global and National Macroeconometric Modelling: A Long-run Structural Approach) to fit a vector error-correction model. In contrast with the traditional Johansen statistical restrictions for identifying the coefficients of the cointegrating vectors, I use Stata to show an alternative specification of those restrictions based on the approach of Garratt et al. Second, I apply probability forecasting to simulate probability distributions for the forecast periods. This approach produces probabilities for future single and joint events, instead of only point forecasts and confidence intervals. For example, we could estimate the joint probability of two-digit inflation combined with a decrease in GDP. |
Date: | 2009–06–05 |
URL: | http://d.repec.org/n?u=RePEc:boc:msug09:07&r=for |
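Probability forecasting of this kind reduces to counting events across simulated forecast paths. A Python sketch of the idea (the presentation itself uses Stata's vec machinery; the forecast distribution below is a made-up stand-in):

```python
import numpy as np

rng = np.random.default_rng(7)
# Stand-in for simulated forecast paths from a fitted model: joint draws
# of (inflation, GDP growth) for a future period; all numbers hypothetical.
draws = rng.multivariate_normal(mean=[6.0, 1.0],
                                cov=[[9.0, -2.0], [-2.0, 4.0]],
                                size=10_000)
infl, growth = draws[:, 0], draws[:, 1]

print("P(inflation >= 10%%):      %.3f" % (infl >= 10).mean())
print("P(GDP growth < 0):        %.3f" % (growth < 0).mean())
print("P(both, the joint event): %.3f" % ((infl >= 10) & (growth < 0)).mean())
```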
By: | Camilo Serrano (University of Geneva); Martin Hoesli (University of Geneva (HEC and SFI), University of Aberdeen, Bordeaux Ecole de Management) |
Abstract: | Securitized real estate returns have traditionally been forecasted using economic variables. However, no consensus exists regarding the variables to use. Financial and real estate factors have recently emerged as an alternative set of variables useful in forecasting securitized real estate returns. This paper examines whether the predictive ability of the two sets of variables differs. We use fractional cointegration analysis to identify whether long-run nonlinear relations exist between securitized real estate and each of the two sets of forecasting variables. That is, we examine whether such relationships are characterized by long memory, short memory, mean reversion (no long-run effects) or no mean reversion (no long-run equilibrium). Empirical analyses are conducted using data for the U.S., the U.K., and Australia. The results show that financial and real estate factors generally outperform economic variables in forecasting securitized real estate returns. Long memory (long-range dependence) is generally found between securitized real estate returns and stocks, bonds, and direct real estate returns, while only short memory is found between securitized real estate returns and the economic variables. Such results imply that to forecast securitized real estate returns, it may not be necessary to identify the economic variables that are related to changing economic trends and business conditions. |
Keywords: | Fractional Cointegration, Fractionally Integrated Error Correction Model (FIECM), Forecasting, Multifactor Models, Securitized Real Estate, REITs |
JEL: | G11 C53 |
Date: | 2009–03 |
URL: | http://d.repec.org/n?u=RePEc:chf:rpseri:rp0908&r=for |
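Long memory is typically summarized by the fractional integration order d (long memory for 0 < d < 0.5, short memory for d = 0). A minimal sketch of the classic log-periodogram (GPH) estimator, one standard way to estimate d; the bandwidth choice is an assumption:

```python
import numpy as np

def gph_d(x, m=None):
    """Log-periodogram (GPH) estimate of the long-memory parameter d."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    m = m or int(T ** 0.5)                    # common bandwidth choice
    lam = 2 * np.pi * np.arange(1, m + 1) / T
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    log_per = np.log(np.abs(dft) ** 2 / (2 * np.pi * T))
    # log I(lam) ~ c - d * log(4 sin^2(lam/2)); the slope below is d.
    return np.polyfit(-np.log(4 * np.sin(lam / 2) ** 2), log_per, 1)[0]

rng = np.random.default_rng(8)
print("d for white noise (should be near 0): %.2f"
      % gph_d(rng.standard_normal(1000)))
```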
By: | Dongming Zhu; John Galbraith |
Abstract: | Financial returns typically display heavy tails and some skewness, and conditional variance models with these features often outperform more limited models. The difference in performance may be especially important in estimating quantities that depend on tail features, including risk measures such as the expected shortfall. Here, using a recent generalization of the asymmetric Student-t distribution to allow separate parameters to control skewness and the thickness of each tail, we fit daily financial returns and forecast expected shortfall for the S&P 500 index and a number of individual company stocks; the generalized distribution is used for the standardized innovations in a nonlinear, asymmetric GARCH-type model. The results provide empirical evidence for the usefulness of the generalized distribution in improving prediction of downside market risk of financial assets. |
Keywords: | asymmetric distribution, expected shortfall, NGARCH model |
JEL: | C16 G10 |
Date: | 2009–05–01 |
URL: | http://d.repec.org/n?u=RePEc:cir:cirwor:2009s-24&r=for |
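Given a fitted distribution for the standardized innovations, expected shortfall can be computed by simulation. The generalized asymmetric Student-t of the paper is not in standard libraries, so the sketch below substitutes a symmetric Student-t as a stand-in:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
alpha = 0.05                 # tail probability for the risk measure
nu = 5.0                     # degrees of freedom (an assumption)

# Simulate standardized innovations; a symmetric Student-t stands in for
# the paper's generalized asymmetric Student-t distribution.
sims = stats.t.rvs(df=nu, size=100_000, random_state=rng)
sims /= np.sqrt(nu / (nu - 2))          # rescale to unit variance

var = np.quantile(sims, alpha)          # left-tail value-at-risk
es = sims[sims <= var].mean()           # expected shortfall beyond VaR
print(f"VaR({alpha:.0%}) = {var:.3f}, ES({alpha:.0%}) = {es:.3f}")
```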
By: | Herman O. Stekler (Department of Economics, The George Washington University) |
Abstract: | The Census Bureau makes periodic long-term forecasts of both the total US population and the population of each of the states. Previous evaluations of these forecasts were based on the magnitude of the discrepancies between the projected and actual population figures. However, it might be inappropriate to evaluate these long-term projections with the specific quantitative statistics that have been useful in judging short-term forecasts. One of the purposes of a long-range projection of each state's population is to provide a picture of the distribution of the aggregate US population among the various states. Thus the evaluation should compare the projected distribution of the total US population by state to the actual distribution. This paper uses the dissimilarity index to evaluate the accuracy of the Census projected percentage distributions of population by state. |
Date: | 2008–07 |
URL: | http://d.repec.org/n?u=RePEc:gwc:wpaper:2008-7&r=for |
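The dissimilarity index used above is half the sum of absolute differences between two share distributions; a value of 0.05 means five percent of the population would have to be reallocated for the projected distribution to match the actual one. A minimal sketch:

```python
import numpy as np

def dissimilarity_index(projected, actual):
    """Half the sum of absolute differences between two share distributions."""
    p = np.asarray(projected, dtype=float)
    q = np.asarray(actual, dtype=float)
    return 0.5 * np.abs(p / p.sum() - q / q.sum()).sum()

# Hypothetical population shares across three "states".
print(dissimilarity_index([0.30, 0.50, 0.20], [0.25, 0.55, 0.20]))  # 0.05
```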
By: | Askitas, Nikos (IZA); Zimmermann, Klaus F. (IZA, DIW Berlin and Bonn University) |
Abstract: | The current economic crisis requires fast information to predict economic behavior early, which is difficult at times of structural change. This paper suggests an innovative method of using data on internet activity for that purpose. It demonstrates strong correlations between keyword searches and unemployment rates using monthly German data, exhibiting strong potential for the method. |
Keywords: | time-series analysis, internet, Google, keyword search, search engine, unemployment, predictions |
JEL: | C22 C82 E17 E24 E37 |
Date: | 2009–06 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp4201&r=for |
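The core of the method above is correlating a keyword-search index with the unemployment rate, possibly at a lead. A sketch with simulated monthly series (stand-ins for Google search data and German unemployment):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(10)
idx = pd.period_range("2004-01", "2009-04", freq="M")
# Stand-ins for a keyword-search index and the unemployment rate.
search = pd.Series(rng.standard_normal(len(idx)).cumsum(), index=idx)
unemp = 0.6 * search.shift(1) + 0.5 * rng.standard_normal(len(idx))

print("same month correlation:   %.2f" % search.corr(unemp))
print("search leading one month: %.2f" % search.shift(1).corr(unemp))
```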
By: | ChiUng Song (Science and Technology Policy Institute); Bryan L. Boulier (Department of Economics, The George Washington University); Herman O. Stekler (Department of Economics, The George Washington University) |
Abstract: | Previous research on defining and measuring consensus (agreement) among forecasters has been concerned with the evaluation of forecasts of continuous variables. This previous work is not relevant when the forecasts involve binary decisions: up-down or win-lose. In this paper we use Cohen's kappa coefficient, a measure of inter-rater agreement involving binary choices, to evaluate forecasts of National Football League games. This statistic is applied to the forecasts of 74 experts and 31 statistical systems that predicted the outcomes of games during two NFL seasons. We conclude that the forecasters, particularly the systems, displayed significant levels of agreement and that levels of agreement in picking game winners were higher than in picking against the betting line. There is greater agreement among statistical systems in picking game winners or picking winners against the line as the season progresses, but no change in levels of agreement among experts. High levels of consensus among forecasters are associated with greater accuracy in picking game winners, but not in picking against the line. |
Keywords: | binary forecasts, NFL, agreement, consensus, kappa coefficient |
Date: | 2008–07 |
URL: | http://d.repec.org/n?u=RePEc:gwc:wpaper:2008-6&r=for |
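Cohen's kappa measures agreement beyond chance: kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e the agreement expected if picks were independent. A minimal sketch for two forecasters' binary picks:

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters: (p_o - p_e) / (1 - p_e)."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    p_o = (r1 == r2).mean()                    # observed agreement
    p_e = sum((r1 == c).mean() * (r2 == c).mean()
              for c in np.unique(np.concatenate([r1, r2])))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical picks by two forecasters over ten games (1 = home team wins).
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
print(f"kappa = {cohens_kappa(a, b):.2f}")     # about 0.52 here
```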