Economic Forecasting |
By: | Elliott, Graham; Timmermann, Allan G |
Abstract: | Forecasts guide decisions in all areas of economics and finance and their value can only be understood in relation to, and in the context of, such decisions. We discuss the central role of the loss function in helping determine the forecaster's objectives and use this to present a unified framework for both the construction and evaluation of forecasts. Challenges arise from the explosion in the sheer volume of predictor variables under consideration and the forecaster's ability to entertain an endless array of functional forms and time-varying specifications, none of which may coincide with the `true' model. Methods for comparing the forecasting performance of pairs of models or evaluating the ability of the best of many models to beat a benchmark specification are also reviewed. |
Keywords: | economic forecasting; forecast evaluation; loss function |
JEL: | C53 |
Date: | 2007–03 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:6158&r=for |
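The abstract's central point, that a forecast's value depends on the decision maker's loss function, can be illustrated with a small numpy sketch. Everything here (the lin-lin loss, its asymmetry parameters, the simulated predictive distribution) is an illustrative assumption, not the authors' example: under squared-error loss the optimal point forecast is the predictive mean, while under asymmetric piecewise-linear ("lin-lin") loss it is the a/(a+b) quantile.

```python
import numpy as np

rng = np.random.default_rng(0)
# Draws from a (simulated) predictive distribution for the outcome.
draws = rng.normal(loc=0.0, scale=1.0, size=100_000)

def linlin_loss(errors, a=1.0, b=3.0):
    """Asymmetric loss: under-prediction (e > 0) costs a per unit, over-prediction costs b."""
    return np.where(errors > 0, a * errors, -b * errors)

mse_forecast = draws.mean()                       # optimal under squared-error loss
linlin_forecast = np.quantile(draws, 1.0 / 4.0)   # optimal under lin-lin: a/(a+b) = 1/4 quantile

# Expected lin-lin loss of each candidate point forecast:
risk_mean = linlin_loss(draws - mse_forecast).mean()
risk_quantile = linlin_loss(draws - linlin_forecast).mean()
```

With over-prediction three times as costly as under-prediction, the loss-optimal forecast sits below the mean, and using the mean anyway incurs a strictly higher expected loss.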
By: | Norman Swanson (Rutgers University); Geetesh Bhardwaj (Rutgers University)
Abstract: | This chapter builds on previous work by Bhardwaj and Swanson (2004) who address the notion that many fractional I(d) processes may fall into the “empty box” category, as discussed in Granger (1999). However, rather than focusing primarily on linear models, as do Bhardwaj and Swanson, we analyze the effects of the business cycle on the forecasting performance of ARFIMA, AR, MA, ARMA, GARCH, and STAR models. This is done via examination of ex ante forecasting evidence based on an updated version of the absolute returns series examined by Ding, Granger and Engle (1993); and via the use of Diebold and Mariano (1995) and Clark and McCracken (2001) predictive accuracy tests. Results are presented for a variety of forecast horizons and for recursive and rolling estimation schemes. We find that the business cycle does not seem to have an effect on the relative forecasting performance of ARFIMA models. |
Keywords: | fractional integration, long horizon prediction, long memory, parameter estimation error, stock returns |
JEL: | C15 C22 C53 |
Date: | 2006–09–22 |
URL: | http://d.repec.org/n?u=RePEc:rut:rutres:200613&r=for |
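As a rough illustration of the Diebold and Mariano (1995) predictive accuracy test cited above, here is a minimal numpy sketch. The simulated forecasts, squared-error loss, and the small fixed Bartlett lag truncation are all illustrative assumptions; real applications tie the lag truncation to the forecast horizon.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
y = rng.normal(size=T)                   # target series
f1 = y + rng.normal(scale=0.5, size=T)   # forecast 1 (more accurate by construction)
f2 = y + rng.normal(scale=1.0, size=T)   # forecast 2 (less accurate by construction)

d = (y - f1) ** 2 - (y - f2) ** 2        # loss differential under squared-error loss
dbar = d.mean()

# Bartlett (Newey-West) long-run variance of d with a small fixed lag truncation.
lags = 2
lrv = np.mean((d - dbar) ** 2)
for k in range(1, lags + 1):
    gamma_k = np.mean((d[k:] - dbar) * (d[:-k] - dbar))
    lrv += 2.0 * (1.0 - k / (lags + 1)) * gamma_k

dm_stat = dbar / np.sqrt(lrv / T)        # compare to a standard normal under the null
```

A large negative statistic favors forecast 1 over forecast 2; here the accuracy gap is built into the simulation, so the null of equal accuracy is rejected decisively.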
By: | Norman Swanson (Rutgers University); Oleg Korenok (Virginia Commonwealth University) |
Abstract: | In this paper we construct output gap and inflation predictions using a variety of DSGE sticky price models. Predictive density accuracy tests related to the test discussed in Corradi and Swanson (2005a) as well as predictive accuracy tests due to Diebold and Mariano (1995) and West (1996) are used to compare the alternative models. A number of simple time series prediction models (such as autoregressive and vector autoregressive (VAR) models) are additionally used as strawman models. Given that DSGE model restrictions are routinely nested within VAR models, the addition of our strawman models allows us to indirectly assess the usefulness of imposing theoretical restrictions implied by DSGE models on unrestricted econometric models. With respect to predictive density evaluation, our results suggest that the standard sticky price model discussed in Calvo (1983) is not outperformed by the same model augmented either with information or indexation, when used to predict the output gap. On the other hand, there are clear gains to using the more recent models when predicting inflation. Results based on mean square forecast error analysis are less clear-cut, although the standard sticky price model fares best at our longest forecast horizon of 3 years, and performs relatively poorly at shorter horizons. When the strawman time series models are added to the picture, we find that the DSGE models still fare very well, often winning our forecast competitions, suggesting that theoretical macroeconomic restrictions yield useful additional information for forming macroeconomic forecasts. |
Keywords: | model selection, predictive density, sticky information, sticky price |
JEL: | C32 E12 E3 |
Date: | 2006–09–22 |
URL: | http://d.repec.org/n?u=RePEc:rut:rutres:200615&r=for |
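One simple metric underlying predictive density comparisons of the kind described above is the average log predictive score. The sketch below is an illustrative assumption, not the paper's DSGE setup: two Gaussian density forecasts, one with the correct variance and one misspecified, are ranked by average log score on simulated outcomes.

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(size=2000)   # realized outcomes, truly N(0, 1)

def gaussian_logscore(y, mu, sigma):
    """Log density of N(mu, sigma^2) evaluated at the realized outcomes."""
    return -0.5 * np.log(2.0 * np.pi * sigma ** 2) - (y - mu) ** 2 / (2.0 * sigma ** 2)

score_good = gaussian_logscore(y, 0.0, 1.0).mean()   # correctly specified density
score_bad = gaussian_logscore(y, 0.0, 2.0).mean()    # variance misspecified
```

The correctly specified density attains the higher average log score, which is the population-level property such comparisons exploit.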
By: | Norman Swanson (Rutgers University); Nii Ayi Armah (Rutgers University) |
Abstract: | In this chapter we discuss model selection and predictive accuracy tests in the context of parameter and model uncertainty under recursive and rolling estimation schemes. We begin by summarizing some recent theoretical findings, with particular emphasis on the construction of valid bootstrap procedures for calculating the impact of parameter estimation error on the class of test statistics with limiting distributions that are functionals of Gaussian processes with covariance kernels that are dependent upon parameter and model uncertainty. We then provide an example of a particular test which falls in this class. Namely, we outline the so-called Corradi and Swanson (CS: 2002) test of (non)linear out-of-sample Granger causality. Thereafter, we carry out a series of Monte Carlo experiments examining the properties of the CS and a variety of other related predictive accuracy and model selection type tests. Finally, we present the results of an empirical investigation of the marginal predictive content of money for income, in the spirit of Stock and Watson (1989), Swanson (1998), Amato and Swanson (2001), and the references cited therein. We find that there is evidence of predictive causation when in-sample estimation periods are ended any time during the 1980s, but less evidence during the 1970s. Furthermore, recursive estimation windows yield better prediction models when prediction periods begin in the 1980s, while rolling estimation windows yield better models when prediction periods begin during the 1970s and 1990s. Interestingly, these two results can be combined into a coherent picture of what is driving our empirical results. Namely, when recursive estimation windows yield lower overall predictive MSEs, then bigger prediction models that include money are preferred, while smaller models without money are preferred when rolling models yield the lowest MSE predictors. |
Keywords: | block bootstrap, forecasting, nonlinear causality, recursive estimation scheme, rolling estimation scheme, model misspecification
JEL: | C22 C51 |
Date: | 2006–09–22 |
URL: | http://d.repec.org/n?u=RePEc:rut:rutres:200619&r=for |
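The recursive and rolling estimation schemes contrasted in the abstract can be sketched with a one-step-ahead AR(1) forecasting exercise. The simulated process, window length, and sample split are illustrative assumptions: the recursive scheme re-estimates on all data available at each forecast origin, while the rolling scheme uses only the most recent fixed-length window.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 400
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + rng.normal()   # stationary AR(1) data

R = 200        # end of the initial estimation sample
window = 100   # rolling window length
err_rec, err_rol = [], []
for t in range(R, T):
    # Recursive scheme: estimate the AR(1) slope on all data through t-1.
    x, z = y[:t - 1], y[1:t]
    phi_rec = (x @ z) / (x @ x)
    # Rolling scheme: estimate on the most recent `window` observations only.
    xw, zw = y[t - window:t - 1], y[t - window + 1:t]
    phi_rol = (xw @ zw) / (xw @ xw)
    err_rec.append(y[t] - phi_rec * y[t - 1])
    err_rol.append(y[t] - phi_rol * y[t - 1])

mse_rec = float(np.mean(np.square(err_rec)))
mse_rol = float(np.mean(np.square(err_rol)))
```

With a stable data generating process both schemes attain an out-of-sample MSE near the innovation variance; the trade-off the paper documents arises when instability favors discarding old observations.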
By: | Valentina Corradi (Queen Mary, University of London); Norman Swanson (Rutgers University) |
Abstract: | This chapter discusses estimation, specification testing, and model selection of predictive density models. In particular, predictive density estimation is briefly discussed, and a variety of different specification and model evaluation tests due to various authors including Christoffersen and Diebold (2000), Diebold, Gunther and Tay (1998), Diebold, Hahn and Tay (1999), White (2000), Bai (2003), Corradi and Swanson (2005a,b,c,d), Hong and Li (2003), and others are reviewed. Extensions of some existing techniques to the case of out-of-sample evaluation are also provided, and asymptotic results associated with these extensions are outlined. |
Keywords: | block bootstrap, density and conditional distribution, forecast accuracy testing, mean square error, parameter estimation error |
JEL: | C22 C51 |
Date: | 2006–10–02 |
URL: | http://d.repec.org/n?u=RePEc:rut:rutres:200621&r=for |
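One of the predictive density evaluation ideas reviewed above, the probability integral transform (PIT) check of Diebold, Gunther and Tay (1998), can be sketched in a few lines. The simulated data and the two candidate densities are illustrative assumptions: if the predictive densities are correctly specified, the PITs z_t = F_t(y_t) are i.i.d. Uniform(0,1).

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x, mu=0.0, sigma=1.0):
    """Normal CDF via the error function (avoids external dependencies)."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

rng = np.random.default_rng(4)
y = rng.normal(size=5000)   # outcomes, truly N(0, 1)

pit_good = np.array([norm_cdf(v, 0.0, 1.0) for v in y])   # correct predictive density
pit_bad = np.array([norm_cdf(v, 0.0, 2.0) for v in y])    # variance too large

# Under correct specification the PITs look Uniform(0,1): mean ~1/2, variance ~1/12.
var_good = pit_good.var()
var_bad = pit_bad.var()   # too concentrated around 0.5: the density is too wide
```

Formal versions of this check test uniformity and independence of the PITs; the out-of-sample extensions in the chapter further account for parameter estimation error.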
By: | Norman Swanson (Rutgers University); Valentina Corradi (Queen Mary, University of London) |
Abstract: | Our objectives in this paper are twofold. First, we introduce block bootstrap techniques that are (first order) valid in recursive estimation frameworks. Thereafter, we present two examples where predictive accuracy tests are made operational using our new bootstrap procedures. In one application, we outline a consistent test for out-of-sample nonlinear Granger causality, and in the other we outline a test for selecting amongst multiple alternative forecasting models, all of which are possibly misspecified. More specifically, our examples extend the White (2000) reality check to the case of non-vanishing parameter estimation error, and extend the integrated conditional moment tests of Bierens (1982, 1990) and Bierens and Ploberger (1997) to the case of out-of-sample prediction. In both examples, appropriate re-centering of the bootstrap score is required in order to ensure that the tests have asymptotically correct size, and the need for such re-centering is shown to arise quite naturally when testing hypotheses of predictive accuracy. In a Monte Carlo investigation, we compare the finite sample properties of our block bootstrap procedures with the parametric bootstrap due to Kilian (1999), all within the context of various encompassing and predictive accuracy tests. An empirical illustration is also discussed, in which it is found that unemployment appears to have nonlinear marginal predictive content for inflation. |
Keywords: | block bootstrap, nonlinear causality, parameter estimation error, reality check, recursive estimation scheme |
JEL: | C22 C51 |
Date: | 2006–09–22 |
URL: | http://d.repec.org/n?u=RePEc:rut:rutres:200618&r=for |
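The mechanical core of the block bootstrap referenced above is easy to sketch: resample overlapping blocks of the series to preserve short-run dependence. The block length and series below are illustrative assumptions, and the sketch deliberately omits the paper's key refinement, the re-centering of the bootstrap statistic needed for correct asymptotic size under recursive estimation.

```python
import numpy as np

rng = np.random.default_rng(5)
y = rng.normal(size=240)          # observed time series
T, block_len = len(y), 12
n_blocks = T // block_len         # enough blocks to rebuild a series of length T

# Moving-block bootstrap: draw overlapping block start points uniformly,
# then concatenate the blocks into one pseudo-series.
starts = rng.integers(0, T - block_len + 1, size=n_blocks)
y_star = np.concatenate([y[s:s + block_len] for s in starts])
```

Repeating the draw many times and recomputing the (re-centered) test statistic on each pseudo-series yields the bootstrap critical values.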
By: | Andrea Carriero (Queen Mary, University of London); Massimiliano Marcellino (IEP-Bocconi University, IGIER and CEPR) |
Abstract: | In this paper we provide an overview of recent developments in the methodology for the construction of composite coincident and leading indexes, and apply them to the UK. In particular, we evaluate the relative merits of factor based models and Markov switching specifications for the construction of coincident and leading indexes. For the leading indexes we also evaluate the performance of probit models and pooling. The results indicate that alternative methods produce similar coincident indexes, while there are more marked differences in the leading indexes. |
Keywords: | Forecasting, Business cycles, Leading indicators, Coincident indicators, Turning points |
JEL: | E32 E37 C53 |
Date: | 2007–03 |
URL: | http://d.repec.org/n?u=RePEc:qmw:qmwecw:wp590&r=for |
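A stripped-down version of the factor-based coincident index idea is the first principal component of a panel of standardized coincident series. The four simulated series sharing one common component are illustrative assumptions, not the UK data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
T = 300
common = np.cumsum(rng.normal(size=T))   # latent "business cycle" component
# Four observed coincident series: common component plus idiosyncratic noise.
X = np.column_stack([common + rng.normal(scale=0.5, size=T) for _ in range(4)])

Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each series
eigval, eigvec = np.linalg.eigh(np.cov(Z.T))
index = Z @ eigvec[:, -1]                  # first principal component = coincident index

# The extracted index tracks the common component closely (up to sign).
corr = abs(np.corrcoef(index, common)[0, 1])
```

Full-blown factor models estimate the loadings and factor dynamics jointly; Markov switching specifications instead date regimes directly, which is where the paper finds the methods diverge more for leading than for coincident indexes.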
By: | Carlo A. Favero, Linlin Niu and Luca Sala |
Abstract: | This paper addresses the issue of forecasting the term structure. We provide a unified state-space modelling framework that encompasses different existing discrete-time yield curve models. Within this framework we analyze the impact on forecasting performance of two crucial modelling choices, i.e. the imposition of no-arbitrage restrictions and the size of the information set used to extract factors. Using US yield curve data, we find that: a. macro factors are very useful in forecasting at medium/long forecasting horizons; b. financial factors are useful in short-run forecasting; c. no-arbitrage models are effective in shrinking the dimensionality of the parameter space and, when supplemented with additional macro information, are very effective in forecasting; d. within no-arbitrage models, assuming time-varying risk prices is more favorable than assuming constant risk prices for medium horizon-maturity forecasts when yield factors dominate the information set, and for short horizon and long maturity forecasts when macro factors dominate the information set; e. however, given the complexity and the highly non-linear parameterization of no-arbitrage models, it is very difficult to exploit within this type of model the additional information offered by large macroeconomic datasets. |
URL: | http://d.repec.org/n?u=RePEc:igi:igierp:318&r=for |
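A common observable counterpart to the latent yield-curve factors discussed above is the level/slope/curvature triple computed directly from yields. The maturities and the simulated yield panel below are illustrative assumptions, not the US data or the state-space models of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
maturities = np.array([0.25, 1.0, 2.0, 5.0, 10.0])   # maturities in years
T = 200
# Simulated yields driven by slowly moving level and slope components.
level = 4.0 + np.cumsum(rng.normal(scale=0.05, size=T))
slope = np.cumsum(rng.normal(scale=0.05, size=T))
yields = (level[:, None] + slope[:, None] * (maturities / 10.0)
          + rng.normal(scale=0.02, size=(T, maturities.size)))

# Simple empirical factor proxies computed from the yield panel:
f_level = yields.mean(axis=1)                 # average yield
f_slope = yields[:, -1] - yields[:, 0]        # 10y minus 3m spread
f_curv = 2 * yields[:, 2] - yields[:, 0] - yields[:, -1]   # butterfly around 2y
```

State-space models replace these fixed-weight proxies with estimated loadings, and no-arbitrage restrictions tie those loadings across maturities, which is the dimensionality shrinkage the abstract refers to.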
By: | Andrea Carriero (Queen Mary, University of London) |
Abstract: | Even though there is fairly extensive evidence against the Expectations Hypothesis (EH) of the term structure of interest rates, there still seems to be an element of truth in the theory which may be exploited for forecasting and simulation. This paper formalizes this idea by proposing a way to use the EH without imposing it dogmatically. It does so by using a Bayesian framework in which the extent to which the EH is imposed on the data is under the control of the researcher. This allows one to study a continuum of models ranging from one in which the EH holds exactly to one in which it does not hold at all. In between these two extremes, the EH features transitory deviations which may be explained by time-varying (but stationary) term premia and errors in expectations. Once cast in this framework, the EH holds on average (i.e. after integrating out the effect of the transitory deviations) and can be safely and effectively used for forecasting and simulation. |
Keywords: | Bayesian VARs, Expectations theory, Term structure |
JEL: | C11 E43 E44 E47 |
Date: | 2007–03 |
URL: | http://d.repec.org/n?u=RePEc:qmw:qmwecw:wp591&r=for |
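The general mechanism behind imposing a theory "non-dogmatically" can be sketched with a conjugate normal posterior for a single regression slope: the posterior shrinks the OLS estimate toward a theory-implied prior mean, with the prior variance controlling how tightly the theory is imposed. The regression, the theory-implied value, and the grid of prior variances are all illustrative assumptions, not the paper's Bayesian VAR.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 100
x = rng.normal(size=n)
beta_true = 0.8
y = beta_true * x + rng.normal(scale=1.0, size=n)   # error variance known, = 1

beta_ols = (x @ y) / (x @ x)
beta_prior = 1.0   # value the theory would impose

# Posterior mean for each prior variance tau2: dogmatic -> intermediate -> diffuse.
posts = []
for tau2 in (1e-6, 1.0, 1e6):
    post = (x @ x * beta_ols + beta_prior / tau2) / (x @ x + 1.0 / tau2)
    posts.append(post)
```

A tiny prior variance reproduces the theory exactly, a huge one reproduces OLS, and intermediate values trace out the continuum of models the abstract describes.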