on Forecasting |
By: | Kirstin Hubrich; David F. Hendry (Research Department, European Central Bank) |
Abstract: | We explore whether forecasting an aggregate variable using information on its disaggregate components can reduce the prediction mean squared error relative to forecasting the disaggregates and aggregating those forecasts, or to using only aggregate information to forecast the aggregate. An implication of a general theory of prediction is that, in population, the first approach should outperform the alternative methods of forecasting the aggregate. However, forecast models are based on sample information, and the data generation process and the selected forecast model might differ. We show how changes in collinearity between regressors affect the bias-variance trade-off in model selection, and how the criterion used to select variables in the forecasting model affects forecast accuracy. We investigate why forecasting the aggregate using information on its disaggregate components improves the accuracy of the aggregate forecast of Euro area inflation in some situations, but not in others. |
Keywords: | Disaggregate information, predictability, forecast model selection, VAR, factor models |
JEL: | C32 C53 E31 |
Date: | 2005–11–11 |
URL: | http://d.repec.org/n?u=RePEc:sce:scecf5:270&r=for |
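The comparison described in the abstract above can be illustrated with a minimal simulation. This is only a sketch, not the authors' procedure: the simulated components, the AR(1) forecast models and the expanding evaluation window are assumptions made here for illustration (Python, using numpy and statsmodels).

# Minimal sketch: one-step-ahead forecasts of an aggregate, either from a model
# of the aggregate itself or by summing forecasts of its two components.
# Data, lag orders and evaluation window are illustrative assumptions.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
T = 400
x1 = np.zeros(T)
x2 = np.zeros(T)
for t in range(1, T):
    x1[t] = 0.8 * x1[t - 1] + rng.normal()
    x2[t] = 0.3 * x2[t - 1] + rng.normal()
agg = x1 + x2                                    # the aggregate is the sum of the components

def one_step_forecasts(y, start):
    """Expanding-window one-step-ahead AR(1) forecasts of y."""
    return np.array([AutoReg(y[:t], lags=1).fit().forecast(1)[0]
                     for t in range(start, len(y))])

start = 200
actual = agg[start:]
direct = one_step_forecasts(agg, start)                                   # aggregate model
indirect = one_step_forecasts(x1, start) + one_step_forecasts(x2, start)  # sum of component forecasts

print("MSE, direct aggregate model   :", np.mean((actual - direct) ** 2))
print("MSE, aggregated disaggregates :", np.mean((actual - indirect) ** 2))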
By: | Ali Dib; Kevin Moran |
Abstract: | This paper documents the out-of-sample forecasting accuracy of the New Keynesian model for Canadian data. We repeatedly estimate the model over samples of increasing length, forecasting out-of-sample one to four quarters ahead at each step. We then compare these forecasts with those arising from an unrestricted VAR, using recent econometric tests. We show that the accuracy of the New Keynesian model's forecasts compares favourably with that of the benchmark. The principle of parsimony is invoked to explain these results. |
Keywords: | out-of-sample forecasting ability, estimated DSGE models |
JEL: | E32 E37 E58 |
Date: | 2005–11–11 |
URL: | http://d.repec.org/n?u=RePEc:sce:scecf5:235&r=for |
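The recursive exercise described above can be sketched as an expanding-window forecast loop followed by a Diebold-Mariano-type comparison of squared errors. Everything below is an assumption for illustration: an AR(1) stands in for the estimated structural model, a VAR(1) for the unrestricted benchmark, the data are simulated, and the test statistic omits the HAC correction a proper multi-step comparison would need.

# Sketch: expanding-window h-step-ahead forecasts from two models and a naive
# Diebold-Mariano-type statistic on their squared-error differentials.
# An AR(1) stands in for the structural model and a VAR(1) for the benchmark;
# data, horizon and windows are illustrative assumptions (no HAC correction).
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
T, h = 240, 4
y = np.zeros((T, 2))
for t in range(1, T):                                  # simulate a bivariate VAR(1)
    y[t] = np.array([[0.5, 0.1], [0.2, 0.4]]) @ y[t - 1] + rng.normal(size=2)

errs_a, errs_b = [], []
for t in range(160, T - h):
    target = y[t + h - 1, 0]                           # h-step-ahead value of series 0
    fa = AutoReg(y[:t, 0], lags=1).fit().forecast(h)[-1]   # univariate model
    res = VAR(y[:t]).fit(1)                                # bivariate benchmark
    fb = res.forecast(y[t - 1:t], h)[-1, 0]
    errs_a.append(target - fa)
    errs_b.append(target - fb)

d = np.array(errs_a) ** 2 - np.array(errs_b) ** 2      # loss differential
dm = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))      # naive DM-type statistic
print("Diebold-Mariano-type statistic:", round(dm, 2))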
By: | Frédérick Demers; David Dupuis |
Abstract: | The authors investigate whether the aggregation of region-specific forecasts improves upon the direct forecasting of Canadian GDP growth. They follow Marcellino, Stock, and Watson (2003) and use disaggregate information to predict aggregate GDP growth. An array of multivariate forecasting models is considered for five Canadian regions, and single-equation models are considered for direct forecasting of Canadian GDP. The authors focus on forecasts at 1-, 2-, 4-, and 8-quarter horizons, which best reflect the long and variable lags of monetary policy transmission. Region-specific forecasts are aggregated to the country level and tested against aggregate country-level forecasts. The empirical results show that Canadian GDP growth forecasts can be improved by indirectly forecasting the GDP growth of the Canadian economic regions using a multivariate approach, namely a vector autoregressive moving-average model with exogenous regressors (VARMAX). |
Keywords: | Econometric and statistical methods |
JEL: | E17 C32 C53 |
Date: | 2005 |
URL: | http://d.repec.org/n?u=RePEc:bca:bocawp:05-31&r=for |
By: | Christoph Schleicher; Francisco Barillas |
Abstract: | This paper examines evidence of long- and short-run co-movement in Canadian sectoral output data. Our framework builds on a vector-error-correction representation that allows us to test for, and compute full-information maximum-likelihood estimates of, models with codependent cycle restrictions. We find that the seven sectors under consideration contain five common trends and five codependent cycles, and we use these estimates to obtain a multivariate Beveridge-Nelson decomposition that isolates and compares the common components. A forecast error variance decomposition indicates that some sectors, such as manufacturing and construction, are subject to persistent transitory shocks, whereas other sectors, such as financial services, are not. We also find that imposing common-feature restrictions leads to a non-trivial gain in the ability to forecast both aggregate and sectoral output. Among the main conclusions is that manufacturing, construction, and the primary sector are the most important sources of business cycle fluctuations for the Canadian economy. |
Keywords: | common features, business cycles, vector autoregressions |
JEL: | C15 C22 C32 |
Date: | 2005–11–11 |
URL: | http://d.repec.org/n?u=RePEc:sce:scecf5:214&r=for |
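The trend-counting step in the abstract above can be illustrated with the Johansen trace test: with n series and cointegration rank r there are n - r common stochastic trends. The sketch below uses simulated data rather than the Canadian sectoral series, and the lag and deterministic settings are arbitrary assumptions.

# Sketch: counting common stochastic trends with the Johansen trace test.
# Simulated data only (not the Canadian sectoral series); lag order and
# deterministic terms are arbitrary assumptions.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(2)
T = 500
trends = np.cumsum(rng.normal(size=(T, 2)), axis=0)     # two independent random walks
loadings = np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [1.0, 1.0]])
y = trends @ loadings.T + rng.normal(scale=0.5, size=(T, 3))   # true rank = 3 - 2 = 1

res = coint_johansen(y, det_order=0, k_ar_diff=1)
for r, (stat, cvals) in enumerate(zip(res.lr1, res.cvt)):
    print(f"H0: rank <= {r}: trace stat = {stat:.1f}, reject at 5%: {stat > cvals[1]}")
# Common trends = number of series minus the selected cointegration rank.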
By: | Gonzalo Llosa (Inter-American Development Bank and Central Bank of Peru); Vicente Tuesta (Central Bank of Peru); Marco Vega (Central Bank of Peru) |
Abstract: | We build a simple non-structural BVAR forecasting framework to predict key Peruvian macroeconomic data, in particular inflation and output. Unlike standard applications, we build our Litterman prior specification on the fact that the structure driving the dynamics of the economy might have shifted towards a state where a clear nominal anchor has become well grounded (Inflation Targeting). We compare different BVAR specifications with a "naive" random walk and find that they outperform the random walk in terms of inflation forecasts at all horizons. However, our output (PBI) forecasts are not accurate enough to beat the naive random walk. |
Keywords: | Bayesian VAR, Forecasting, Inflation Targeting |
JEL: | E31 E37 E47 C11 C53 |
Date: | 2005–11 |
URL: | http://d.repec.org/n?u=RePEc:rbp:wpaper:2005-007&r=for |
By: | Carlos Capistrán-Carmona |
Abstract: | This paper documents that inflation forecasts of the Federal Reserve systematically under-predicted inflation before Volcker's appointment as Chairman and systematically over-predicted it afterward. It also documents that, under quadratic loss, commercial forecasts have information not contained in the forecasts of the Federal Reserve. It demonstrates that this evidence leads to a rejection of the joint hypothesis that the Federal Reserve has rational expectations and quadratic loss. To investigate the causes of this failure, the paper uses moment conditions derived from a model of an inflation-targeting central bank to back out the loss function implied by the forecasts of the Federal Reserve. It finds that the cost of having inflation above the target was larger than the cost of having inflation below it for the post-Volcker Federal Reserve, and that the opposite was true for the pre-Volcker era. Once these asymmetries are taken into account, the Federal Reserve is found to be rational and to efficiently incorporate the information contained in forecasts from the Survey of Professional Forecasters. |
Keywords: | Asymmetric loss function, Inflation forecasts, Forecast Evaluation |
JEL: | C53 E52 |
Date: | 2005–11–11 |
URL: | http://d.repec.org/n?u=RePEc:sce:scecf5:127&r=for |
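The role of asymmetric loss in the abstract above can be made concrete with a lin-lin loss: when under- and over-predictions are penalized differently, the optimal point forecast is a quantile of the predictive distribution rather than its mean, so a systematic "bias" can be rational. The numbers below are invented for illustration, not taken from the paper.

# Sketch: under lin-lin loss the optimal point forecast is a quantile of the
# predictive distribution, not its mean, so a systematically "biased" forecast
# can be rational. The distribution and the asymmetry parameter tau are invented.
import numpy as np

rng = np.random.default_rng(3)
inflation = rng.normal(loc=2.0, scale=1.0, size=100_000)      # predictive draws

def linlin_loss(forecast, outcomes, tau):
    """Mean lin-lin loss: weight tau on under-predictions, 1 - tau on over-predictions."""
    e = outcomes - forecast
    return np.mean(np.where(e > 0, tau * e, (1 - tau) * (-e)))

tau = 0.7                                  # under-predicting is the costlier mistake here
grid = np.linspace(0.0, 4.0, 401)
best = grid[int(np.argmin([linlin_loss(f, inflation, tau) for f in grid]))]

print("mean of predictive distribution   :", round(float(inflation.mean()), 2))
print("optimal forecast under tau = 0.7  :", round(float(best), 2))
print("theoretical optimum (tau-quantile):", round(float(np.quantile(inflation, tau)), 2))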
By: | J. Huston McCulloch |
Abstract: | Adaptive Least Squares (ALS), i.e. recursive regression with asymptotically constant gain, as proposed by Ljung (1992), Sargent (1993, 1999), and Evans and Honkapohja (2001), is an increasingly widely-used method of estimating time-varying relationships and of proxying agents’ time-evolving expectations. This paper provides theoretical foundations for ALS as a special case of the generalized Kalman solution of a Time Varying Parameter (TVP) model. This approach is in the spirit of that proposed by Ljung (1992) and Sargent (1999), but unlike theirs, nests the rigorous Kalman solution of the elementary Local Level Model, and employs a very simple, yet rigorous, initialization. Unlike other approaches, the proposed method allows the asymptotic gain to be estimated by maximum likelihood (ML). The ALS algorithm is illustrated with univariate time series models of U.S. unemployment and inflation. Because the null hypothesis that the coefficients are in fact constant lies on the boundary of the permissible parameter space, the usual regularity conditions for the chi-square limiting distribution of likelihood-based test statistics are not met. Consequently, critical values of the Likelihood Ratio test statistics are established by Monte Carlo means and used to test the constancy of the parameters in the estimated models. |
Keywords: | Kalman Filter, Adaptive Learning, Adaptive Least Squares, Time Varying Parameter Model, Natural Unemployment Rate, Inflation Forecasting |
JEL: | C22 E37 E31 |
Date: | 2005–11–11 |
URL: | http://d.repec.org/n?u=RePEc:sce:scecf5:239&r=for |
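A bare-bones version of the constant-gain recursive least squares algorithm the abstract refers to is given below. The gain value, the scalar-regressor model and the ad hoc initialization are assumptions made for illustration; the paper's Kalman-based initialization and maximum-likelihood estimation of the gain are not reproduced here.

# Sketch: constant-gain ("adaptive") recursive least squares for y_t = x_t * b_t + e_t.
# gamma is the constant gain; initialization and gain value are ad hoc here,
# unlike the Kalman-based treatment in the paper.
import numpy as np

rng = np.random.default_rng(4)
T, gamma = 500, 0.02
beta_true = np.where(np.arange(T) < 250, 1.0, 2.0)     # coefficient shifts mid-sample
x = rng.normal(size=T)
y = beta_true * x + rng.normal(scale=0.5, size=T)

b = 0.0                 # coefficient estimate
R = 1.0                 # second-moment estimate (scalar regressor case)
path = []
for t in range(T):
    R = R + gamma * (x[t] ** 2 - R)                    # update second-moment estimate
    b = b + gamma * (x[t] / R) * (y[t] - x[t] * b)     # constant-gain coefficient update
    path.append(b)

print("estimate before break:", round(path[240], 2))
print("estimate after break :", round(path[-1], 2))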
By: | Torben G. Andersen; Tim Bollerslev; Francis X. Diebold |
Abstract: | A rapidly growing literature has documented important improvements in financial return volatility measurement and forecasting via use of realized variation measures constructed from high-frequency returns coupled with simple modeling procedures. Building on recent theoretical results in Barndorff-Nielsen and Shephard (2004a, 2005) for related bi-power variation measures, the present paper provides a practical and robust framework for non-parametrically measuring the jump component in asset return volatility. In an application to the DM/$ exchange rate, the S&P500 market index, and the 30-year U.S. Treasury bond yield, we find that jumps are both highly prevalent and distinctly less persistent than the continuous sample path variation process. Moreover, many jumps appear directly associated with specific macroeconomic news announcements. Separating jump from non-jump movements in a simple but sophisticated volatility forecasting model, we find that almost all of the predictability in daily, weekly, and monthly return volatilities comes from the non-jump component. Our results thus set the stage for a number of interesting future econometric developments and important financial applications by separately modeling, forecasting, and pricing the continuous and jump components of the total return variation process. |
JEL: | C1 G1 |
Date: | 2005–11 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:11775&r=for |
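The jump-separation idea in the abstract above rests on two intraday measures: realized variance RV (the sum of squared returns) and bipower variation BV (a scaled sum of products of adjacent absolute returns), with the jump contribution estimated as max(RV - BV, 0). The sketch below applies these formulas to simulated intraday returns, not to the paper's DM/$, S&P 500 or Treasury data, and ignores small-sample scaling factors.

# Sketch: realized variance, bipower variation and the implied jump component
# for one trading day of simulated 5-minute returns with one injected jump.
import numpy as np

rng = np.random.default_rng(5)
n = 78                                   # e.g. 5-minute returns over a 6.5-hour day
r = rng.normal(scale=0.001, size=n)      # continuous (diffusive) part
r[40] += 0.01                            # add a single jump

rv = np.sum(r ** 2)                                              # realized variance
mu1 = np.sqrt(2.0 / np.pi)                                       # E|Z| for a standard normal
bv = (1.0 / mu1 ** 2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))   # bipower variation
jump = max(rv - bv, 0.0)                                         # nonnegative jump estimate

print(f"RV = {rv:.2e}, BV = {bv:.2e}, jump component = {jump:.2e}")
print(f"share of variance attributed to jumps: {jump / rv:.1%}")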
By: | Glaser, Markus (Sonderforschungsbereich 504); Langer, Thomas (Westfälische Wilhelms-Universität Münster, Chair of Business Administration, esp. Finance); Reynders, Jens; Weber, Martin (Chair of Business Administration and Finance, esp. Banking) |
Abstract: | In this study, we analyze whether individual expectations of stock returns are influenced by the specific elicitation mode (i.e. whether forecasters have to state future price levels or directly state future returns). We thus examine whether there are framing effects in stock market forecasts. We present questionnaire responses of about 250 students from two German universities. Participants were asked to state median forecasts as well as confidence intervals for seven stock market time series. Using a between-subject design, one half of the subjects was asked to state future price levels, while the other group was asked directly for returns. The main results of our study can be summarized as follows. There is a highly significant framing effect. For upward-sloping time series, the return forecasts given by investors who are asked directly for returns are significantly higher than those stated by investors who are asked for prices. For downward-sloping time series, the return forecasts given by investors who are asked directly for returns are significantly lower than those stated by investors who are asked for prices. Furthermore, our data show that subjects underestimate the volatility of stock returns, indicating overconfidence. As a new insight, we find that the strength of the overconfidence effect in stock market forecasts is highly significantly affected by whether subjects provide price or return forecasts: volatility estimates are lower (and the overconfidence bias is thus stronger) when subjects are asked for returns rather than for prices. Moreover, we find that financial education improves subjects' answers: the observed framing effect and the overconfidence bias are less pronounced for subjects with higher financial education. |
Date: | 2005–11–03 |
URL: | http://d.repec.org/n?u=RePEc:xrs:sfbmaa:05-40&r=for |
By: | Ivan Baboucek; Martin Jancar |
Abstract: | The paper concerns macro-prudential analysis. It uses an unrestricted VAR model to investigate empirically the transmission among a set of macroeconomic variables describing the development of the Czech economy and the functioning of its credit channel over the past eleven years. Its novelty lies in providing the first systematic assessment of the links between loan quality and macroeconomic shocks in the Czech context. The VAR methodology is applied to monthly data transformed into percentage changes. The out-of-sample forecast indicates that the most likely outlook for the quality of the banking sector's loan portfolio is that, up to the end of 2006, the share of non-performing loans will follow a slightly downward trend below double-digit rates. The impulse response analysis is augmented by stress-testing exercises that enable us to determine a macroeconomic early warning signal of any worsening in the quality of banks' loans. The paper suggests that the Czech banking sector has attained a considerable ability to withstand a credit risk shock and that the banking sector's stability is compatible both with price stability and with economic growth. Despite being devoted to empirical investigation, the paper pays great attention to methodological issues. At the same time, it tries to present both the VAR model and its results transparently and to openly discuss their weak points, which to a large degree can be attributed to data constraints or to the evolutionary nature of an economy in transition. |
Keywords: | Czech Republic, Macro-prudential analysis, Non-performing loans, VAR model. |
JEL: | G18 G21 C51 |
URL: | http://d.repec.org/n?u=RePEc:cnb:wpaper:2005/1&r=for |
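The stress-testing flavour of the exercise above can be sketched by fitting a small VAR and comparing a baseline forecast of a loan-quality indicator with one produced from a deliberately shocked starting point. The variables, data and shock size below are all invented and this is not the paper's actual model.

# Sketch: a stress-test style comparison with an unrestricted VAR. Fit a small VAR
# on GDP growth, inflation and a non-performing-loan (NPL) change indicator, then
# compare the baseline NPL forecast with one conditioned on an adverse GDP shock.
# Variables, data and shock size are invented.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(10)
T = 160
A = np.array([[0.5, 0.0, 0.0],
              [0.2, 0.6, 0.0],
              [-0.3, -0.2, 0.7]])          # gdp growth, inflation, npl change
y = np.zeros((T, 3))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(scale=0.5, size=3)

res = VAR(y).fit(1)
last = y[-1:].copy()

baseline = res.forecast(last, steps=12)[:, 2]          # NPL-change path, no shock
stressed_start = last.copy()
stressed_start[0, 0] -= 3.0                            # adverse GDP-growth shock
stressed = res.forecast(stressed_start, steps=12)[:, 2]

print("cumulative NPL change, baseline :", round(baseline.sum(), 2))
print("cumulative NPL change, stressed :", round(stressed.sum(), 2))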
By: | A. Onatski; V. Karguine |
Abstract: | Data in which each observation is a curve occur in many applied problems. This paper explores prediction in time series in which the data is generated by a curve-valued autoregression process. It develops a novel technique, the predictive factor decomposition, for estimation of the autoregression operator, which is designed to be better suited for prediction purposes than the principal components method. The technique is based on finding a reduced-rank approximation to the autoregression operator that minimizes the norm of the expected prediction error. Implementing this idea, we relate the operator approximation problem to an eigenvalue problem for an operator pencil that is formed by the cross-covariance and covariance operators of the autoregressive process. We develop an estimation method based on regularization of the empirical counterpart of this eigenvalue problem, and prove that with a certain choice of parameters, the method consistently estimates the predictive factors. In addition, we show that forecasts based on the estimated predictive factors converge in probability to the optimal forecasts. The new method is illustrated by an analysis of the dynamics of the term structure of Eurodollar futures rates. We restrict the sample to the period of normal growth and find that in this subsample the predictive factor technique not only outperforms the principal components method but also performs on par with the best available prediction methods. |
Keywords: | Functional data analysis; Dimension reduction; Reduced-rank regression; Principal component; Predictive factor; Generalized eigenvalue problem; Term structure; Interest rates |
JEL: | C23 C53 E43 |
Date: | 2005–11–11 |
URL: | http://d.repec.org/n?u=RePEc:sce:scecf5:59&r=for |
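A loose finite-dimensional reading of the predictive-factor idea above: look for directions b whose scores b'x(t-1) deliver the largest reduction in the trace prediction mean squared error for x(t); they solve a generalized eigenvalue problem built from the cross-covariance and covariance matrices. This matrix shortcut on simulated data is only an illustration under those assumptions, not the authors' regularized operator method for curve-valued series.

# Sketch: a loose finite-dimensional analogue of predictive factors, obtained from
# the generalized eigenvalue problem (C1'C1) b = lambda * C0 b, with C0 the
# covariance and C1 the lag-1 cross-covariance. Illustration only; the paper
# treats regularized operators on curves, and the data here are simulated.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(6)
T, k = 1000, 5
A = np.diag(np.linspace(0.45, 0.05, k))        # AR(1) transition used for simulation
x = np.zeros((T, k))
for t in range(1, T):
    x[t] = A @ x[t - 1] + rng.normal(size=k)
x = x - x.mean(axis=0)

C0 = x[:-1].T @ x[:-1] / (T - 1)               # covariance of the predictor x(t-1)
C1 = x[1:].T @ x[:-1] / (T - 1)                # lag-1 cross-covariance E[x(t) x(t-1)']

eigvals, eigvecs = eigh(C1.T @ C1, C0)         # generalized symmetric eigenproblem
order = np.argsort(eigvals)[::-1]
print("trace-MSE reduction per factor:", np.round(eigvals[order], 3))
factors = x[:-1] @ eigvecs[:, order[:2]]       # scores of the two leading factors
print("predictive factor series shape:", factors.shape)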
By: | Fabio Trojani; Francesco Audrino |
Abstract: | We propose a multivariate nonparametric technique for generating reliable historical yield curve scenarios and confidence intervals. The approach is based on a Functional Gradient Descent (FGD) estimation of the conditional mean vector and volatility matrix of a multivariate interest rate series. It is computationally feasible in large dimensions and it can account for non-linearities in the dependence of interest rates at all available maturities. Based on FGD, we apply filtered historical simulation to compute reliable out-of-sample yield curve scenarios and confidence intervals. We back-test our methodology on daily USD bond data for forecasting horizons from 1 to 10 days. Based on several statistical performance measures, we find significant evidence of a higher predictive power of our method when compared to scenario-generating techniques based on (i) factor analysis, (ii) a multivariate CCC-GARCH model, or (iii) an exponential smoothing volatility estimator as in the RiskMetrics approach. |
Keywords: | Conditional mean and volatility estimation; Filtered Historical Simulation; Functional Gradient Descent; Term structure; Multivariate CCC-GARCH models |
JEL: | C14 C15 |
Date: | 2005–11–11 |
URL: | http://d.repec.org/n?u=RePEc:sce:scecf5:14&r=for |
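Filtered historical simulation, the second ingredient in the abstract above, can be sketched independently of the FGD estimator: standardize past returns by fitted volatilities, then bootstrap those standardized residuals and rescale them by simulated future volatility to build scenario paths. In the sketch below a univariate EWMA filter stands in for the paper's FGD-estimated conditional moments, and the data and parameters are assumptions.

# Sketch: filtered historical simulation with an EWMA volatility filter standing
# in for the paper's FGD-estimated conditional moments. Univariate, with toy
# data; lambda, the horizon and the number of paths are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(7)
T, lam, horizon, n_paths = 1000, 0.94, 10, 5000
ret = rng.standard_t(df=5, size=T) * 0.01            # toy daily return series

var = np.empty(T)                                    # EWMA conditional variance filter
var[0] = ret[:50].var()
for t in range(1, T):
    var[t] = lam * var[t - 1] + (1 - lam) * ret[t - 1] ** 2
z = ret / np.sqrt(var)                               # filtered (standardized) residuals

paths = np.empty((n_paths, horizon))
for i in range(n_paths):
    v = lam * var[-1] + (1 - lam) * ret[-1] ** 2     # variance forecast for day 1
    for h in range(horizon):
        r = np.sqrt(v) * rng.choice(z)               # bootstrapped standardized shock
        paths[i, h] = r
        v = lam * v + (1 - lam) * r ** 2             # update variance along the path

cum = paths.sum(axis=1)
print("10-day 1% and 99% scenario quantiles:", np.round(np.quantile(cum, [0.01, 0.99]), 4))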
By: | Eleftherios Giovanis (Chamber of Commerce of Serres-Greece) |
Abstract: | For hundreds of years, people have been preoccupied with the future. The great ancient cultures, the Greeks, the Romans, the Egyptians, the Indians and the Chinese, as well as the modern ones, such as the English, the Germans and the Americans, have, with the help of advances in technology and computing, greatly improved forecasting in fields such as meteorology, seismology, statistics, management and economics. People have always been more interested in the future than in the past or the present. Where will I work, and under what conditions? Will I marry, and if so, might I separate some day? Will I be fired from my job? Is there any possibility that a nuclear war will start, and if one does, how far will it spread and what consequences will it have? Should I declare a price war on my competitor, and by how much should I cut prices, or is it better to ally with him, because the losses I would suffer if I lost such a war would be disastrous? The same reasoning applies to actual warfare: should I use nuclear weapons, or should I also weigh the environmental damage that would result? These and many other questions occupy billions of people daily. This research certainly does not aim or aspire to answer all of these questions, nor even a few of them. It simply presents an alternative forecasting method for grants, private consumption, and gross domestic and national product, which can also be applied, with caution and after extensive testing, in meteorology and other sciences. |
Keywords: | basic econometrics, moving average, moving median, forecasting, ARIMA, autoregressive moving median model |
JEL: | C1 C2 C3 C4 C5 C8 |
Date: | 2005–11–11 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpem:0511013&r=for |
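The forecasting device suggested by the keywords above, a moving median in place of a moving average, can be written in a few lines. The window length and the simulated, outlier-contaminated series below are arbitrary assumptions and not the author's exact specification.

# Sketch: one-step-ahead forecasts from a moving average versus a moving median.
# Window length and simulated data are arbitrary; outliers are added to show
# where the median can help.
import numpy as np

rng = np.random.default_rng(8)
T, window = 200, 5
y = 10 + np.cumsum(rng.normal(scale=0.2, size=T))
y[::25] += rng.normal(scale=3.0, size=len(y[::25]))      # occasional outliers

fc_mean = np.array([y[t - window:t].mean() for t in range(window, T)])
fc_median = np.array([np.median(y[t - window:t]) for t in range(window, T)])
actual = y[window:]

print("MAE, moving average:", round(float(np.mean(np.abs(actual - fc_mean))), 3))
print("MAE, moving median :", round(float(np.mean(np.abs(actual - fc_median))), 3))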
By: | Kesten C. Green (Monash University); J. Scott Armstrong (Wharton School, University of Pennsylvania) |
JEL: | P Q Z |
Date: | 2005–11–13 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpot:0511003&r=for |
By: | Marc P. Giannoni; Jean Boivin |
Abstract: | Standard practice for the estimation of dynamic stochastic general equilibrium (DSGE) models maintains the assumption that economic variables are properly measured by a single indicator, and that all relevant information for the estimation is adequately summarized by a small number of data series, whether or not measurement error is allowed for. However, recent empirical research on factor models has shown that information contained in large data sets is relevant for the evolution of important macroeconomic series. This suggests that conventional model estimates and inference based on estimated DSGE models are likely to be distorted. In this paper, we propose an empirical framework for the estimation of DSGE models that exploits the relevant information from a data-rich environment. This framework provides an interpretation of all information contained in a large data set through the lens of a DSGE model. The estimation involves Bayesian Markov chain Monte Carlo (MCMC) methods, extended so that the estimates can, in some cases, inherit the properties of classical maximum likelihood estimation. We apply this estimation approach to a state-of-the-art DSGE monetary model. Treating theoretical concepts of the model, such as output, inflation and employment, as partially observed, we show that the information from a large set of macroeconomic indicators is important for accurate estimation of the model. It also allows us to improve the forecasts of important economic variables. |
Keywords: | DSGE models, model estimation, measurement error, large data sets, factor models, MCMC techniques, Bayesian estimation |
JEL: | E52 E3 C32 |
Date: | 2005–11–11 |
URL: | http://d.repec.org/n?u=RePEc:sce:scecf5:431&r=for |
By: | Magdalena E. Sokalska; Ananda Chanda (Finance, New York University); Robert F. Engle |
Abstract: | This paper proposes a new way of modeling and forecasting intraday returns. We decompose the volatility of high frequency asset returns into components that may be easily interpreted and estimated. The conditional variance is expressed as a product of daily, diurnal and stochastic intraday volatility components. This model is applied to a comprehensive sample consisting of 10-minute returns on more than 2500 US equities. We apply a number of different specifications. Apart from building a new model, we obtain several interesting forecasting results. In particular, it turns out that forecasts obtained from the pooled cross section of groups of companies seem to outperform the corresponding forecasts from company-by-company estimation. |
Keywords: | ARCH, Intra-day Returns, Volatility |
JEL: | C22 G15 C53 |
Date: | 2005–11–11 |
URL: | http://d.repec.org/n?u=RePEc:sce:scecf5:409&r=for |
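The decomposition described above writes the conditional variance of an intraday return as the product of a daily component, a diurnal (time-of-day) component and a stochastic intraday component. The sketch below covers only the first two pieces, with each day's own realized variance standing in for the externally supplied daily forecasts used in the paper, and with simulated data throughout.

# Sketch: multiplicative decomposition of 10-minute return variance into a daily
# component h and a diurnal (time-of-day) component s; the standardized residual
# would feed an intraday GARCH in the full model. Daily variance is proxied here
# by each day's own realized variance; all data are simulated.
import numpy as np

rng = np.random.default_rng(9)
n_days, n_bins = 250, 39                         # 39 ten-minute bins in a 6.5-hour day
diurnal_true = 1.0 + 0.8 * np.cos(np.linspace(0.0, 2.0 * np.pi, n_bins))   # U-shaped
daily_true = np.exp(rng.normal(scale=0.3, size=n_days))
r = 0.001 * rng.normal(size=(n_days, n_bins)) * np.sqrt(np.outer(daily_true, diurnal_true))

h = (r ** 2).mean(axis=1, keepdims=True)         # daily variance proxy (one per day)
s = (r ** 2 / h).mean(axis=0)                    # diurnal variance pattern across bins
s = s / s.mean()                                 # normalize the pattern to mean one
z = r / np.sqrt(h * s)                           # standardized intraday returns

print("estimated diurnal pattern, first 5 bins:", np.round(s[:5], 2))
print("variance of standardized returns (should be near 1):", round(float(z.var()), 2))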
By: | James Mitchell (NIESR, London) |
Abstract: | Recent work has found that, without the benefit of hindsight, it can prove difficult for policy-makers to pin down accurately the current position of the output gap; real-time estimates are unreliable. However, attention has focused primarily on output gap point estimates alone. But point forecasts are better seen as the central points of ranges of uncertainty, so some revision to real-time estimates should not be surprising. To capture uncertainty fully, density forecasts should be used. This paper introduces, motivates and discusses the idea of evaluating the quality of real-time density estimates of the output gap. It also introduces density forecast combination as a practical means to overcome problems associated with uncertainty over the appropriate output gap estimator. An application to the Euro area illustrates the use of the techniques. Simulated out-of-sample experiments reveal that not only can real-time point estimates of the Euro area output gap be unreliable, but so can the measures of uncertainty associated with them. The implications for policy-makers' use of Taylor-type rules are discussed and illustrated. We find that Taylor rules that exploit real-time output gap density estimates can provide reliable forecasts of the ECB's monetary policy stance only when alternative density forecasts are combined. |
Keywords: | Output gap; Real-Time; Density Forecasts; Density Forecast Combination; Taylor Rules |
JEL: | E32 C53 |
Date: | 2005–11–11 |
URL: | http://d.repec.org/n?u=RePEc:sce:scecf5:52&r=for |
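Density forecast combination, the device proposed in the abstract above, can be reduced to a linear opinion pool: a weighted mixture of the individual predictive densities, with weights possibly tied to past log-score performance. The sketch below uses two Gaussian densities for the current output gap; all means, standard deviations, scores and the weighting rule are assumptions for illustration, not the paper's estimates.

# Sketch: a linear opinion pool combining two Gaussian density forecasts of the
# output gap, with weights based on (assumed) past average log scores.
import numpy as np
from scipy.stats import norm

means = np.array([0.5, -0.2])                    # two competing real-time estimates
sds = np.array([0.8, 1.2])
past_logscores = np.array([-1.10, -1.35])        # assumed past average log scores

w = np.exp(past_logscores - past_logscores.max())
w = w / w.sum()                                  # exponentiated-score weights

grid = np.linspace(-4.0, 4.0, 801)
combined = sum(wi * norm.pdf(grid, mi, si) for wi, mi, si in zip(w, means, sds))

dx = grid[1] - grid[0]
print("weights:", np.round(w, 2))
print("P(output gap > 0) under the combined density:",
      round(combined[grid > 0].sum() * dx, 3))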