New Economics Papers on Forecasting
By: | Marco Fioramanti, ISTAT; Laura González Cabanillas; Bjorn Roelstraete; Salvador Adrian Ferrandis Vallterra |
Abstract: | This paper updates a previous assessment of the European Commission's track record for forecasting key economic variables (González Cabanillas and Terzi 2012) by extending the observation period to 2014. It also examines the accuracy of the Commission's forecasts over a shorter and more recent period (2000-2014) so that a comparison can be made between the performance of forecasts made before and after the Great Recession of 2008-2009. Going beyond the 2012 approach, this paper also examines the extent to which forecast errors can be explained by external or technical assumptions that prove incorrect ex post. It also updates the comparison of the Commission's performance vis-à-vis the OECD, the IMF, a consensus forecast of market economists, and the ECB. Inclusion of the 2012-2014 period lowers the forecasting error for some key variables and leads to no change in others. Focussing on the years since the turn of the century, current-year and year-ahead forecasting errors for the three main variables examined (GDP growth, inflation and general government balances) have been larger in the crisis and post-crisis period (2008-2014) than in the pre-crisis period (2000-2007) for a large majority of Member States. This appears mainly to be the result of an anomalously large error in 2009, a year which confounded many forecasters. The country-by-country analysis confirms the finding of earlier studies that the Commission's forecasts are largely unbiased. The newly introduced panel data approach also confirms the absence of bias in current-year GDP forecasts across EU Member States, but shows that year-ahead forecasts for GDP growth tend to be slightly over-optimistic across the whole sample. The analysis also shows that autocorrelation of forecast errors is not a major issue in the Commission's forecasts. Other advanced tests shed more light on the performance of the Commission's forecasts, demonstrating that they are directionally accurate and generally beat a naïve forecast, but that they are not always efficient in terms of their use of all available data. The decomposition of forecast errors shows that unexpected changes in external assumptions seem to have only a limited impact on current-year GDP growth forecasts. However, more than half of the variance in year-ahead forecast errors appears to come from external assumptions that prove to be incorrect ex post. Finally, the Commission's economic forecasts come out as more accurate than those of the market and comparable to those of the other international institutions considered. |
JEL: | C1 E60 E66 |
Date: | 2016–03 |
URL: | http://d.repec.org/n?u=RePEc:euf:dispap:027&r=for |
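A minimal sketch of the kind of bias and accuracy diagnostics this entry describes (mean error, RMSE, and a test of unbiasedness), not the Commission's own code; the forecast and outturn figures are illustrative placeholders.

```python
# Minimal sketch: bias and accuracy diagnostics for a vector of forecast errors.
# Arrays below are hypothetical year-ahead GDP growth forecasts and outturns.
import numpy as np
from scipy import stats

def error_diagnostics(forecast, outcome):
    """Return mean error (bias), RMSE, and a t-test of zero mean error."""
    e = np.asarray(outcome) - np.asarray(forecast)   # positive = under-prediction
    bias = e.mean()
    rmse = np.sqrt((e ** 2).mean())
    t_stat, p_value = stats.ttest_1samp(e, 0.0)      # H0: forecasts are unbiased
    return {"bias": bias, "rmse": rmse, "t": t_stat, "p": p_value}

fc  = [1.8, 2.0, 1.5, 0.5, -0.2, 1.1]   # illustrative forecasts
out = [1.6, 2.1, 0.9, -4.4, 0.3, 1.0]   # illustrative outturns (note the 2009-style miss)
print(error_diagnostics(fc, out))
```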
By: | Higgins, Patrick C. (Federal Reserve Bank of Atlanta); Zha, Tao (Federal Reserve Bank of Atlanta); Zhong, Karen (Shanghai Jiaotong University) |
Abstract: | Although macroeconomic forecasting forms an integral part of the policymaking process, there has been a serious lack of rigorous and systematic research on the evaluation of out-of-sample, model-based forecasts of China's real gross domestic product (GDP) growth and consumer price index inflation. This paper fills this research gap by providing a replicable forecasting model that beats a host of competing models when measured by root mean square errors, especially over long-run forecast horizons. The model is shown to be capable of predicting turning points and to be usable for policy analysis under different scenarios. It predicts that China's future GDP growth will follow an L-shape rather than a U-shape. |
Keywords: | out of sample; policy projections; scenario analysis; probability bands; density forecasts; random walk; Bayesian priors |
JEL: | C53 E10 E40 |
Date: | 2016–07–01 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedawp:2016-07&r=for |
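A hedged sketch of the RMSE-by-horizon comparison used to rank models in this entry, with a random-walk (no-change) benchmark; the data and variable names (model_fc, rw_fc, actual) are placeholders, not the authors' dataset.

```python
# Compare root mean square errors by forecast horizon for two sets of forecasts.
import numpy as np

def rmse_by_horizon(forecasts, actuals):
    """forecasts, actuals: arrays of shape (n_origins, n_horizons)."""
    err = np.asarray(forecasts) - np.asarray(actuals)
    return np.sqrt(np.nanmean(err ** 2, axis=0))     # one RMSE per horizon

# Toy example: 3 forecast origins, horizons 1-4 (e.g. quarters ahead of GDP growth).
actual   = np.array([[6.9, 6.8, 6.7, 6.7],
                     [6.8, 6.7, 6.7, 6.6],
                     [6.7, 6.7, 6.6, 6.5]])
model_fc = actual + np.random.default_rng(0).normal(0, 0.2, actual.shape)
rw_fc    = np.full_like(actual, 6.9)                 # random-walk benchmark

print("model RMSE by horizon:", rmse_by_horizon(model_fc, actual))
print("random-walk RMSE     :", rmse_by_horizon(rw_fc, actual))
```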
By: | Carlos Diaz Vela |
Abstract: | This paper shows how to extract the density of the shocks of information perceived by the Bank of England between two consecutive releases of its inflation density forecasts. These densities are used to construct a new measure of ex ante inflation uncertainty and a measure of the incorporation of news into subsequent forecasts. Dynamic tests of point forecast optimality are also constructed. It is shown that inflation uncertainty as perceived by the Bank was decreasing before the financial crisis and increased sharply during the period 2008-2011. Since then, uncertainty seems to have stabilized, but it remains above its pre-crisis levels. Finally, it is shown that forecast optimality was lost at some points during the financial crisis, and that there are more periods of optimal forecasts in long-term than in short-term forecasting. This could also be interpreted as evidence that short-term forecasts are subject to profound revisions. |
Keywords: | Inflation, density forecast, uncertainty, revisions, optimal forecasts. |
JEL: | C22 C53 C63 E31 E37 E58 |
URL: | http://d.repec.org/n?u=RePEc:lec:leecon:16/13&r=for |
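A deliberately simplified illustration, not the paper's method: with Gaussian stand-ins for the density forecasts, uncertainty can be summarised by the forecast variance and the "news" between two consecutive releases by the Kullback-Leibler divergence between them; the deconvolution the paper uses to recover the shock density itself is not reproduced, and the numbers are hypothetical.

```python
# Uncertainty and "news" between two consecutive Gaussian density forecasts.
import numpy as np

def kl_gaussian(m0, s0, m1, s1):
    """KL divergence KL( N(m0, s0^2) || N(m1, s1^2) )."""
    return np.log(s1 / s0) + (s0**2 + (m0 - m1)**2) / (2 * s1**2) - 0.5

# Hypothetical consecutive inflation density forecasts for the same target date.
prev_mean, prev_sd = 2.0, 0.9   # earlier release
curr_mean, curr_sd = 2.4, 1.1   # later release

print("uncertainty (forecast variance):", curr_sd**2)
print("news between releases (KL)     :", kl_gaussian(curr_mean, curr_sd, prev_mean, prev_sd))
```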
By: | Rossi, Barbara; Sekhposyan, Tatevik |
Abstract: | This paper proposes a framework to implement regression-based tests of predictive ability in unstable environments, including, in particular, forecast unbiasedness and efficiency tests, commonly referred to as tests of forecast rationality. Our framework is general: it can be applied to model-based forecasts obtained with either recursive or rolling window estimation schemes, as well as to forecasts that are model-free. The proposed tests provide more evidence against forecast rationality than previously found in the Federal Reserve's Greenbook forecasts as well as in survey-based private forecasts. They confirm, however, that the Federal Reserve has additional information about current and future states of the economy relative to market participants. |
Keywords: | forecast rationality; Forecasting; Greenbook; Monetary policy; real-time data; survey |
JEL: | C22 C52 C53 |
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:11391&r=for |
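A minimal sketch of the textbook regression-based rationality (Mincer-Zarnowitz) test underlying this entry: regress realisations on forecasts and jointly test intercept = 0 and slope = 1. This is the full-sample version, not the authors' fluctuation-type test for unstable environments, and the data are simulated for illustration.

```python
# Mincer-Zarnowitz rationality regression with HAC standard errors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
forecast = rng.normal(2.0, 1.0, 80)
actual = 0.2 + 0.9 * forecast + rng.normal(0, 0.5, 80)   # mildly irrational by construction

X = sm.add_constant(forecast)
fit = sm.OLS(actual, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
print(fit.params)                           # estimated intercept and slope
print(fit.f_test("const = 0, x1 = 1"))      # joint test of forecast rationality
```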
By: | Carrasco, Marine; Rossi, Barbara |
Abstract: | This paper considers in-sample prediction and out-of-sample forecasting in regressions with many exogenous predictors. We consider four dimension reduction devices: principal components, Ridge, Landweber-Fridman, and Partial Least Squares. We derive rates of convergence for two representative models: an ill-posed model and an approximate factor model. The theory is developed for a large cross-section and a large time-series. As all these methods depend on a tuning parameter to be selected, we also propose data-driven selection methods based on cross-validation and establish their optimality. Monte Carlo simulations and an empirical application to forecasting inflation and output growth in the U.S. show that data-reduction methods outperform conventional methods in several relevant settings, and might effectively guard against instabilities in predictors' forecasting ability. |
Keywords: | factor models; Forecasting; GDP forecasts; large datasets; partial least squares; principal components; regularization methods; Ridge; sparsity; variable selection |
JEL: | C22 C52 C53 |
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:11388&r=for |
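A hedged sketch of two of the dimension-reduction devices discussed (principal components and Ridge), with the tuning parameter chosen by cross-validation; it uses scikit-learn on simulated factor-model data and is not the authors' estimator or dataset.

```python
# Principal-components regression vs. cross-validated Ridge with many predictors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression, RidgeCV
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
T, N = 200, 100                                  # large time series, large cross-section
factors = rng.normal(size=(T, 3))
X = factors @ rng.normal(size=(3, N)) + rng.normal(scale=0.5, size=(T, N))
y = factors @ np.array([1.0, -0.5, 0.3]) + rng.normal(scale=0.3, size=T)

pcr   = make_pipeline(PCA(n_components=3), LinearRegression()).fit(X[:-1], y[1:])
ridge = RidgeCV(alphas=np.logspace(-3, 3, 25)).fit(X[:-1], y[1:])   # CV picks alpha

x_last = X[-1].reshape(1, -1)
print("PCR one-step-ahead forecast  :", pcr.predict(x_last)[0])
print("Ridge one-step-ahead forecast:", ridge.predict(x_last)[0])
```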
By: | T. O. Benli |
Abstract: | The accuracy of household electricity consumption forecasts is vital for taking cost-effective and energy-efficient decisions. In order to design an accurate, proper and efficient forecasting model, the characteristics of the series have to be analyzed. The time-series data come from the Online Enerjisa System, the system of the electrical energy provider in the capital of Turkey, through which consumers can access their electricity consumption for the latest two-year period; in our study the period was May 2014 to May 2016. Various techniques were applied to analyze the data: classical decomposition models (standard and with the centered moving average method), regression equations, exponential smoothing models and ARIMA models. Nineteen different approaches, each differing in at least some aspects of methodology, were compared, and the best forecasting model was chosen by considering the smallest values of MAPE, MAD and MSD. As a first step we took the time period May 2014 to May 2016 and obtained the predicted value for June 2016 from the best forecasting model. After finding the best forecasting model and the fitted value for June 2016, a validation step followed: we compared the actual value for June 2016 with the forecast for that period to see how well they matched. Afterwards we produced electricity consumption forecasts for the following months (June-September 2016) for each of the five households individually. |
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1607.05660&r=for |
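A short sketch of the accuracy measures used to rank the candidate models in this entry (MAPE, MAD and MSD, as reported by packages such as Minitab); the consumption figures are placeholders, not the Enerjisa data.

```python
# Accuracy measures for ranking forecasting models: smaller values = better model.
import numpy as np

def accuracy_measures(actual, fitted):
    a, f = np.asarray(actual, float), np.asarray(fitted, float)
    e = a - f
    return {
        "MAPE": np.mean(np.abs(e / a)) * 100,   # mean absolute percentage error
        "MAD":  np.mean(np.abs(e)),             # mean absolute deviation
        "MSD":  np.mean(e ** 2),                # mean squared deviation
    }

# Hypothetical monthly household consumption (kWh) and fitted values from one model.
actual = [310, 295, 340, 360, 420, 455]
fitted = [300, 305, 330, 370, 410, 470]
print(accuracy_measures(actual, fitted))
```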
By: | Gunes Kamber (Bank for International Settlements); James Morley (School of Economics, UNSW Business School, UNSW); Benjamin Wong (Reserve Bank of New Zealand) |
Abstract: | The Beveridge-Nelson (BN) trend-cycle decomposition based on autoregressive forecasting models of U.S. quarterly real GDP growth produces estimates of the output gap that are strongly at odds with widely-held beliefs about the amplitude, persistence, and even sign of transitory movements in economic activity. These counterintuitive attributes are related to the autoregressive coefficient estimates implying a very high signal-to-noise ratio in terms of the variance of trend shocks as a fraction of the overall quarterly forecast error variance. When we impose a lower signal-to-noise ratio, the resulting BN decomposition, which we label the “BN filter”, produces a more intuitive estimate of the output gap that is large in amplitude, highly persistent, and typically positive in expansions and negative in recessions. Real-time estimates from the BN filter are also reliable in the sense that they are subject to smaller revisions and predict future output growth and inflation better than other methods of trend-cycle decomposition that also impose a low signal-to-noise ratio, including deterministic detrending, the Hodrick-Prescott filter, and the bandpass filter. |
Keywords: | Beveridge-Nelson decomposition, output gap, signal-to-noise ratio |
JEL: | C18 E17 E32 |
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:swe:wpaper:2016-09&r=for |
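A simplified stand-in for the "BN filter" described in this entry: the textbook Beveridge-Nelson decomposition when quarterly growth follows an AR(1), where the cycle is -(phi/(1-phi)) times the deviation of growth from its mean. The paper instead uses a higher-order AR with coefficients shrunk to impose a low signal-to-noise ratio; that refinement is not reproduced here, and the growth series is illustrative.

```python
# Beveridge-Nelson cycle implied by an AR(1) fitted to growth rates.
import numpy as np
import statsmodels.api as sm

def bn_cycle_ar1(growth):
    """Return the BN output-gap estimate implied by an AR(1) for growth rates."""
    g = np.asarray(growth, float)
    X = sm.add_constant(g[:-1])
    res = sm.OLS(g[1:], X).fit()
    c, phi = res.params                      # intercept and AR(1) coefficient
    mu = c / (1.0 - phi)                     # unconditional mean growth
    # cycle_t = -(phi / (1 - phi)) * (growth_t - mu); trend_t = level_t - cycle_t
    return -(phi / (1.0 - phi)) * (g - mu)

# Illustrative quarterly real GDP growth rates (annualised, percent).
growth = [3.1, 2.4, -0.5, -2.0, 1.5, 2.8, 3.0, 2.2, 1.9, 2.5]
print(bn_cycle_ar1(growth))
```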
By: | Marc S. PAOLELLA (University of Zurich and Swiss Finance Institute); Pawel POLAK (University of Zurich and Swiss Finance Institute) |
Abstract: | The paper proposes a framework for large-scale portfolio optimization which accounts for all the major stylized facts of multivariate financial returns, including volatility clustering, dynamics in the dependency structure, asymmetry, heavy tails, and non-ellipticity. It introduces a so-called risk fear portfolio strategy, which combines portfolio optimization with active risk monitoring. The former selects optimal portfolio weights; the latter, independently, initiates market exit in case of excessive risks. The strategy agrees with the stylized fact of major stock market sell-offs during the initial stage of market downturns. The advantages of the new framework are illustrated with an extensive empirical study. It leads to superior multivariate density and Value-at-Risk forecasting, and better portfolio performance. The proposed risk fear portfolio strategy outperforms various competing types of optimal portfolios, even in the presence of conservative transaction costs and frequent rebalancing. The risk monitoring of the optimal portfolio can serve as an early warning system against large market risks. In particular, the new strategy avoids all the losses during the 2008 financial crisis and profits from the subsequent market recovery. |
Keywords: | COMFORT; Financial Crises; Portfolio Optimization; Risk Monitoring |
JEL: | C51 C53 C58 G11 G17 |
URL: | http://d.repec.org/n?u=RePEc:chf:rpseri:rp1517&r=for |
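A highly simplified stand-in for the risk-monitoring layer described in this entry: a rolling historical Value-at-Risk forecast for the portfolio, with a market-exit flag whenever predicted risk exceeds a tolerance. The paper's COMFORT model and optimal-weight selection are not reproduced; the return series and the exit threshold are assumptions for illustration.

```python
# Rolling historical VaR forecast with a simple "exit the market" rule.
import numpy as np

def rolling_var(returns, window=250, level=0.01):
    """One-step-ahead historical VaR (reported as a positive loss number)."""
    r = np.asarray(returns, float)
    return np.array([-np.quantile(r[t - window:t], level)
                     for t in range(window, len(r))])

rng = np.random.default_rng(2)
port_ret = rng.standard_t(df=4, size=1000) * 0.01      # heavy-tailed daily portfolio returns

var_1pct = rolling_var(port_ret)
exit_market = var_1pct > 0.05                            # assumed risk tolerance
print("days flagged for market exit:", int(exit_market.sum()))
```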