
on Forecasting 
By:  Florian Huber; Jesus Crespo Cuaresma; Martin Feldkircher 
Abstract:  This paper puts forward a Bayesian version of the global vector autoregressive model (BGVAR) that accommodates international linkages across countries in a system of vector autoregressions. We compare the predictive performance of BGVAR models at the one- and four-quarter-ahead forecast horizons for standard macroeconomic variables (real GDP, inflation, the real exchange rate and interest rates). Our results show that taking international linkages into account improves forecasts of inflation, real GDP and the real exchange rate, while for interest rates forecasts from univariate benchmark models remain difficult to beat. Our Bayesian version of the GVAR model outperforms forecasts of the standard cointegrated VAR for practically all variables and at both forecast horizons. The comparison of prior elicitation strategies indicates that the use of the stochastic search variable selection (SSVS) prior tends to improve out-of-sample predictions systematically. This finding is confirmed by density forecast measures, for which the predictive ability of the SSVS prior is the best among all priors entertained for all variables at all forecasting horizons. 
Keywords:  Global vector autoregressions; forecasting; prior sensitivity analysis 
JEL:  C32 F44 E32 O54 
Date:  2014–11 
URL:  http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa14p25&r=for 
By:  Christian Pierdzioch (Helmut-Schmidt-University, Department of Economics, Holstenhofweg, Hamburg, Germany); Monique B. Reid (Stellenbosch University, Department of Economics, Private Bag X1, Matieland, South Africa, 7602); Rangan Gupta (Department of Economics, University of Pretoria) 
Abstract:  Using forecasts of the inflation rate in South Africa, we study the rationality of forecasts and the shape of forecasters’ loss function. When we study micro-level data of individual forecasts, we find mixed evidence of an asymmetric loss function, suggesting that inflation forecasters are heterogeneous with respect to the shape of their loss function. We also find strong evidence that inflation forecasts are in line with forecast rationality. When we pool the data, and study sectoral inflation forecasts of financial analysts, trade unions, and the business sector, we find evidence for asymmetry in the loss function, and against forecast rationality. Upon comparing the micro-level results with those for pooled and sectoral data, we conclude that forecast rationality should be assessed based on micro-level data, and that freer access to such data would allow more rigorous analysis and discussion of the information content of the surveys. 
Keywords:  Inflation rate; Forecasting; Loss function; Rationality 
JEL:  C53 D82 E37 
Date:  2014–11 
URL:  http://d.repec.org/n?u=RePEc:pre:wpaper:201475&r=for 
By:  Baumeister, Christiane; Kilian, Lutz; Lee, Thomas K 
Abstract:  The answer depends on the objective. The approach of combining five of the leading forecasting models with equal weights dominates the strategy of selecting one model and using it for all horizons up to two years. Even more accurate forecasts, however, are obtained when allowing the forecast combinations to vary across forecast horizons. While the latter approach is not always more accurate than selecting the single most accurate forecasting model by horizon, its accuracy can be shown to be much more stable over time. The MSPE of real-time pooled forecasts is between 3% and 29% lower than that of the no-change forecast, and its directional accuracy is as high as 73%. Our results are robust to alternative oil price measures and apply to monthly as well as quarterly forecasts. We illustrate how forecast pooling may be used to produce real-time forecasts of the real and the nominal price of oil in a format consistent with that employed by the U.S. Energy Information Administration in releasing its short-term oil price forecasts, and we compare these forecasts during key historical episodes. 
Keywords:  forecast combination; forecast pooling; oil price; real-time data; refiners' acquisition cost; WTI 
JEL:  C53 Q43 
Date:  2014–07 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:10075&r=for 
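The equal-weight pooling and MSPE comparison described in the abstract above can be sketched in a few lines. This is an illustrative sketch only, not the authors' implementation; the function names and toy inputs are assumptions:

```python
import numpy as np

def pool_forecasts(forecasts, weights=None):
    """Combine individual model forecasts into a single pooled forecast.

    forecasts: array of shape (n_periods, n_models).
    With weights=None, each model receives weight 1/n_models,
    i.e. the equal-weight combination.
    """
    forecasts = np.asarray(forecasts, dtype=float)
    if weights is None:
        weights = np.full(forecasts.shape[1], 1.0 / forecasts.shape[1])
    return forecasts @ np.asarray(weights, dtype=float)

def mspe_ratio(pooled, no_change, actual):
    """MSPE of the pooled forecast relative to the no-change benchmark.

    Values below 1 indicate the pooled forecast is more accurate.
    """
    pooled, no_change, actual = map(np.asarray, (pooled, no_change, actual))
    return np.mean((pooled - actual) ** 2) / np.mean((no_change - actual) ** 2)
```

Letting the weights vary by horizon, as the paper does, would amount to calling `pool_forecasts` with a different `weights` vector for each horizon.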
By:  Minford, Patrick; Xu, Yongdeng; Zhou, Peng 
Abstract:  Out-of-sample forecasting tests of DSGE models against time-series benchmarks such as an unrestricted VAR are increasingly used to check (a) the specification and (b) the forecasting capacity of these models. We carry out a Monte Carlo experiment on a widely used DSGE model to investigate the power of these tests. We find that in specification testing they have weak power relative to an in-sample indirect inference test; this implies that a DSGE model may be badly misspecified and still improve forecasts from an unrestricted VAR. In testing forecasting capacity they also have quite weak power, particularly on the left-hand tail. By contrast, a model that passes an indirect inference test of specification will almost certainly also improve on VAR forecasts. 
Keywords:  DSGE; forecast performance; indirect inference; out-of-sample forecasts; specification tests; VAR 
JEL:  E10 E17 
Date:  2014–11 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:10239&r=for 
By:  Stefan Bruder 
Abstract:  Path forecasts, defined as sequences of individual forecasts, generated by vector autoregressions are widely used in applied work. It has been recognized that a thorough econometric analysis requires, besides the path forecast itself, a joint prediction region that contains the whole future path with a pre-specified coverage probability. The forecasting literature offers several different methods of computing joint prediction regions, where the existing methods are either bootstrap-based or rely on asymptotic results. The aim of this paper is to investigate the finite-sample performance of three methods for constructing joint prediction regions in various scenarios via Monte Carlo simulations. 
Keywords:  Path forecast, joint prediction region, Monte Carlo simulation 
JEL:  C15 C32 C53 
Date:  2014–11 
URL:  http://d.repec.org/n?u=RePEc:zur:econwp:181&r=for 
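One of the simplest constructions in this literature is a Bonferroni-adjusted joint region built from simulated forecast paths. The sketch below is illustrative only (the paper compares three specific methods that are not reproduced here); the function name and inputs are assumptions:

```python
import numpy as np

def bonferroni_joint_region(paths, coverage=0.95):
    """Joint prediction region from simulated forecast paths.

    paths: (n_sims, horizon) array of simulated future paths
    (e.g. bootstrap draws from an estimated VAR).
    Takes pointwise quantiles at level (1 - coverage) / horizon
    (Bonferroni), so the whole path is contained in the band
    with probability at least `coverage`.
    """
    paths = np.asarray(paths, dtype=float)
    horizon = paths.shape[1]
    alpha = (1.0 - coverage) / horizon
    lower = np.quantile(paths, alpha / 2, axis=0)
    upper = np.quantile(paths, 1 - alpha / 2, axis=0)
    return lower, upper
```

The Bonferroni band is conservative (actual joint coverage typically exceeds the nominal level), which is precisely the kind of finite-sample behavior a Monte Carlo comparison like the paper's would quantify.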
By:  Ball, Ryan; Ghysels, Eric; Zhou, Huan 
Abstract:  Can we design statistical models to predict corporate earnings which either perform as well as, or even better than, analysts? If we can, then we might consider automating the process, and notably apply it to small and international firms which typically have either sparse or no analyst coverage. There are at least two challenges: (1) analysts use real-time data whereas statistical models often rely on stale data and (2) analysts use a potentially large set of observations whereas models are often frugal with data series. In this paper we introduce newly developed mixed-frequency regression methods that are able to synthesize rich real-time data and predict earnings out-of-sample. Our forecasts are shown to be systematically more accurate than analysts' consensus forecasts, reducing their forecast errors by 15% to 30% on average, depending on the forecast horizon. 
Keywords:  forecast combination; MIDAS regression; real-time data 
JEL:  C53 M40 M41 
Date:  2014–10 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:10186&r=for 
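MIDAS regressions aggregate a high-frequency predictor into a low-frequency regressor through a parsimonious lag polynomial. A minimal sketch of the commonly used exponential Almon weighting scheme follows; this is a generic textbook device, not the authors' newly developed estimator, and the names are illustrative:

```python
import numpy as np

def exp_almon_weights(n_lags, theta1, theta2):
    """Exponential Almon lag polynomial: a handful of parameters
    (theta1, theta2) generates a full profile of lag weights that
    sum to one."""
    k = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

def midas_regressor(hf_series, n_lags, theta1, theta2):
    """Weighted sum of the most recent high-frequency observations,
    most recent observation first, yielding one low-frequency regressor."""
    w = exp_almon_weights(n_lags, theta1, theta2)
    recent = np.asarray(hf_series, dtype=float)[-n_lags:][::-1]
    return float(np.dot(w, recent))
```

With theta1 = theta2 = 0 the weights are flat and the regressor reduces to a simple average of the last `n_lags` observations; negative thetas put more weight on recent data.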
By:  Davide Pettenuzzo (Department of Economics, Brandeis University); Francesco Ravazzolo (Norges Bank (Central Bank of Norway) and BI Norwegian Business School) 
Abstract:  We propose a novel Bayesian model combination approach where the combination weights depend on the past forecasting performance of the individual models entering the combination through a utility-based objective function. We use this approach in the context of stock return predictability and optimal portfolio decisions, and investigate its forecasting performance relative to a host of existing combination schemes. We find that our method produces markedly more accurate predictions than the existing model combinations, in terms of both statistical and economic measures of out-of-sample predictability. We also investigate the role of our model combination method in the presence of model instabilities, by considering predictive regressions that feature time-varying regression coefficients and stochastic volatility. We find that the gains from using our model combination method increase significantly when we allow for instabilities in the individual models entering the combination. 
Keywords:  Bayesian econometrics, Time-varying parameters, Model combinations, Portfolio choice 
JEL:  C11 C22 G11 G12 
Date:  2014–11–24 
URL:  http://d.repec.org/n?u=RePEc:bno:worpap:2014_15&r=for 
By:  Piergiorgio Alessandri (Bank of Italy); Haroon Mumtaz (Queen Mary University of London) 
Abstract:  When do financial markets help in predicting economic activity? With incomplete markets, the link between the financial and real economy is state-dependent and financial indicators may turn out to be useful particularly in forecasting "tail" macroeconomic events. We examine this conjecture by studying Bayesian predictive distributions for output growth and inflation in the US between 1983 and 2012, comparing linear and nonlinear VAR models. We find that financial indicators significantly improve the accuracy of the distributions. Regime-switching models perform better than linear models thanks to their ability to capture changes in the transmission mechanism of financial shocks between good and bad times. Such models could have sent a credible advance warning ahead of the Great Recession. Furthermore, the discrepancies between models are themselves predictable, which allows the forecaster to formulate reasonable real-time guesses as to which model is likely to be more accurate in the near future. 
Keywords:  financial frictions, predictive densities, Great Recession, Threshold VAR 
JEL:  C53 E32 E44 G01 
Date:  2014–10 
URL:  http://d.repec.org/n?u=RePEc:bdi:wptemi:td_977_14&r=for 
By:  Christian Pierdzioch (Department of Economics, Helmut-Schmidt-University); Monique B. Reid (Department of Economics, University of Stellenbosch); Rangan Gupta (Department of Economics, University of Pretoria) 
Abstract:  We study the directional accuracy of South African survey data of short-term and longer-term inflation forecasts. Upon applying techniques developed for the study of relative operating characteristic (ROC) curves, we find evidence that forecasts contain information with respect to the subsequent direction of change of the inflation rate. 
Keywords:  inflation rate, forecasting, directional accuracy 
JEL:  C53 D82 E37 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:sza:wpaper:wpapers229&r=for 
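The ROC analysis described above treats each forecast as a signal for the direction of change and sweeps a classification threshold over it. A minimal sketch, not the authors' code, with illustrative names:

```python
import numpy as np

def roc_curve_points(scores, outcomes):
    """ROC points for directional forecasts.

    scores: forecasters' signals (e.g. predicted change in inflation);
    outcomes: 1 if inflation subsequently rose, 0 otherwise.
    Sweeping the threshold from high to low yields non-decreasing
    true-positive and false-positive rates, starting from (0, 0).
    """
    scores = np.asarray(scores, dtype=float)
    outcomes = np.asarray(outcomes, dtype=int)
    pos = (outcomes == 1).sum()
    neg = (outcomes == 0).sum()
    fpr, tpr = [0.0], [0.0]
    for t in np.sort(np.unique(scores))[::-1]:
        pred = scores >= t
        tpr.append((pred & (outcomes == 1)).sum() / pos)
        fpr.append((pred & (outcomes == 0)).sum() / neg)
    return np.array(fpr), np.array(tpr)

def auc(fpr, tpr):
    """Trapezoidal area under the ROC curve; 0.5 means the forecasts
    carry no directional information."""
    return float(np.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2.0))
```

An AUC significantly above 0.5 is the kind of evidence the abstract refers to: the forecasts help predict the subsequent direction of change.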
By:  Alessandro Giovannelli (Università di Roma “Tor Vergata”); Tommaso Proietti (Università di Roma “Tor Vergata” and CREATES) 
Abstract:  We address the problem of selecting the common factors that are relevant for forecasting macroeconomic variables. In economic forecasting using diffusion indexes, the factors are ordered according to their importance in terms of relative variability, and are the same for each variable to predict, i.e. the process of selecting the factors is not supervised by the predictand. We propose a simple and operational supervised method, based on selecting the factors according to their significance in the regression of the predictand on the predictors. Given a potentially large number of predictors, we consider linear transformations obtained by principal components analysis. The orthogonality of the components implies that the standard t-statistics for the inclusion of a particular component are independent, and thus applying a selection procedure that takes into account the multiplicity of the hypothesis tests is both correct and computationally feasible. We focus on three main multiple testing procedures: Holm’s sequential method, controlling the familywise error rate; the Benjamini-Hochberg method, controlling the false discovery rate; and a procedure for incorporating prior information on the ordering of the components, based on weighting the p-values according to the eigenvalues associated with the components. We compare the empirical performance of these methods with the classical diffusion index (DI) approach proposed by Stock and Watson, conducting a pseudo-real-time forecasting exercise that assesses the predictions of 8 macroeconomic variables using factors extracted from a U.S. dataset consisting of 121 quarterly time series. The overall conclusion is that nature is tricky, but essentially benign: the information that is relevant for prediction is effectively condensed by the first few factors. However, variable selection, leading to the exclusion of some of the low-order principal components, can lead to a sizable improvement in forecasting in specific cases. 
Only in one instance, real personal income, were we able to detect a significant contribution from high-order components. 
Keywords:  Variable selection, Multiple testing, p-value weighting 
JEL:  C22 C52 C58 
Date:  2014–11–25 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201446&r=for 
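As a concrete illustration of one of the multiple testing procedures mentioned, a minimal Benjamini-Hochberg step-up over the p-values of the component t-statistics might look as follows. This is a sketch of the generic procedure under assumed inputs, not the authors' code:

```python
import numpy as np

def benjamini_hochberg(pvalues, q=0.10):
    """Benjamini-Hochberg step-up: return a boolean mask of rejected
    hypotheses (components kept), controlling the false discovery
    rate at level q.

    Sort the p-values, find the largest rank k with
    p_(k) <= q * k / m, and reject all hypotheses with the k
    smallest p-values.
    """
    p = np.asarray(pvalues, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    keep = np.zeros(m, dtype=bool)
    if below.any():
        kmax = np.max(np.nonzero(below)[0])
        keep[order[: kmax + 1]] = True
    return keep
```

Here the p-values would come from the t-statistics of each principal component in the regression of the predictand on the components; as the abstract notes, their independence under orthogonality is what makes this multiplicity correction clean.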
By:  Gabriele Ranco; Ilaria Bordino; Giacomo Bormetti; Guido Caldarelli; Fabrizio Lillo; Michele Treccani 
Abstract:  The new digital revolution of big data is deeply changing our capability of understanding society and forecasting the outcome of many social and economic systems. Unfortunately, information can be very heterogeneous in the importance, relevance, and surprise it conveys, severely affecting the predictive power of semantic and statistical methods. Here we show that the aggregation of web users' behavior can be elicited to overcome this problem in a hard-to-predict complex system, namely the financial market. Specifically, we show that the combined use of sentiment analysis of news and browsing activity of users of Yahoo! Finance makes it possible to forecast intraday and daily price changes of a set of 100 highly capitalized US stocks traded in the period 2012-2013. Sentiment analysis or browsing activity taken alone have very small or no predictive power. Conversely, when considering a "news signal" in which, for a given time interval, we compute the average sentiment of the clicked news weighted by the number of clicks, we show that for more than 50% of the companies this signal Granger-causes price returns. Our result indicates a "wisdom-of-the-crowd" effect that makes it possible to exploit users' activity to identify and properly weigh the relevant and surprising news, considerably enhancing the forecasting power of the news sentiment. 
Date:  2014–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1412.3948&r=for 
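The click-weighted news signal described in the abstract above amounts to a weighted average of sentiment scores. A minimal sketch, with illustrative names (the paper's actual sentiment scoring and Granger-causality testing are not reproduced here):

```python
import numpy as np

def news_signal(sentiments, clicks):
    """Click-weighted average sentiment of the news items clicked in a
    given time interval: items read more often count for more."""
    s = np.asarray(sentiments, dtype=float)
    c = np.asarray(clicks, dtype=float)
    return float(np.sum(s * c) / np.sum(c))
```

Computed interval by interval, this yields the time series whose Granger-causality on price returns the paper tests.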
By:  Junior Ojeda (Departamento de Economía, Pontificia Universidad Católica del Perú); Gabriel Rodriguez (Departamento de Economía, Pontificia Universidad Católica del Perú) 
Abstract:  The literature has shown that the volatility of stock and forex market returns exhibits the characteristic of long memory. Another fact shown in the literature is that this feature may be spurious, and that volatility actually consists of a short-memory process contaminated with random level shifts. In this paper, we follow the approach of Lu and Perron (2010) and Li and Perron (2013), estimating a model of random level shifts (RLS) for the logarithm of the absolute value of stock and forex returns. The model consists of the sum of a short-term memory component and a component of level shifts. The second component is specified as the cumulative sum of a process that is zero with probability 1 - alpha and is a random variable with probability alpha. The results show that level shifts are rare, but once they are taken into account, the characteristic or property of long memory disappears. Also, the presence of GARCH effects is eliminated once level shifts are accounted for. An out-of-sample forecasting exercise shows that the RLS model performs better than traditional long memory models such as ARFIMA(p,d,q) models. 
Keywords:  Returns, Volatility, Long Memory, Random Level Shifts, Kalman Filter, Forecasting 
JEL:  C22 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:pcp:pucwps:wp00383&r=for 
By:  José Renato Haas Ornelas 
Abstract:  This paper empirically evaluates Risk-Neutral Densities (RND) and Real-World Densities (RWD) as predictors of future outcomes of emerging market currencies. The dataset consists of volatility surfaces for 11 emerging market currencies, with approximately six years of daily data, using options with one-month expiration. There is therefore a strong overlap in the data, which is tackled with specific econometric techniques. Results of the out-of-sample assessment show that both RND and RWD underweight the tails of the actual distribution. This is probably due to the lack of options with extreme strikes. Although the RWDs perform better than RNDs in terms of Kolmogorov distance, they still have problems in fitting the tails of actual data. Thus, the risk-aversion adjustment may improve forecast ability, but it does not solve the tails misfitting. 
Date:  2014–12 
URL:  http://d.repec.org/n?u=RePEc:bcb:wpaper:370&r=for 
By:  Joyce P. Jacobsen (Department of Economics, Wesleyan University); Laurence M. Levin (VISA); Zachary Tausanovitch (Network for Teaching Entrepreneurship) 
Abstract:  Economists’ wariness of data mining may be misplaced, even in cases where economic theory provides a well-specified model for estimation. We discuss how new data mining/ensemble modeling software, for example the program TreeNet, can be used to create predictive models. We then show how, for a standard labor economics problem, the estimation of wage equations, TreeNet outperforms standard OLS regression in terms of lower prediction error. Ensemble modeling also resists the tendency to overfit data. We conclude by considering additional types of economic problems that are well-suited to the use of data mining techniques. 
Keywords:  data mining, ensemble modeling 
JEL:  C14 C51 J31 
Date:  2014–12 
URL:  http://d.repec.org/n?u=RePEc:wes:weswpa:2014003&r=for 