nep-for New Economics Papers
on Forecasting
Issue of 2014‒03‒22
seven papers chosen by
Rob J Hyndman
Monash University

  1. Analysis of forecast errors in micro-level survey data By Paloviita, Maritta; Viren, Matti
  2. Modeling volatility with Range-based Heterogeneous Autoregressive Conditional Heteroskedasticity model By Tomasz Skoczylas
  3. Time Series Analysis using Vector Auto Regressive (VAR) Model of Wind Speeds in Bangui Bay and Selected Weather Variables in Laoag City, Philippines By Orpia, Cherie; Mapa, Dennis S.; Orpia, Julius
  4. Strategic coordination in forecasting: An experimental study By Bizer, Kilian; Meub, Lukas; Proeger, Till; Spiwoks, Markus
  5. Analyzing and Forecasting Movements of the Philippine Economy using the Dynamic Factor Models (DFM) By Mapa, Dennis S.; Simbulan, Maria Christina
  6. Predictions vs. preliminary sample estimates: the case of eurozone quarterly GDP By Enrico D'Elia
  7. Predicting market instability: New dynamics between volume and volatility By Zeyu Zheng; Zhi Qiao; Joel N. Tenenbaum; H. Eugene Stanley; Baowen Li

  1. By: Paloviita, Maritta (Bank of Finland Research); Viren, Matti (Bank of Finland Research)
    Abstract: This paper studies forecast errors at the micro level using two alternative survey data sets. The main focus is on inflation and real GDP growth forecasts in the ECB Survey of Professional Forecasters. For comparison, inflation forecasts in the US Survey of Professional Forecasters are also examined. Our analysis indicates that forecast errors are positively related to subjective uncertainty based on probability distributions, but not to disagreement (the standard deviation of point forecasts). We also show that forecast errors, which are rather persistent, are related to forecast revisions: revisions of expectations generally lead to larger forecast errors. Subjective uncertainty measures, which are available at the time of forecasting, are therefore useful in assessing future forecast errors.
    Keywords: forecasting; survey data; expectations
    JEL: C53 E31 E37
    Date: 2014–02–25
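A minimal sketch of the kind of regression behind the abstract's finding: absolute forecast errors regressed on subjective uncertainty and on disagreement. All data here are simulated and all variable names and coefficients are illustrative, not the paper's.

```python
import numpy as np

# Simulated forecaster-level data in which absolute errors depend on
# subjective uncertainty but not on disagreement (illustrative only).
rng = np.random.default_rng(3)
n = 300
uncertainty = rng.gamma(2.0, 0.5, n)     # subjective uncertainty measure
disagreement = rng.gamma(2.0, 0.5, n)    # s.d. of point forecasts
abs_error = 0.8 * uncertainty + rng.gamma(1.0, 0.3, n)

# OLS of |error| on a constant, uncertainty and disagreement
X = np.column_stack([np.ones(n), uncertainty, disagreement])
beta, *_ = np.linalg.lstsq(X, abs_error, rcond=None)
# beta[1] (uncertainty) is large and positive; beta[2] (disagreement) is near zero
```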
  2. By: Tomasz Skoczylas (Faculty of Economic Sciences, University of Warsaw)
    Abstract: In this paper a new ARCH-type volatility model is proposed. The Range-based Heterogeneous Autoregressive Conditional Heteroskedasticity (RHARCH) model draws inspiration from the Heterogeneous Autoregressive Conditional Heteroskedasticity model presented by Muller et al. (1995), but employs more efficient, range-based volatility estimators instead of simple squared returns in the conditional variance equation. The first part of this research reviews range-based volatility estimators (such as the Parkinson and Garman-Klass estimators), followed by the derivation of the RHARCH model. In the second part the RHARCH model is compared with selected ARCH-type models, with particular emphasis on forecasting accuracy. All models are estimated on EURPLN spot rate quotations. The results show that the RHARCH model often outperforms return-based models in terms of predictive ability over both in-sample and out-of-sample periods. The properties of the standardized residuals are also very encouraging in the case of the RHARCH model.
    Keywords: volatility modelling, volatility forecasting, ARCH, range-based volatility estimators, heterogeneity of volatility
    JEL: C13 C22 C53
    Date: 2014
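The range-based estimators the abstract reviews have simple closed forms. The sketch below implements the standard Parkinson (1980) and Garman-Klass (1980) daily variance estimators; function names are illustrative.

```python
import numpy as np

def parkinson_volatility(high, low):
    """Parkinson (1980) range-based daily variance estimator.

    Uses only the high/low range; assumes driftless geometric
    Brownian motion within the day.
    """
    high = np.asarray(high, dtype=float)
    low = np.asarray(low, dtype=float)
    return np.log(high / low) ** 2 / (4.0 * np.log(2.0))

def garman_klass_volatility(open_, high, low, close):
    """Garman-Klass (1980) estimator combining the range with open/close."""
    o, h, l, c = (np.asarray(x, dtype=float) for x in (open_, high, low, close))
    return 0.5 * np.log(h / l) ** 2 - (2.0 * np.log(2.0) - 1.0) * np.log(c / o) ** 2
```

Both are unbiased under their assumptions and considerably more efficient than the squared daily return, which is what motivates using them inside the conditional variance equation.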
  3. By: Orpia, Cherie; Mapa, Dennis S.; Orpia, Julius
    Abstract: Wind energy is the fastest-growing renewable energy technology. Wind turbines do not produce any form of pollution and, when strategically placed, blend naturally with the landscape. In the long run, the cost of electricity from wind turbines is lower than that from conventional power plants, since they do not consume fossil fuel. Wind speed modelling and forecasting are important in the wind energy industry, from the feasibility stage to actual operation. Forecasting wind speed is vital to decisions on wind turbine sizes, revenues, maintenance scheduling and actual operational control systems. This paper models and forecasts the wind speeds of turbines in the Northwind Bangui Bay wind farm using a Vector Auto Regressive (VAR) model. The explanatory variables are local wind speed (Laoag), humidity, temperature and pressure from the meteorological station in Laoag City. The wind speeds of the turbines and the other weather factors were found to be stationary using the Augmented Dickey-Fuller (ADF) test. The VAR model, estimated on daily time series data, reveals that the wind speeds of the turbines can be explained by past wind speed, the wind speed in Laoag, humidity, temperature and pressure. Results of the forecast error variance decomposition show that the wind speed in Laoag, temperature and humidity are important determinants of the turbines' wind speeds.
    Keywords: Wind speed, Vector Auto Regressive (VAR) Model, Variance Decomposition
    JEL: C3 C5 Q5
    Date: 2014–03
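The core of a VAR is a multivariate least-squares regression of each series on the lags of all series. A minimal hand-rolled VAR(1) sketch, verified on simulated data (in practice one would use a full econometrics package, which also selects the lag order and computes the forecast error variance decomposition the abstract mentions):

```python
import numpy as np

def fit_var1(Y):
    """Least-squares fit of a VAR(1): Y_t = c + A @ Y_{t-1} + e_t.

    Y is a (T, k) array of k stationary series; returns the intercept
    vector c and the (k, k) coefficient matrix A.
    """
    Y = np.asarray(Y, dtype=float)
    X = np.hstack([np.ones((len(Y) - 1, 1)), Y[:-1]])   # constant + lagged values
    B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)        # (k+1, k) coefficients
    return B[0], B[1:].T                                 # intercept c, matrix A

# Recover a known stationary VAR(1) from simulated data
rng = np.random.default_rng(42)
A_true = np.array([[0.5, 0.2],
                   [0.1, 0.4]])
y = np.zeros((2000, 2))
for t in range(1, 2000):
    y[t] = A_true @ y[t - 1] + rng.normal(0.0, 0.1, 2)

c_hat, A_hat = fit_var1(y)   # close to zero intercept and A_true
```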
  4. By: Bizer, Kilian; Meub, Lukas; Proeger, Till; Spiwoks, Markus
    Abstract: Reputational herding has been considered a driving force behind economic and financial forecasts clustered around consensus values. Strategic coordination can consequently explain the poor performance of prediction markets as resulting from the distinct incentives that forecasters face. While this notion has been considered theoretically and empirically, the underlying behavioral mechanisms have not yet been described. We therefore present an exploratory experiment on the emergence and robustness of coordination in a forecasting setting with contradictory incentives for accurate forecasts and for coordination. Forecasts are shown to be inaccurate and biased toward current values, which in turn leads subjects to aim at coordination benefits. Coordination is predominantly achieved through the risk-dominant equilibrium as the game proceeds. Once established, coordination is fairly stable and adds to overall welfare. Our results support the view of rational herding as a driving force behind predictions of poor accuracy that are systematically biased towards focal points.
    Keywords: coordination, incentives, laboratory experiment, reputational herding, sunspot equilibrium
    JEL: C90 D03 D83 G17
    Date: 2014
  5. By: Mapa, Dennis S.; Simbulan, Maria Christina
    Abstract: The country’s small and open economy is vulnerable to both internal and external shocks. It is therefore important for policy makers to have timely forecasts of the movement of the country’s Gross Domestic Product (GDP), whether it will increase or decrease in the current quarter, to guide them in formulating appropriate policies to mitigate, say, the impact of a shock. The current method used to forecast the movements of the GDP is the composite Leading Economic Indicators System (LEIS) developed by the National Economic Development Authority (NEDA) and the National Statistical Coordination Board (NSCB). The LEIS, using 11 economic indicators, provides a one-quarter-ahead forecast of the movement of the GDP. This paper presents an alternative, and perhaps better, procedure to the LEIS for nowcasting the movements of the GDP using the Dynamic Factor Model (DFM). The idea behind the DFM is the stylized fact that economic activity evolves in cycles and is correlated with co-movements in a large number of economic series. The DFM is a commonly used data reduction procedure that assumes the shocks driving economic activity arise from unobserved components or factors, and it aims to parsimoniously summarize the information in a large number of economic series with a small number of unobserved common factors that capture their co-movements. This paper uses 31 monthly economic indicators to extract a common factor for nowcasting movements of the GDP via the DFM. The results show that the common factor produced by the DFM captures the movements of the GDP better than the LEIS. The DFM is a promising and useful methodology for extracting indicators of the country’s economic activity.
    Keywords: Dynamic Factor Model, Leading Economic Indicators, Common Factor
    JEL: C4 E3 E61
    Date: 2014–03
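The common factor in such models is often estimated by the first principal component of the standardized indicator panel. A static-PCA sketch on a simulated panel of 31 indicators driven by one latent factor (the paper's exact DFM specification may differ):

```python
import numpy as np

def first_common_factor(panel):
    """First principal component of a standardized (T, N) panel: a
    standard static-PCA estimator of the common factor in a dynamic
    factor model."""
    X = np.asarray(panel, dtype=float)
    X = (X - X.mean(axis=0)) / X.std(axis=0)           # standardize each series
    eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
    return X @ eigvecs[:, -1]                          # project on top eigenvector

# Simulated panel: 31 monthly indicators = latent factor x loadings + noise
rng = np.random.default_rng(1)
T, N = 120, 31
factor = rng.normal(0.0, 1.0, T)
loadings = rng.normal(1.0, 0.3, N)
panel = np.outer(factor, loadings) + rng.normal(0.0, 0.5, (T, N))

estimate = first_common_factor(panel)
# The sign of a principal component is arbitrary, so compare up to sign
corr = abs(np.corrcoef(estimate, factor)[0, 1])
```

With 31 series and moderate noise the estimated factor tracks the latent one very closely, which is why averaging information across many indicators works as a nowcasting device.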
  6. By: Enrico D'Elia
    Abstract: Economic agents are aware that they incur a loss when they base their decisions on their own extrapolations instead of sound statistical data, but this loss may be smaller than the one incurred by waiting for the dissemination of final data. Broad guidance on when statistical offices should release preliminary and final estimates of key statistics may come from comparing the loss attached to users’ predictions with the loss associated with preliminary estimates based on incomplete samples. The cost of delaying decisions may also lead many economic agents to favour the dissemination of very early estimates of economic indicators, even if their accuracy is not fully satisfactory from a strictly statistical viewpoint. Analysing the vintages of releases of quarterly euro area GDP supports the view that even very inefficient predictions may beat some official preliminary releases of GDP, suggesting that the current calendar of data dissemination deserves some adjustment. In particular, the “flash” estimates could be released earlier, while some later intermediate releases are likely to be less informative for users.
    Keywords: Accuracy, data dissemination, eurozone GDP, forecast, preliminary estimates, timeliness
    JEL: C44 C49 C82 C83
    Date: 2014–03
  7. By: Zeyu Zheng; Zhi Qiao; Joel N. Tenenbaum; H. Eugene Stanley; Baowen Li
    Abstract: Econophysics and econometrics agree that there is a correlation between volume and volatility in a time series. Using empirical data and their distributions, we further investigate this correlation and discover new ways in which volatility and volume interact, particularly when both are high. We find that the volume-conditional volatility distribution is well fit by a power-law function with an exponential cutoff, and that it scales with volume, which collapses the distributions for different volumes onto a single curve. We exploit the characteristics of the volume-volatility scatter plot to find a strong correlation between logarithmic volume and a quantity we define as the local maximum volatility (LMV), the largest volatility observed in a given range of trading volumes. This finding supports our empirical analysis showing that volume is an excellent predictor of the maximum value of volatility for both same-day and near-future time periods. We also use a joint conditional probability involving both volatility and volume to demonstrate that invoking both allows us to predict the largest next-day volatility better than invoking either one alone.
    Date: 2014–03
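The LMV idea above, the largest volatility observed in a given range of trading volumes, can be sketched by binning observations by volume and taking the per-bin maximum. The quantile binning and the synthetic data below are assumptions for illustration, not the paper's procedure.

```python
import numpy as np

def local_max_volatility(volume, volatility, n_bins=20):
    """Largest volatility observed within each trading-volume bin
    (quantile bins), returning bin-center volumes and the per-bin maxima."""
    volume = np.asarray(volume, dtype=float)
    volatility = np.asarray(volatility, dtype=float)
    edges = np.quantile(volume, np.linspace(0.0, 1.0, n_bins + 1))
    centers, lmv = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (volume >= lo) & (volume <= hi)
        if mask.any():
            centers.append(volume[mask].mean())
            lmv.append(volatility[mask].max())
    return np.array(centers), np.array(lmv)

# Synthetic data whose volatility envelope grows with log volume
rng = np.random.default_rng(7)
volume = np.exp(rng.normal(10.0, 1.0, 5000))            # lognormal volumes
volatility = np.log(volume) * rng.uniform(0.0, 1.0, 5000)
centers, lmv = local_max_volatility(volume, volatility)
corr = np.corrcoef(np.log(centers), lmv)[0, 1]          # strong log-volume link
```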

This nep-for issue is ©2014 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.