nep-for New Economics Papers
on Forecasting
Issue of 2014‒12‒03
eight papers chosen by
Rob J Hyndman
Monash University

  1. Composite Qualitative Forecasting of Futures Prices: Using One Commodity to Help Forecast Another By Li, Anzhi; Dorfman, Jeffrey H.
  2. Forecasting Aggregates with Disaggregate Variables: Does Boosting Help to Select the Most Relevant Predictors? By Jing Zeng
  3. Comparing variable selection techniques for linear regression: LASSO and Autometrics By Camila Epprecht; Dominique Guegan; Álvaro Veiga
  4. Forecasting risk via realized GARCH, incorporating the realized range By Chao, Wang; Richard, Gerlach
  5. On the modelling and forecasting multivariate realized volatility: Generalized Heterogeneous Autoregressive (GHAR) model By Jozef Baruník; František Čech
  6. The VIX, the variance premium and stock market volatility By Bekaert, Geert; Hoerova, Marie
  7. Real-Time Nowcasting of Nominal GDP Under Structural Breaks By William A. Barnett; Marcelle Chauvet; Danilo Leiva-Leon
  8. Deterministic and stochastic trends in the Lee-Carter mortality model By Laurent Callot; Niels Haldrup; Malene Kallestrup Lamb

  1. By: Li, Anzhi; Dorfman, Jeffrey H.
    Abstract: Managers of businesses that involve agricultural commodities need price forecasts in order to manage the risk in either the sale or purchase of agricultural commodities. Sometimes the most important forecasting component is simply whether the price will move up or down. Such binary forecasts are commonly referred to as qualitative forecasts. This paper examines whether qualitative forecasting of commodity prices can be improved by the inclusion within the model specification of price forecasts for other commodities. We use hog prices as a test case and find strong support for the inclusion of other commodity price forecasts in the best forecasting models. Unfortunately, the out-of-sample performance of these models is mixed at best. Still, the results suggest qualitative forecasts can be improved through the inclusion of other commodity price forecasts in our models.
    Keywords: qualitative forecasting, model specification, Bayesian econometrics, Agribusiness, Demand and Price Analysis, Research Methods/Statistical Methods
    Date: 2014
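
The notion of a "qualitative" (up/down) forecast in the first abstract can be made concrete with a directional-accuracy score. The sketch below is purely illustrative, not the authors' Bayesian model, and the price series and up/down calls are hypothetical:

```python
import numpy as np

def directional_accuracy(prices, predicted_up):
    """Fraction of periods where the predicted direction matched the actual move."""
    actual_up = np.diff(prices) > 0
    return float(np.mean(actual_up == np.asarray(predicted_up, dtype=bool)))

prices = np.array([50.0, 52.0, 51.0, 53.0, 55.0])   # hypothetical hog prices
pred = [True, False, True, False]                    # hypothetical up/down calls
acc = directional_accuracy(prices, pred)             # 3 of 4 directions correct
```

A model that adds another commodity's price forecast as a predictor would be judged by whether this score improves out of sample.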
  2. By: Jing Zeng (Department of Economics, University of Konstanz, Germany)
    Abstract: Including disaggregate variables, or information extracted from them, in a forecasting model for an economic aggregate may improve forecasting accuracy. In this paper we suggest using the boosting method to select the disaggregate variables that are most helpful in predicting an aggregate of interest. We conduct a simulation study to investigate the variable selection ability of this method. To assess the forecasting performance, a recursive pseudo-out-of-sample forecasting experiment for six key Euro area macroeconomic variables is conducted. The results suggest that using boosting to select relevant predictors is a feasible and competitive approach in forecasting an aggregate.
    Keywords: aggregation, macroeconomic forecasting, componentwise boosting, factor analysis
    JEL: C22 C43 C52 C53 C82
    Date: 2014–09–23
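
The componentwise boosting named in the keywords can be sketched in a few lines. This is the generic componentwise L2-boosting idea (at each step, fit every candidate predictor alone to the current residuals and take a small, shrunken step along the best one), not the paper's exact implementation; the data below are synthetic:

```python
import numpy as np

def componentwise_boost(X, y, steps=50, nu=0.1):
    """Componentwise L2-boosting: nonzero entries of the returned
    coefficient vector mark the selected predictors."""
    n, p = X.shape
    coef = np.zeros(p)
    resid = y - y.mean()
    for _ in range(steps):
        # univariate OLS slope of the residuals on each column
        slopes = X.T @ resid / np.sum(X**2, axis=0)
        # residual sum of squares for each single-predictor fit
        sse = np.sum((resid[:, None] - X * slopes) ** 2, axis=0)
        j = int(np.argmin(sse))            # best single predictor this step
        coef[j] += nu * slopes[j]          # shrunken update
        resid -= nu * slopes[j] * X[:, j]
    return coef

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
y = 2.0 * X[:, 3] + 0.1 * rng.standard_normal(200)   # only column 3 matters
coef = componentwise_boost(X, y)
```

With this setup the procedure concentrates its updates on column 3, which is the variable-selection behavior the simulation study in the paper examines.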
  3. By: Camila Epprecht (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Paris I - Panthéon-Sorbonne, Pontifical Catholic University of Rio de Janeiro - Department of Electrical Engineering); Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Paris I - Panthéon-Sorbonne); Álvaro Veiga (Pontifical Catholic University of Rio de Janeiro - Department of Electrical Engineering)
    Abstract: In this paper, we compare two different variable selection approaches for linear regression models: Autometrics (automatic general-to-specific selection) and LASSO (ℓ1-norm regularization). In a simulation study, we show the performance of the methods considering the predictive power (out-of-sample forecasting) and the selection of the correct model and estimation (in-sample). The case where the number of candidate variables exceeds the number of observations is considered as well. We also analyze the properties of the estimators, comparing them to the oracle estimator. Finally, we compare both methods in an application to GDP forecasting.
    Keywords: Model selection; variable selection; GETS; Autometrics; LASSO; adaptive LASSO; sparse models; oracle property; time series; GDP forecasting
    Date: 2013–11
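
The LASSO's selection mechanism is easiest to see in the special case of an orthonormal design (X'X = I), where the solution has a closed form: soft-threshold the OLS coefficients. This is a textbook sketch of the general idea, not the paper's setup, and the coefficient values are hypothetical:

```python
import numpy as np

def soft_threshold(b, lam):
    """Closed-form LASSO solution under an orthonormal design."""
    return np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)

beta_ols = np.array([3.0, -0.2, 0.5, -2.5])   # hypothetical OLS estimates
beta_lasso = soft_threshold(beta_ols, lam=0.6)
# small coefficients are shrunk exactly to zero -> variable selection
selected = np.flatnonzero(beta_lasso)
```

Autometrics instead searches over subsets via general-to-specific testing, which is why the two methods can select quite different models on the same data.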
  4. By: Chao, Wang; Richard, Gerlach
    Abstract: The realized GARCH framework is extended to incorporate the realized range, and the intra-day range, as potentially more efficient series of information than realized variance or daily returns, for the purpose of volatility and tail risk forecasting in a financial time series. A Bayesian adaptive Markov chain Monte Carlo method is employed for estimation and forecasting. Compared to a range of well known parametric GARCH models, predictive log-likelihood results across six market index return series favor the realized GARCH models incorporating the realized range. Further, these same models also compare favourably for tail risk forecasting, both during and after the global financial crisis.
    Keywords: Tail Risk Forecasting; Predictive Likelihood; Realized GARCH; Realized Variance; Intra-day Range; Realized Range
    Date: 2014–11–07
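
A realized-range measure of the kind the abstract describes can be sketched as a sum of squared intra-day log high-low ranges with a Parkinson-style scaling. The exact construction in the paper may differ, and the intra-day highs and lows below are hypothetical:

```python
import numpy as np

def realized_range(highs, lows):
    """Sum of squared intra-day log ranges, scaled by 4*ln(2)
    (Parkinson-style), as a daily variance proxy."""
    log_range = np.log(np.asarray(highs) / np.asarray(lows))
    return np.sum(log_range**2) / (4.0 * np.log(2.0))

# hypothetical intra-day highs and lows over three subintervals of one day
highs = [101.2, 101.5, 100.9]
lows = [100.4, 100.7, 100.1]
rr = realized_range(highs, lows)
```

Because the high-low range uses the whole intra-day price path rather than only period-end prices, it can be a more efficient volatility proxy than squared returns, which is the motivation the abstract gives.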
  5. By: Jozef Baruník (Institute of Economic Studies, Charles University, Opletalova 26, 110 00, Prague, CR and Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Pod Vodarenskou Vezi 4, 182 00, Prague, Czech Republic.); František Čech (Institute of Economic Studies, Charles University, Opletalova 26, 110 00, Prague, CR and Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Pod Vodarenskou Vezi 4, 182 00, Prague, Czech Republic.)
    Abstract: We introduce a methodology for dynamic modelling and forecasting of realized covariance matrices based on a generalization of the heterogeneous autoregressive model (HAR) for realized volatility. Multivariate extensions of the popular HAR framework leave substantial information unmodeled in the residuals. We propose to employ a system of seemingly unrelated regressions to capture this information. The newly proposed generalized heterogeneous autoregressive (GHAR) model is tested against natural competing models. In order to show the economic and statistical gains of the GHAR model, portfolios of various sizes are used. We find that our modeling strategy outperforms competing approaches in terms of statistical precision, and provides economic gains in terms of the mean-variance trade-off. Additionally, our results provide a comprehensive comparison of the performance when the realized covariance and the more efficient, noise-robust multivariate realized kernel estimator are used. We study the contribution of both estimators across different sampling frequencies, and we show that the multivariate realized kernel estimator delivers further gains compared to realized covariance estimated at higher frequencies.
    Keywords: GHAR, portfolio optimisation, economic evaluation
    JEL: C18 C58 G15
    Date: 2014–08
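
The univariate HAR model that GHAR generalizes regresses tomorrow's realized volatility on its daily value and its weekly (5-day) and monthly (22-day) averages. A minimal sketch of building that design matrix, on a toy RV series rather than real data:

```python
import numpy as np

def har_design(rv):
    """Build the HAR regressors: intercept, daily RV, 5-day and 22-day
    average RV, each paired with next-day RV as the target."""
    rv = np.asarray(rv, dtype=float)
    rows = []
    for t in range(21, len(rv) - 1):
        rows.append([1.0,
                     rv[t],                  # daily component
                     rv[t - 4:t + 1].mean(), # weekly (5-day) average
                     rv[t - 21:t + 1].mean() # monthly (22-day) average
                     ])
    X = np.array(rows)
    y = rv[22:]
    return X, y

rv = np.abs(np.sin(np.arange(100))) + 0.1        # toy realized-volatility series
X, y = har_design(rv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # OLS HAR fit
```

GHAR stacks one such equation per element of the realized covariance matrix and estimates them jointly as seemingly unrelated regressions to pick up the cross-equation residual correlation the abstract mentions.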
  6. By: Bekaert, Geert; Hoerova, Marie
    Abstract: We decompose the squared VIX index, derived from US S&P 500 options prices, into the conditional variance of stock returns and the equity variance premium. We evaluate a plethora of state-of-the-art volatility forecasting models to produce an accurate measure of the conditional variance. We then examine the predictive power of the VIX and its two components for stock market returns, economic activity and financial instability. The variance premium predicts stock returns while the conditional stock market variance predicts economic activity and has a relatively higher predictive power for financial instability than does the variance premium.
    JEL: C22 C52 G12 E32
    Keywords: economic uncertainty, financial instability, option implied volatility, realized volatility, risk aversion, risk-return trade-off, stock return predictability, variance risk premium, VIX
    Date: 2014–05
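
The decomposition in the abstract is arithmetically simple: the squared VIX is a risk-neutral expected variance, and subtracting a physical conditional-variance forecast leaves the variance premium. The numbers below are hypothetical, for illustration only:

```python
# Squared VIX = risk-neutral expected 30-day variance (in percent terms);
# variance premium = squared VIX minus a physical conditional-variance forecast.
vix = 20.0                       # hypothetical VIX index level
squared_vix = vix**2             # risk-neutral expected variance: 400.0
cond_var_forecast = 310.0        # hypothetical model forecast of conditional variance
variance_premium = squared_vix - cond_var_forecast
```

The paper's contribution lies in producing an accurate `cond_var_forecast`; the quality of the premium estimate is only as good as that forecast.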
  7. By: William A. Barnett; Marcelle Chauvet; Danilo Leiva-Leon
    Abstract: This paper provides a framework for the early assessment of current U.S. nominal GDP growth, which has been considered a potential new monetary policy target. The nowcasts are computed using the exact amount of information that policy-makers have available at the time predictions are made. However, real-time information arrives at different frequencies and asynchronously, which poses challenges of mixed frequencies, missing data and ragged edges. This paper proposes a multivariate state-space model that not only takes into account asynchronous information inflow, but also allows for potential parameter instability. We use small-scale confirmatory factor analysis in which the candidate variables are selected based on their ability to forecast nominal GDP. The model is fully estimated in one step using a non-linear Kalman filter, which is applied to obtain optimal inferences simultaneously on both the dynamic factor and parameters. In contrast to principal component analysis, the proposed factor model captures the co-movement rather than the variance underlying the variables. We compare the predictive ability of the model with other univariate and multivariate specifications. The results indicate that the proposed model containing information on real economic activity, inflation, interest rates and Divisia monetary aggregates produces the most accurate real-time nowcasts of nominal GDP growth.
    Keywords: Business fluctuations and cycles, Econometric and statistical methods, Inflation and prices
    JEL: C32 E27 E31 E32
    Date: 2014
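
The filtering step at the heart of such nowcasting models can be illustrated with a one-dimensional local-level Kalman filter. This is far simpler than the paper's multivariate, time-varying-parameter model, and the observations and noise variances below are arbitrary, but it shows how a latent state estimate is updated as each new observation arrives:

```python
import numpy as np

def kalman_local_level(y, q=0.1, r=1.0):
    """Filtered means of a random-walk state observed with noise.
    q = state innovation variance, r = observation noise variance."""
    m, p = 0.0, 1.0                  # prior mean and variance of the state
    means = []
    for obs in y:
        p_pred = p + q               # predict: state variance grows
        k = p_pred / (p_pred + r)    # Kalman gain
        m = m + k * (obs - m)        # update toward the new observation
        p = (1 - k) * p_pred         # posterior variance shrinks
        means.append(m)
    return np.array(means)

y = np.array([1.0, 1.2, 0.9, 1.1])   # hypothetical monthly indicator readings
filtered = kalman_local_level(y)
```

In the nowcasting setting, observations arriving asynchronously and at mixed frequencies simply mean some update steps are skipped or use partial measurement vectors; the predict/update logic is the same.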
  8. By: Laurent Callot (VU University Amsterdam, the Tinbergen Institute and CREATES); Niels Haldrup (Aarhus University and CREATES); Malene Kallestrup Lamb (Aarhus University and CREATES)
    Abstract: The Lee and Carter (1992) model assumes that the deterministic and stochastic time series dynamics load with identical weights when describing the development of age-specific mortality rates. Effectively, this means that the main characteristics of the model simplify to a random walk model with age-specific drift components. But restricting the adjustment mechanism of the stochastic and linear trend components to be identical may be too strong a simplification. In fact, the presence of a stochastic trend component may itself result from a bias induced by properly fitting the linear trend that characterizes mortality data. We find empirical evidence that this feature of the Lee-Carter model overly restricts the system dynamics, and we suggest separating the deterministic and stochastic time series components, to the benefit of improved fit and forecasting performance. In fact, we find that the classical Lee-Carter model will otherwise overestimate the reduction of mortality for the younger age groups and underestimate the reduction of mortality for the older age groups. In practice, our recommendation means that the Lee-Carter model should be formulated as a two-factor (or several-factor) model, rather than a one-factor model, where one factor is deterministic and the other factors are stochastic. This feature generalizes to the range of models that extend the Lee-Carter model in various directions.
    Keywords: Mortality modelling, factor models, principal components, stochastic and deterministic trends
    JEL: C2 C23 J1 J11
    Date: 2014–11–19
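
The random-walk-with-drift dynamics the abstract refers to govern the Lee-Carter period index kappa_t: the h-step forecast is the last observed value plus h times the average historical change. A minimal sketch with a hypothetical kappa series:

```python
import numpy as np

def rw_drift_forecast(kappa, h):
    """Random-walk-with-drift forecast of the Lee-Carter period index:
    kappa_{T+s} = kappa_T + s * drift, drift estimated from first differences."""
    drift = np.mean(np.diff(kappa))          # average annual change
    return kappa[-1] + drift * np.arange(1, h + 1)

kappa = np.array([10.0, 9.0, 8.5, 7.5, 7.0])   # hypothetical period index
fc = rw_drift_forecast(kappa, 3)
```

In the classical model every age group inherits this single stochastic trend with its own fixed loading; the paper's point is that letting deterministic and stochastic trend components load separately relaxes exactly this restriction.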

This nep-for issue is ©2014 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.