nep-for New Economics Papers
on Forecasting
Issue of 2012‒12‒10
nine papers chosen by
Rob J Hyndman
Monash University

  1. Forecasting Covariance Matrices: A Mixed Frequency Approach By Roxana Halbleib; Valeri Voev
  2. Short-term forecasts of French GDP: a dynamic factor model with targeted predictors. By Bessec, M.
  3. Are CDS spreads predictable? An analysis of linear and non-linear forecasting models By Avino, Davide; Nneji, Ogonna
  4. Does Bayesian Shrinkage Help to Better Reflect What Happened during the Subprime Crisis? By Olfa Kaabia; Ilyes Abid; Khaled Guesmi
  5. Modeling Movements in Oil, Gold, Forex and Market Indices using Search Volume Index and Twitter Sentiments By Tushar Rao; Saket Srivastava
  6. Tails of Inflation Forecasts and Tales of Monetary Policy By Andrade, P.; Ghysels, E.; Idier, J.
  7. Performance of a reciprocity model in predicting a positive reciprocity decision By Bhirombhakdi, Kornpob; Potipiti, Tanapong
  8. Modelling Primary Energy Consumption under Model Uncertainty By Zsuzsanna Csereklyei; Stefan Humer
  9. Regime-specific predictability in predictive regressions. By Gonzalo, Jesús; Pitarakis, Jean-Yves

  1. By: Roxana Halbleib (Department of Economics, University of Konstanz, Germany); Valeri Voev (School of Economics and Management, Aarhus University, Denmark)
    Abstract: In this paper we introduce a new method of forecasting covariance matrices of large dimensions by exploiting the theoretical and empirical potential of using mixed-frequency sampled data. The idea is to use high-frequency (intraday) data to model and forecast daily realized volatilities, combined with low-frequency (daily) data as input to the correlation model. The main theoretical contribution of the paper is to derive statistical and economic conditions that ensure that a mixed-frequency forecast has a smaller mean squared forecast error than a similar pure low-frequency or pure high-frequency specification. The conditions are very general and do not rely on distributional assumptions about the forecasting errors or on a particular model specification. Moreover, we provide empirical evidence that, besides overcoming the computational burden of pure high-frequency specifications, the mixed-frequency forecasts are particularly useful in turbulent financial periods, such as the recent financial crisis, and always outperform the pure low-frequency specifications.
    Keywords: Multivariate volatility, Volatility forecasting, High-frequency data, Realized variance, Realized covariance
    JEL: C32 C53
    Date: 2012–10–12
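As a purely illustrative sketch (simulated data and a plain scaling rule, not the authors' actual estimator), the mixed-frequency idea of pairing realized variances from intraday returns with a correlation matrix from daily returns might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated inputs (illustrative only): 5 assets, 78 intraday
# returns for the current day, and 250 past daily return vectors.
n_assets, n_intraday, n_days = 5, 78, 250
intraday = rng.normal(scale=0.001, size=(n_intraday, n_assets))
daily = rng.normal(scale=0.01, size=(n_days, n_assets))

# High-frequency part: realized variance per asset is the sum
# of squared intraday returns.
realized_var = (intraday ** 2).sum(axis=0)

# Low-frequency part: correlation matrix from daily returns.
corr = np.corrcoef(daily, rowvar=False)

# Mixed-frequency covariance forecast: rescale the daily
# correlations by the realized volatilities.
vol = np.sqrt(realized_var)
cov_forecast = corr * np.outer(vol, vol)
```

The diagonal of `cov_forecast` equals the realized variances, while the off-diagonal structure comes entirely from the lower-frequency correlation estimate.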
  2. By: Bessec, M.
    Abstract: In recent years, factor models have received increasing attention from both econometricians and practitioners in the forecasting of macroeconomic variables. In this context, Bai and Ng (2008) find an improvement in selecting indicators according to the forecast variable prior to factor estimation (targeted predictors). In particular, they propose using the LARS-EN algorithm to remove irrelevant predictors. In this paper, we adapt the Bai and Ng procedure to a setup in which data releases are delayed and staggered. In the pre-selection step, we replace actual data with estimates obtained on the basis of past information, where the structure of the available information replicates the one a forecaster would face in real time. On the reduced dataset, we estimate the dynamic factor model of Giannone, Reichlin and Small (2008) and Doz, Giannone and Reichlin (2011), which is particularly suitable for very short-term forecasts of GDP. A pseudo real-time evaluation on French data shows the potential of our approach.
    Keywords: GDP forecasting, factor models, variable selection, targeted predictors.
    JEL: C22 E32 E37
    Date: 2012
  3. By: Avino, Davide; Nneji, Ogonna
    Abstract: This paper investigates the forecasting performance for CDS spreads of both linear and non-linear models by analysing the iTraxx Europe index during the financial crisis period that began in mid-2007. The statistical and economic significance of the models’ forecasts are evaluated by employing various metrics and trading strategies, respectively. Although these models provide good in-sample performances, we find that the non-linear Markov switching models underperform linear models out-of-sample. In general, our results show some evidence of predictability of iTraxx index spreads. Linear models, in particular, generate positive Sharpe ratios for some of the strategies implemented, thus casting some doubt on the efficiency of the European CDS index market.
    Keywords: Credit default swap spreads; iTraxx; Forecasting; Markov switching; Market efficiency; Technical trading rules
    JEL: C22 G20 G01
    Date: 2012–11–23
  4. By: Olfa Kaabia; Ilyes Abid; Khaled Guesmi
    Abstract: We study the contagion effects of a U.S. housing shock on OECD countries over the period of the subprime crisis. Considering a large database containing national macroeconomic, financial, and trade dynamic variables for 17 OECD countries, we evaluate forecasting accuracy, and perform a structural analysis exercise using VAR models of different sizes: a SMALL VAR estimated by OLS, and MEDIUM and LARGE VARs estimated by a Bayesian shrinkage procedure. Our main findings are threefold. First, the largest specification outperforms the smallest in terms of forecast accuracy. Second, the MEDIUM VAR outperforms both the LARGE BVAR and the SMALL VAR in the case of structural analysis, so the MEDIUM VAR is sufficient to provide plausible impulse responses and to reproduce more realistically what happened during the subprime crisis. Third, the Bayesian shrinkage procedure is preferable to the standard OLS estimation in the case of an international contagion study.
    Keywords: Contagion, subprime crisis, OECD housing markets, VAR/BVAR models, Bayesian shrinkage
    JEL: F47 C11 C32
    Date: 2012
  5. By: Tushar Rao (NSIT-Delhi); Saket Srivastava (IIIT-Delhi)
    Abstract: Studying forecasting models built on large-scale microblog discussions and search-behavior data can provide good insight into market movements. In this work we collected a dataset of 2 million tweets and search volume index (SVI) data from Google for the period June 2010 to September 2011. We study a comprehensive set of causal relationships and develop a unified modelling approach for various market securities, such as equities (the Dow Jones Industrial Average (DJIA) and the NASDAQ-100), commodities (oil and gold), and Euro forex rates. We also investigate the lagged, statistically causal relations between Twitter sentiment developed during active trading days and market-inactive days, in combination with the public's search behavior, ahead of any change in prices/indices. Our results show significant lagged relationships, with correlations as high as 0.82 between search volumes and the gold price in USD. We find weekly directional accuracy (up/down prediction) of up to 94.3% for the DJIA and 90% for the NASDAQ-100, with a significant reduction in mean absolute percentage error for all the forecasting models.
    Date: 2012–12
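The lagged-correlation analysis this abstract describes can be illustrated with a small sketch on simulated weekly series; the series and the one-week lead built into them are assumptions for illustration, not the paper's data or method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative weekly series: a search-volume index, and a price
# series constructed to partly follow last week's search volume.
n = 60
svi = rng.normal(size=n)
price = 0.8 * np.roll(svi, 1) + 0.2 * rng.normal(size=n)
price[0] = rng.normal()  # first value has no valid lagged SVI

def lagged_corr(x, y, lag):
    """Pearson correlation between x[t - lag] and y[t]."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

# Correlation peaks at the lag where search volume leads price,
# here one week by construction.
corrs = {lag: lagged_corr(svi, price, lag) for lag in range(4)}
best_lag = max(corrs, key=lambda k: corrs[k])
```

Scanning `corrs` for its peak is the basic device behind statements like "search volume leads the price by one week with correlation 0.82."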
  6. By: Andrade, P.; Ghysels, E.; Idier, J.
    Abstract: We introduce a new measure called Inflation-at-Risk (I@R) associated with (left and right) tail inflation risk. We estimate I@R using survey-based density forecasts. We show that it contains information not covered by usual inflation risk indicators which focus on inflation uncertainty and do not distinguish between the risks of low or high future inflation outcomes. Not only the extent but also the asymmetry of inflation risks evolve over time. Moreover, changes in this asymmetry have an impact on future inflation realizations as well as on the current interest rate central banks target.
    Keywords: inflation expectations, risk, uncertainty, survey data, inflation dynamics, monetary policy.
    JEL: E31 E37 E43 E52
    Date: 2012
  7. By: Bhirombhakdi, Kornpob; Potipiti, Tanapong
    Abstract: This study experimentally tests the performance in predicting decisions of a reciprocity model proposed by Dufwenberg et al. (2004). By applying a new approach, the study directly and individually predicts a subject's future decision from his past decision. The prediction performance is measured by the rate of correct predictions (accuracy) and the gain in the rate of correct predictions (informativeness). Six scenarios of a trust game are used to test the model's performance. Further, we compare the performance of the model with two other prediction methods: one uses a decision in a dictator game to predict a decision in a trust game; the other uses personal information, including IQ-test scores, personal attitudes, and socio-economic factors. Seventy-nine undergraduate students participated in this hand-run experimental study. The results show that the reciprocity model has the best performance of the three prediction methods.
    Keywords: Reciprocity; Kindness; Performance; Trust Game
    JEL: C71 C91
    Date: 2012–10–29
  8. By: Zsuzsanna Csereklyei (Department of Economics, Vienna University of Economics and Business); Stefan Humer (Department of Economics, Vienna University of Economics and Business)
    Abstract: This paper examines the long-term relationship between primary energy consumption and other key macroeconomic variables, including real GDP, labour force, capital stock and technology, using a panel dataset for 64 countries over the period 1965–2009. Deploying panel error correction models, we find that there is a positive relationship running from physical capital, GDP, and population to primary energy consumption. We observe, however, a negative relationship between total factor productivity and primary energy usage. Significant differences arise in the magnitude of the cointegration coefficients when we allow for differences in geopolitics and wealth levels. We also argue that inference on the basis of a single model, without taking model uncertainty into account, can lead to biased conclusions. Consequently, we address this problem by applying simple model averaging techniques to the estimated panel cointegration models. We find that tackling the uncertainty associated with selecting a single model with model averaging techniques leads to a more accurate representation of the link between energy consumption and the other macroeconomic variables, and to a significantly increased out-of-sample forecast performance.
    Keywords: Energy Consumption; Panel Cointegration Models; Model Averaging
    JEL: C33 C52 Q41 Q43
    Date: 2012–11
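The simplest of the model averaging techniques the abstract mentions is an equal-weight average of rival forecasts. The sketch below illustrates it on simulated data; the variables, the three linear specifications, and the OLS fits are illustrative assumptions, not the paper's panel cointegration setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative data: "energy" driven by GDP and capital, with
# three rival linear models fit on a training window.
n = 120
gdp = rng.normal(size=n)
capital = rng.normal(size=n)
energy = 1.5 * gdp + 0.5 * capital + rng.normal(scale=0.3, size=n)

train, test = slice(0, 100), slice(100, None)

def ols_forecast(X_train, y_train, X_test):
    """OLS fit on the training window, forecast on the test window."""
    beta, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    return X_test @ beta

ones = np.ones(n)
designs = {
    "gdp only": np.column_stack([ones, gdp]),
    "capital only": np.column_stack([ones, capital]),
    "both": np.column_stack([ones, gdp, capital]),
}

forecasts = {name: ols_forecast(X[train], energy[train], X[test])
             for name, X in designs.items()}

# Equal-weight model averaging: the mean of the rival forecasts.
averaged = np.mean(list(forecasts.values()), axis=0)

def mse(pred):
    return np.mean((energy[test] - pred) ** 2)

errors = {name: mse(f) for name, f in forecasts.items()}
errors["averaged"] = mse(averaged)
```

The averaged forecast hedges against picking a badly misspecified single model, which is the motivation the abstract gives for preferring model averaging over selecting one specification.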
  9. By: Gonzalo, Jesús; Pitarakis, Jean-Yves
    Abstract: Predictive regressions are linear specifications linking a noisy variable such as stock returns to past values of a very persistent regressor with the aim of assessing the presence of predictability. Key complications that arise are the potential presence of endogeneity and the poor adequacy of asymptotic approximations. In this article, we develop tests for uncovering the presence of predictability in such models when the strength or direction of predictability may alternate across different economically meaningful episodes. An empirical application reconsiders the dividend yield-based return predictability and documents a strong predictability that is countercyclical, occurring solely during bad economic times. This article has online supplementary materials.
    Keywords: Endogeneity; Persistence; Return predictability; Threshold models;
    Date: 2012–05–24

This nep-for issue is ©2012 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found on the NEP website. For comments, please write to the director of NEP, Marco Novarese. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.