nep-for New Economics Papers
on Forecasting
Issue of 2016‒04‒23
ten papers chosen by
Rob J Hyndman
Monash University

  1. Dynamic Factor Model with Infinite Dimensional Factor Space: Forecasting By Mario Forni; Alessandro Giovannelli; Marco Lippi; Stefano Soccorsi
  2. Dynamic Factor Models with Infinite-Dimensional Factor Space. Asymptotic Analysis By Mario Forni; Marc Hallin; Marco Lippi; Paolo Zaffaroni
  3. Comparing the market risk premia forecasts in JSE and NYSE equity markets By Leoni Eleni Oikonomikou
  4. Credit Funding and Banking Fragility: An Empirical Analysis for Emerging Economies By Alexander Guarín-López; Ignacio Lozano-Espitia
  5. Real-Time Forecasting for Monetary Policy Analysis: The Case of Sveriges Riksbank By Iversen, Jens; Laseen, Stefan; Lundvall, Henrik; Söderström, Ulf
  6. Modeling and Forecasting (Un)Reliable Realized Covariances for More Reliable Financial Decisions By Tim Bollerslev; Andrew J. Patton; Rogier Quaedvlieg
  7. Deep neural networks, gradient-boosted trees, random forests: Statistical arbitrage on the S&P 500 By Krauss, Christopher; Do, Xuan Anh; Huck, Nicolas
  8. Bayesian Compressed Vector Autoregressions By Davide Pettenuzzo; Gary Koop; Dimitris Korobilis
  9. Is random forest a superior methodology for predicting poverty? An empirical assessment By Pave Sohnesen, Thomas; Stender, Niels
  10. How Fast Can China Grow? The Middle Kingdom’s Prospects to 2030 By Jeannine Bailliu; Mark Kruger; Argyn Toktamyssov; Wheaton Welbourn

  1. By: Mario Forni; Alessandro Giovannelli; Marco Lippi; Stefano Soccorsi
    Abstract: The paper compares the pseudo real-time forecasting performance of three Dynamic Factor Models: (i) the standard principal-component model of Stock and Watson (2002a), (ii) the model based on generalized principal components of Forni et al. (2005), and (iii) the model recently proposed in Forni et al. (2015) and Forni et al. (2016). We employ a large monthly dataset of macroeconomic and financial time series for the US economy, which includes the Great Moderation, the Great Recession and the subsequent recovery. Using a rolling window for estimation and prediction, we find that (iii) neatly outperforms (i) and (ii) in the Great Moderation period for both Industrial Production and Inflation, and for Inflation over the full sample. However, (iii) is outperformed by (i) and (ii) over the full sample for Industrial Production.
    Date: 2016–03
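The pseudo real-time evaluation scheme described above can be sketched in a few lines. This is a hypothetical illustration of the rolling-window design only, using a simple AR(1) forecaster as a stand-in for the factor models the paper actually compares; the series, window length, and fitting rule are all invented for the example.

```python
def ar1_fit(y):
    """OLS fit of y_t = a + b * y_{t-1} on a list of floats."""
    x, z = y[:-1], y[1:]
    n = len(x)
    mx, mz = sum(x) / n, sum(z) / n
    b = sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z)) / \
        sum((xi - mx) ** 2 for xi in x)
    return mz - b * mx, b

def rolling_forecast_mse(series, window):
    """One-step-ahead forecasts, re-estimating the model on each
    rolling window (mimicking pseudo real-time prediction)."""
    errs = []
    for t in range(window, len(series)):
        a, b = ar1_fit(series[t - window:t])
        errs.append((series[t] - (a + b * series[t - 1])) ** 2)
    return sum(errs) / len(errs)

# toy persistent series with a deterministic "noise" term
y = [0.0]
for i in range(1, 60):
    y.append(0.8 * y[-1] + ((i * 37) % 11 - 5) / 10)

mse = rolling_forecast_mse(y, window=30)
```

The point of the design is that the model estimated at time t never sees data beyond t, so the resulting MSE reflects genuine out-of-sample accuracy.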
  2. By: Mario Forni (Università di Modena e Reggio Emilia, CEPR and RECent); Marc Hallin (ECARES, Université Libre de Bruxelles); Marco Lippi (EIEF); Paolo Zaffaroni (Imperial College London and Università di Roma La Sapienza)
    Abstract: Factor models, all particular cases of the Generalized Dynamic Factor Model (GDFM) introduced in Forni, Hallin, Lippi and Reichlin (2000), have become extremely popular in the theory and practice of large panels of time series data. The asymptotic properties (consistency and rates) of the corresponding estimators have been studied in Forni, Hallin, Lippi and Reichlin (2004). Those estimators, however, rely on Brillinger's dynamic principal components, and thus involve two-sided filters, which leads to rather poor forecasting performances. No such problem arises with estimators based on standard (static) principal components, which have been dominant in this literature. On the other hand, the consistency of those static estimators requires the assumption that the space spanned by the factors has finite dimension, which severely restricts the generality afforded by the GDFM. This paper derives the asymptotic properties of a semiparametric estimator of the loadings and common shocks based on one-sided filters recently proposed by Forni, Hallin, Lippi and Zaffaroni (2015). Consistency and exact rates of convergence are obtained for this estimator, under a general class of GDFMs that does not require a finite-dimensional factor space. A Monte Carlo experiment and an empirical exercise on US macroeconomic data corroborate those theoretical results and demonstrate the excellent performance of those estimators in out-of-sample forecasting.
    Date: 2016
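The one-sided versus two-sided filter distinction at the heart of this paper has a very concrete consequence for forecasting, which a toy moving-average example makes visible: a two-sided (centered) filter needs future observations and is therefore undefined at the end of the sample, exactly where forecasts are built. This is a stylized illustration, not the paper's estimator.

```python
def two_sided_ma(y, k):
    """Centered moving average: needs k future values, so the last
    k observations of the sample are lost."""
    return [sum(y[t - k:t + k + 1]) / (2 * k + 1)
            for t in range(k, len(y) - k)]

def one_sided_ma(y, k):
    """One-sided moving average of y_t, ..., y_{t-k}: defined up to
    the final observation, where forecasts are needed."""
    return [sum(y[t - k:t + 1]) / (k + 1) for t in range(k, len(y))]

y = [float(v) for v in range(10)]
two = two_sided_ma(y, 2)   # loses 2 points at each end of the sample
one = one_sided_ma(y, 2)   # defined at the sample end
```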
  3. By: Leoni Eleni Oikonomikou (Georg-August University Göttingen)
    Abstract: This paper examines the evidence regarding predictability in the market risk premium using artificial neural networks (ANNs), namely the Elman Network (EN) and the Higher Order Neural Network (HONN), univariate ARMA, and exponential smoothing techniques such as Single Exponential Smoothing (SES) and the Exponentially Weighted Moving Average (EWMA). The contribution of this paper is the inclusion of the South African market risk premium in the forecasting exercise and its direct comparison with US forecasting results. The market risk premium is defined as the expected rate of return on the market portfolio in excess of the short-term interest rate for each market. All data are daily, from January 2007 to December 2014. Elman networks provide superior results among the tested models in both the in-sample and out-of-sample periods, as well as across the tested markets. In general, the neural networks beat the naive benchmark model and perform better than their linear counterparts. The forecasting models successfully capture patterns in the data that improve forecasting accuracy. Therefore, they can be applied for trading and investment purposes.
    Keywords: forecasting performance; market risk premium; South African stock market; US stock market
    JEL: C45 C52 G15 G17
    Date: 2016–04–14
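Of the benchmarks in this paper, single exponential smoothing is the simplest to state: the smoothed level is a convex combination of the latest observation and the previous level. A minimal sketch (the series and alpha value below are illustrative, not the paper's data):

```python
def ses(y, alpha):
    """Single exponential smoothing: s_t = alpha*y_t + (1-alpha)*s_{t-1},
    initialized at the first observation. The final value of the output
    serves as the one-step-ahead forecast."""
    s = y[0]
    out = [s]
    for v in y[1:]:
        s = alpha * v + (1 - alpha) * s
        out.append(s)
    return out
```

EWMA forecasts of a volatility or return series follow the same recursion, with alpha governing how aggressively the forecast tracks recent data.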
  4. By: Alexander Guarín-López; Ignacio Lozano-Espitia
    Abstract: This paper proposes an empirical model to identify and forecast banking fragility episodes using information on the credit funding sources. We predict the probability of occurrence of such episodes 0, 3 and 6 months ahead employing a Bayesian Model Averaging of logistic regressions. The exercises use monthly balance sheet data since the mid-nineties for the banking systems of nine emerging economies: Brazil, Colombia, Croatia, Czech Republic, Mexico, Peru, Poland, Taiwan and Turkey. Our findings suggest that the increasing use of wholesale funds to support credit expansion provides warning signals of banking fragility. The in-sample and out-of-sample predictions indicate that the proposed technique is a suitable tool for forecasting short-term financial fragility events. Therefore, monitoring these funds through our tool could become useful in prudential practice.
    Keywords: credit cycle, financial stability, wholesale funds, balance sheet, logistic regression model, Bayesian model averaging
    JEL: C11 C52 C53 G01 G21
    Date: 2016–03–04
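The Bayesian Model Averaging step can be sketched generically: each candidate logistic regression produces a predicted probability of a fragility episode, and the predictions are averaged with posterior model weights. The snippet below is a hypothetical illustration using the common BIC approximation to the model weights (exp(-BIC/2), renormalized); the paper's exact prior and weighting scheme may differ.

```python
import math

def bma_probability(probs, bics):
    """Combine per-model event probabilities with weights proportional
    to exp(-BIC/2) (a standard approximation to posterior model
    probabilities). Lower BIC => higher weight."""
    m = min(bics)                                  # rescale for stability
    w = [math.exp(-(b - m) / 2) for b in bics]
    s = sum(w)
    return sum(wi / s * p for wi, p in zip(w, probs))
```

A model with a much worse BIC contributes essentially nothing to the averaged probability, so BMA shades the forecast toward the models the data support.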
  5. By: Iversen, Jens; Laseen, Stefan; Lundvall, Henrik; Söderström, Ulf
    Abstract: We evaluate forecasts made in real time to support monetary policy decisions at Sveriges Riksbank (the central bank of Sweden) from 2007 to 2013. We compare forecasts made with a DSGE model and a BVAR model with judgemental forecasts published by the Riksbank, and we evaluate the usefulness of conditioning information for the model-based forecasts. We also study the perceived usefulness of model forecasts for central bank policymakers when producing the judgemental forecasts.
    Keywords: Forecast evaluation; Inflation targeting; Monetary policy; Real-time forecasting
    JEL: E37 E52
    Date: 2016–03
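Forecast evaluations of this kind typically compare models on root mean squared error over the same set of real-time forecast origins. As a minimal sketch of the comparison (the numbers are invented, not Riksbank data):

```python
def rmse(forecasts, outcomes):
    """Root mean squared forecast error over matched forecast/outcome pairs."""
    n = len(forecasts)
    return (sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / n) ** 0.5

# hypothetical inflation forecasts from two models against outturns
outturns   = [1.2, 0.8, 1.5, 2.0]
model_a    = [1.0, 1.0, 1.4, 1.8]
model_b    = [1.2, 0.8, 1.5, 2.0]

rmse_a = rmse(model_a, outturns)
rmse_b = rmse(model_b, outturns)
```

Comparing model-based and judgemental forecasts on equal footing requires that both use only the data available at each forecast origin, which is why the real-time aspect of this dataset matters.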
  6. By: Tim Bollerslev (Duke University, NBER and CREATES); Andrew J. Patton (Duke University); Rogier Quaedvlieg (Maastricht University)
    Abstract: We propose a new framework for modeling and forecasting common financial risks based on (un)reliable realized covariance measures constructed from high-frequency intraday data. Our new approach explicitly incorporates the effect of measurement errors and time-varying attenuation biases into the covariance forecasts, by allowing the ex-ante predictions to respond more (less) aggressively to changes in the ex-post realized covariance measures when they are more (less) reliable. Applying the new procedures in the construction of minimum variance and minimum tracking error portfolios results in reduced turnover and statistically superior positions compared to existing procedures. Translating these statistical improvements into economic gains, we find that under empirically realistic assumptions a risk-averse investor would be willing to pay up to 170 basis points per year to shift to using the new class of forecasting models.
    Keywords: Common risks; realized covariances; forecasting; asset allocation; portfolio construction
    JEL: C32 C58 G11 G32
    Date: 2016–04–05
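Two ingredients of this framework can be sketched simply: the realized covariance itself (a sum of products of intraday returns) and the idea of letting the forecast respond less aggressively when the realized measure is less reliable. The `adaptive_update` function below is a stylized, hypothetical rendering of that second idea, not the paper's actual model.

```python
def realized_covariance(r1, r2):
    """Realized covariance over one day: sum of products of the two
    assets' intraday returns."""
    return sum(a * b for a, b in zip(r1, r2))

def adaptive_update(prev_forecast, measure, reliability, base_gain=0.5):
    """Move the forecast toward the new realized measure, shrinking the
    response when the measure is noisy (reliability in [0, 1]).
    A stylized stand-in for the paper's time-varying dynamics."""
    gain = base_gain * reliability
    return prev_forecast + gain * (measure - prev_forecast)
```

When reliability is zero the forecast ignores the new measure entirely; when it is one, the forecast responds at full strength, which is the attenuation-bias intuition in the abstract.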
  7. By: Krauss, Christopher; Do, Xuan Anh; Huck, Nicolas
    Abstract: In recent years, machine learning research has gained momentum: New developments in the field of deep learning allow for multiple levels of abstraction and are starting to supersede well-known and powerful tree-based techniques mainly operating on the original feature space. All these methods can be applied to various fields, including finance. This article implements and analyses the effectiveness of deep neural networks (DNN), gradient-boosted trees (GBT), random forests (RAF), and a combination (ENS) of these methods in the context of statistical arbitrage. Each model is trained on lagged returns of all stocks in the S&P 500, after elimination of survivor bias. From 1992 to 2015, daily one-day-ahead trading signals are generated based on the probability forecast of a stock to outperform the general market. The highest k probabilities are converted into long and the lowest k probabilities into short positions, thus censoring the less certain middle part of the ranking. Empirical findings are promising. A simple ensemble consisting of one deep neural network, one gradient-boosted tree, and one random forest produces out-of-sample returns exceeding 0.45 percent per day for k = 10, prior to transaction costs. Although profits have declined in recent years, our findings pose a severe challenge to the semi-strong form of market efficiency.
    Keywords: statistical arbitrage, deep learning, gradient-boosting, random forests, ensemble learning
    Date: 2016
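The trading rule in this abstract — long the k highest outperformance probabilities, short the k lowest, ignore the uncertain middle — is mechanical enough to sketch directly. The probabilities below are invented; the models that produce them are the paper's DNN/GBT/RAF ensemble.

```python
def long_short_positions(prob_by_ticker, k):
    """Convert per-stock outperformance probabilities into positions:
    long the k most likely outperformers, short the k least likely,
    censoring the middle of the ranking."""
    ranked = sorted(prob_by_ticker, key=prob_by_ticker.get, reverse=True)
    return {"long": ranked[:k], "short": ranked[-k:]}

# hypothetical one-day probability forecasts
probs = {"AAA": 0.9, "BBB": 0.1, "CCC": 0.5, "DDD": 0.8}
positions = long_short_positions(probs, k=1)
```

Censoring the middle of the ranking concentrates the portfolio on the names where the classifiers disagree most strongly with the market, which is where any predictive signal is largest.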
  8. By: Davide Pettenuzzo (Brandeis University); Gary Koop (University of Strathclyde); Dimitris Korobilis (University of Glasgow)
    Abstract: Macroeconomists are increasingly working with large Vector Autoregressions (VARs) where the number of parameters vastly exceeds the number of observations. Existing approaches either involve prior shrinkage or the use of factor methods. In this paper, we develop an alternative based on ideas from the compressed regression literature. It involves randomly compressing the explanatory variables prior to analysis. A huge dimensional problem is thus turned into a much smaller, more computationally tractable one. Bayesian model averaging can be done over various compressions, attaching greater weight to compressions which forecast well. In a macroeconomic application involving up to 129 variables, we find compressed VAR methods to forecast better than either factor methods or large VAR methods involving prior shrinkage.
    Keywords: multivariate time series, random projection, forecasting
    JEL: C11 C32 C53
    Date: 2016–03
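The core compression step is a random projection of the regressors. The sketch below shows the idea in its simplest form: a Gaussian projection matrix maps each n-dimensional observation into m dimensions. It is a generic illustration of random projection, not the paper's particular compression scheme or prior.

```python
import random

def compress(X, m, seed=0):
    """Randomly project rows of X (each of length n) down to m dimensions
    using a Gaussian matrix Phi with entries scaled by 1/sqrt(m)."""
    rng = random.Random(seed)
    n = len(X[0])
    Phi = [[rng.gauss(0, 1 / m ** 0.5) for _ in range(n)] for _ in range(m)]
    return [[sum(p * x for p, x in zip(row, xt)) for row in Phi] for xt in X]
```

Because the projection is cheap and random, many different compressions can be drawn and then combined by Bayesian model averaging, weighting the ones that forecast well — which is the mechanism the abstract describes.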
  9. By: Pave Sohnesen,Thomas; Stender,Niels
    Abstract: Random forest is a common method for data-driven prediction in many fields of research, but within economics, and in the prediction of poverty in particular, it is rarely used. Comparing out-of-sample predictions for the same survey year in six countries shows that random forest is often more accurate than current common practice (multiple imputation with variables selected by stepwise regression and the Lasso), suggesting that this method could contribute to better poverty predictions. However, none of the methods consistently provides accurate predictions of poverty over time, highlighting that technical model fitting by any method within a single year is not always, by itself, sufficient for accurate predictions of poverty over time.
    Keywords: Regional Economic Development, Statistical & Mathematical Sciences, Poverty Monitoring & Analysis, Rural Poverty Reduction
    Date: 2016–03–18
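The two ideas a random forest combines — bootstrap resampling and majority voting across many simple trees — can be caricatured in a few lines. This is a deliberately minimal toy (threshold-rule "trees" on a single feature, with invented data), not the estimator used in the paper.

```python
import random

def bagged_vote(train, x, n_trees=25, seed=1):
    """Toy bagging in the spirit of a random forest: each 'tree' is a
    one-feature threshold rule fit on a bootstrap resample of the
    (value, label) training pairs; predictions are majority-voted."""
    rng = random.Random(seed)
    votes = 0
    for _ in range(n_trees):
        boot = [rng.choice(train) for _ in train]
        pos = [xi for xi, yi in boot if yi == 1]
        neg = [xi for xi, yi in boot if yi == 0]
        if pos and neg:
            thr = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
        else:
            thr = 0.0  # degenerate resample: fall back to a fixed split
        votes += 1 if x > thr else 0
    return 1 if 2 * votes > n_trees else 0

# hypothetical (consumption, poor/non-poor) training pairs
train = [(0.0, 0), (1.0, 0), (9.0, 1), (10.0, 1)]
```

Averaging over many resampled trees is what gives the forest its robustness to any single noisy fit — the property the paper exploits for poverty prediction.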
  10. By: Jeannine Bailliu; Mark Kruger; Argyn Toktamyssov; Wheaton Welbourn
    Abstract: Given its size and importance for global commodity markets, the question of how fast the Chinese economy can grow over the medium term is an important one. This paper addresses this question by examining the evolution of the supply side of the Chinese economy over history and projecting how it will evolve over the next 15 years. Using a Cobb-Douglas production function, we decompose the growth of trend GDP into those of the capital stock, labour, human capital and total factor productivity (TFP) and then forecast trend output growth out to 2030 using a bottom-up approach based on forecasts that we build for each one of these factors. Our paper distinguishes itself from existing work in that we construct a forecast of Chinese TFP growth based on the aggregation of forecasts of its key determinants. Moreover, our analysis is based on a carefully constructed estimate of the Chinese productive capital stock and a measure of human capital – based on Chinese wage survey data – that better reflects the returns to education in China. Our results suggest that Chinese trend output growth will decelerate from around 7% currently to about 5% by 2030, and are consistent with a gradual rebalancing of the Chinese economy characterized by a decline in the investment rate.
    Keywords: Development economics, International topics, Potential output, Productivity
    JEL: E32 E22 E23
    Date: 2016
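The growth-accounting decomposition behind this paper's bottom-up forecast follows directly from the Cobb-Douglas production function: with Y = A * K^alpha * (hL)^(1-alpha), trend output growth is the weighted sum of the input growth rates plus TFP growth. The sketch below states that identity; the alpha value and growth rates are illustrative, not the paper's estimates.

```python
def trend_growth(g_tfp, g_k, g_h, g_l, alpha=0.4):
    """Cobb-Douglas growth accounting:
        g_Y = g_A + alpha * g_K + (1 - alpha) * (g_h + g_L)
    where alpha is the capital share (0.4 here is a placeholder)."""
    return g_tfp + alpha * g_k + (1 - alpha) * (g_h + g_l)

# illustrative inputs: 2% TFP growth, 6% capital-stock growth,
# 1% human-capital growth, flat labour input, capital share 0.5
g = trend_growth(0.02, 0.06, 0.01, 0.0, alpha=0.5)
```

The paper's contribution is to forecast each right-hand-side term separately out to 2030, so the deceleration in trend output growth falls out of the projected slowdowns in capital accumulation and TFP.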

This nep-for issue is ©2016 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.