nep-for New Economics Papers
on Forecasting
Issue of 2013‒09‒06
ten papers chosen by
Rob J Hyndman
Monash University

  1. Forecasting the US Real Private Residential Fixed Investment Using Large Number of Predictors By Goodness C. Aye; Rangan Gupta
  2. Forecasting key South African variables with a global VAR model By Annari de Waal; Renee van Eyden
  3. Extending Extended Logistic Regression to Effectively Utilize the Ensemble Spread By Jakob W. Messner; Georg J. Mayr; Achim Zeileis; Daniel S. Wilks
  4. Reverse Kalman Filtering US Inflation with Sticky Professional Forecasts By James M. Nason; Gregor W. Smith
  5. Adaptive Learning and Survey Data By Agnieszka Markiewicz; Andreas Pick
  6. Hermite Series Estimation in Nonlinear Cointegrating Models By Biqing Cai; Jiti Gao
  7. Inflation, unemployment, and labor force. Phillips curves and long-term projections for Japan By Kitov, Ivan; Kitov, Oleg
  8. Tracking global fuel supply, CO2 emissions and sustainable development By Liam Wagner; Ian Ross; John Foster; Ben Hankamer
  9. Learning from the past, predicting the statistics for the future, learning an evolving system By Daniel Levin; Terry Lyons; Hao Ni
  10. A Semiparametric Approach to Value-at-Risk, Expected Shortfall and Optimum Asset Allocation in Stock-Bond Portfolios By Xiangjin B. Chen; Param Silvapulle; Mervyn Silvapulle

  1. By: Goodness C. Aye (Department of Economics, University of Pretoria); Rangan Gupta (Department of Economics, University of Pretoria)
    Abstract: This paper employs classical bivariate, factor augmented (FA), spike-and-slab variable selection (SSVS)-based, and Bayesian semiparametric shrinkage (BSS)-based predictive regression models to forecast the US real private residential fixed investment series over an out-of-sample period of 1983Q1 to 2011Q2, based on an in-sample period of 1963Q1–1982Q4. Both large-scale (with 188 macroeconomic series) and small-scale (20 macroeconomic series) FA, SSVS and BSS predictive regressions, besides 20 bivariate regression models, are used in order to capture the influence of fundamentals in forecasting residential investment. We evaluate the ex post out-of-sample forecast performance of the 26 models using the relative average Mean Square Error for one-, two-, four- and eight-quarters-ahead forecasts and test their significance based on the McCracken (2004, 2007) MSE-F statistic. We find that, on average, the SSVS-Large model is the best amongst all the models. We also find that one of the individual regression models (based on houses for sale as a predictor, H4SALE) performs best at the four- and eight-quarters-ahead horizons. Finally, we use these two models to predict the relevant turning points of residential investment, via an ex ante forecast exercise from 2011Q3 to 2012Q4. The SSVS-Large model forecasts the turning points more accurately, though the H4SALE model does better towards the end of the sample. Our results suggest that it is best to consider economy-wide factors, in addition to specific housing market variables, when evaluating the real estate market.
    Keywords: Private residential investment, predictive regressions, factor-augmented models, Bayesian shrinkage, forecasting
    JEL: C32 E22 E27
    Date: 2013–08
  2. By: Annari de Waal (Department of Economics, University of Pretoria); Renee van Eyden (Department of Economics, University of Pretoria)
    Abstract: This study determines whether the global vector autoregressive (GVAR) approach provides better forecasts of key South African variables than a vector error correction model (VECM) augmented with foreign variables. The paper considers both a small GVAR model and a large GVAR model to determine the most appropriate GVAR for forecasting South African variables. We compare the recursive out-of-sample forecasts for South African GDP and inflation from five types of models: a general 33-country (large) GVAR, a customised small GVAR for South Africa, a VECM for South Africa with weakly exogenous foreign variables, autoregressive (AR) models and random walk models. The results show that the forecast performance of the large GVAR is generally superior to the performance of the customised small GVAR for South Africa. The forecasts of both the GVAR models tend to be better than the forecasts of the augmented VECM, especially at longer forecast horizons. We conclude that despite the complicated nature of the GVAR model with the inclusion of many countries and trade linkages, the additional information is useful for forecasting domestic variables for South Africa.
    Keywords: South Africa, global vector autoregressive (GVAR) model, forecasting
    JEL: C51 C53
    Date: 2013–08
  3. By: Jakob W. Messner; Georg J. Mayr; Achim Zeileis; Daniel S. Wilks
    Abstract: To achieve well-calibrated probabilistic forecasts, ensemble forecasts often need to be statistically post-processed. One recent ensemble-calibration method is extended logistic regression, which extends the popular logistic regression to yield full probability distribution forecasts. Although the purpose of this method is to post-process ensemble forecasts, usually only the ensemble mean is used as a predictor variable, whereas the ensemble spread is neglected because it does not improve the forecasts. In this study we show that when the ensemble spread is simply used as an ordinary predictor variable in extended logistic regression, it only affects the location but not the variance of the predictive distribution. Uncertainty information contained in the ensemble spread is therefore not utilized appropriately. To address this drawback we propose a simple new approach in which the ensemble spread is used directly to predict the dispersion of the predictive distribution. With wind speed data and ensemble forecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF) we show that, with this approach, the ensemble spread can be used effectively to improve forecasts from extended logistic regression.
    Keywords: probabilistic forecasting, extended logistic regression, heteroskedasticity, ensemble spread
    JEL: C53 C25 Q42
    Date: 2013–08
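    The heteroskedastic extension described in the abstract can be illustrated with a minimal numerical sketch: the ensemble mean drives the location of a logistic predictive distribution, while the ensemble spread enters the scale. The function names, the square-root transformation of the threshold, and the coefficient values below are illustrative assumptions, not the authors' fitted specification.

```python
import numpy as np

def logistic_cdf(z):
    # standard logistic CDF
    return 1.0 / (1.0 + np.exp(-z))

def heteroskedastic_xlr_prob(q, ens_mean, ens_spread, a, b, c):
    """P(Y <= q): location driven by the ensemble mean, dispersion of the
    predictive distribution driven by the ensemble spread.
    a: intercept, b: coefficient on the mean, c: coefficient on log-spread.
    Coefficients are placeholders; in practice they would be fit by maximum
    likelihood on past forecast/observation pairs."""
    location = a + b * ens_mean
    scale = np.exp(c * np.log(ens_spread))  # spread enters the scale only
    return logistic_cdf((np.sqrt(q) - location) / scale)
```

    With a larger spread the predictive distribution flattens and threshold probabilities are pulled toward 0.5 — exactly the behaviour that using the spread as an ordinary location predictor cannot produce.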
  4. By: James M. Nason (Federal Reserve Bank of Philadelphia); Gregor W. Smith (Queen's University)
    Abstract: We provide a new way to filter US inflation into trend and cycle components, based on extracting long-run forecasts from the Survey of Professional Forecasters. We operate the Kalman filter in reverse, beginning with observed forecasts, then estimating parameters, and then extracting the stochastic trend in inflation. The trend-cycle model with unobserved components is consistent with numerous studies of US inflation history and is of interest partly because the trend may be viewed as the Fed’s evolving inflation target or long-horizon expected inflation. The sluggish reporting attributed to forecasters is consistent with evidence on mean forecast errors. We find considerable evidence of inflation-gap persistence and some evidence of implicit sticky information. But statistical tests show we cannot reconcile these two widely used perspectives on US inflation forecasts, the unobserved-components model and the sticky-information model.
    Keywords: US inflation, professional forecasts, sticky information, Beveridge-Nelson
    JEL: E31 E37
    Date: 2013–09
  5. By: Agnieszka Markiewicz (Erasmus University Rotterdam); Andreas Pick (Erasmus University Rotterdam and De Nederlandsche Bank)
    Abstract: This paper investigates the ability of the adaptive learning approach to replicate the expectations of professional forecasters. For a range of macroeconomic and financial variables, we compare constant and decreasing gain learning models to simple, yet powerful benchmark models. We find that both constant and decreasing gain models provide a good fit for the expectations of professional forecasters for a range of variables. These results suggest that, instead of relying only on the most recent observation, agents use more complex models to form their expectations, even for financial variables where random walk forecasts are often difficult to beat.
    Keywords: expectations, survey of professional forecasters, adaptive learning, bounded rationality
    JEL: E37 E44 G14 G15
    Date: 2013–02–28
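    The constant and decreasing gain learning schemes compared in the paper share one recursive update; only the gain sequence differs. A minimal sketch for the simplest case of learning a scalar mean (the function name is an illustrative assumption):

```python
import numpy as np

def adaptive_learning_path(observations, gain=None):
    """Recursive belief updating: theta_t = theta_{t-1} + g_t * (x_t - theta_{t-1}).
    gain=None uses the decreasing gain g_t = 1/t (the recursive sample mean);
    a float in (0, 1) gives the constant-gain variant, which discounts
    old observations geometrically."""
    theta = observations[0]
    path = [theta]
    for t, x in enumerate(observations[1:], start=2):
        g = (1.0 / t) if gain is None else gain
        theta = theta + g * (x - theta)
        path.append(theta)
    return np.array(path)
```

    With gain 1/t the scheme converges to the sample mean; a constant gain keeps weight on recent data, which lets beliefs track structural change at the cost of extra variance.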
  6. By: Biqing Cai; Jiti Gao
    Abstract: This paper discusses nonparametric series estimation of integrable cointegration models using Hermite functions. We establish the uniform consistency and asymptotic normality of the series estimator. The Monte Carlo simulation results show that the performance of the estimator is numerically satisfactory. We then apply the estimator to estimate the stock return predictive function. The out-of-sample evaluation results suggest that dividend yield has nonlinear predictive power for stock returns while book-to-market ratio and earnings-price ratio have little predictive power.
    Keywords: Cointegration, Hermite Functions, Return Predictability, Series Estimator, Unit Root
    Date: 2013
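    Series estimation with Hermite functions amounts to least-squares regression on a truncated basis of Hermite polynomials damped by a Gaussian factor. A generic sketch of that idea — not the authors' exact estimator, and with no claim about how they choose the truncation order:

```python
import numpy as np
from numpy.polynomial.hermite import hermvander

def hermite_series_fit(x, y, order):
    """Least-squares fit of y on the Hermite functions
    H_k(x) * exp(-x^2 / 2), k = 0..order."""
    basis = hermvander(x, order) * np.exp(-0.5 * x**2)[:, None]
    coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
    return coef

def hermite_series_predict(x, coef):
    """Evaluate the fitted series at new points x."""
    basis = hermvander(x, len(coef) - 1) * np.exp(-0.5 * x**2)[:, None]
    return basis @ coef
```

    The Gaussian damping makes each basis function integrable, which is what suits this basis to the integrable regression functions studied in the paper.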
  7. By: Kitov, Ivan; Kitov, Oleg
    Abstract: The evolution of the rate of price inflation, π(t), and unemployment, u(t), in Japan has been modeled within the Phillips curve framework. As an extension to the Phillips curve, we represent both variables as linear functions of the change rate of the labor force. All models were first estimated in 2005 for the period between 1980 and 2003. Here we update these original models with data through 2012. The revisited models accurately describe disinflation during the 1980s and 1990s as well as the whole deflationary period that started in the late 1990s. The Phillips curve for Japan confirms the original concept that growing unemployment results in decreasing inflation. A linear and lagged generalized Phillips curve, expressed as a link between inflation, unemployment, and labor force, has also been re-estimated and validated by new data. Labor force projections allow a long-term inflation and unemployment forecast: the GDP deflator will be negative (between -0.5% and -2% per year) during the next 40 years. The rate of unemployment will increase from ~4.3% in 2012 to ~5.5% in 2050.
    Keywords: inflation, unemployment, labor force, Phillips curve, forecasting, Japan
    JEL: E3 E37 E5 E52
    Date: 2013–08–30
  8. By: Liam Wagner (Department of Economics, University of Queensland); Ian Ross (IMB, University of Queensland); John Foster (Department of Economics, University of Queensland); Ben Hankamer (IMB, University of Queensland)
    Abstract: Reducing CO2 emissions is imperative to stay within the 2°C global warming ‘safe limit’ of the Intergovernmental Panel on Climate Change. However, to ensure social and political stability, these reductions must be aligned with fuel security and economic growth. Here an advanced multifactorial model is used to forecast global energy demand, based on global population, current energy use and economic growth rates, allowing a critical analysis of global energy use patterns. A severe upward pressure on global energy demand results from the combined interplay of increasing population and continuing economic growth. The predictive output highlights (i) the potential for an exponential increase in fuel consumption, (ii) serious fossil fuel limitations from 2033 onward, (iii) implications for CO2 emission reduction in a ‘pro-growth’ global economy and (iv) poverty alleviation. These findings place economists and environmentalists on the same side and establish a reference to guide sustainable development.
    Keywords: Energy Demand; Fossil Fuels; Economic Growth; Climate Change; Equilibrium Correction Model; Time Series
    JEL: Q41 Q32 Q43 C53 O13 O44
    Date: 2013–08
  9. By: Daniel Levin; Terry Lyons; Hao Ni
    Abstract: Regression analysis aims to use observational data from multiple observations to develop a functional relationship relating explanatory variables to response variables, which is important for much of modern statistics and econometrics, as well as the field of machine learning. In this paper, we consider the special case where the explanatory variable is a stream of information, and the response is also potentially a stream. We provide an approach based on identifying carefully chosen features of the stream which allow linear regression to be used to characterise the functional relationship between explanatory variables and the conditional distribution of the response. The methods used to develop and justify this approach, such as the signature of a stream and the shuffle product of tensors, are standard tools in the theory of rough paths; they seem appropriate in this regression context as well and provide a surprisingly unified and non-parametric approach. To illustrate the approach, we consider the problem of using data to predict the conditional distribution of the near future of a stationary, ergodic time series and compare it with probabilistic approaches based on first fitting a model. We believe our reduction of this regression problem for streams to a linear problem is clean, systematic, and efficient in minimizing the effective dimensionality. The clear gradation of finite-dimensional approximations increases its usefulness. Although the approach is non-parametric, it presents itself in computationally tractable and flexible restricted forms in the examples we considered. Popular techniques in time series analysis such as AR, ARCH and GARCH can be seen as special cases of our approach, but it is not clear whether they are always the best or most informative choices.
    Date: 2013–09
  10. By: Xiangjin B. Chen; Param Silvapulle; Mervyn Silvapulle
    Abstract: This paper investigates stock-bond portfolios’ tail risks such as value-at-risk (VaR) and expected shortfall (ES), and the way in which these measures have been affected by the global financial crisis. The semiparametric t-copula is found to be adequate for modelling stock-bond joint distributions of G7 countries and Australia. Empirical results show that weak (negative) dependence has increased for seven countries after the crisis, while it has decreased for Italy. However, both VaR and ES have increased for all eight countries. Before the crisis, the minimum portfolio VaR and ES were achieved at an interior solution only for the US, the UK, Australia, Canada and Italy. After the crisis, the corner solution was found for all eight countries. Evidence of “flight to quality” and “safety first” investor behaviour was found to be strong after the global financial crisis. The semiparametric t-copula adequately forecasts the out-of-sample VaR. These findings have implications for global financial regulators and the Basel Committee, whose central focus is currently on increasing capital requirements as a consequence of the recent global financial crisis.
    Keywords: Copula; Semiparametric method; Value-at-Risk; Investment decision
    Date: 2013
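    For reference, the two tail-risk measures studied here have simple empirical (historical-simulation) counterparts. The paper's semiparametric t-copula approach would instead first simulate joint stock-bond portfolio returns from the fitted copula and then apply the same definitions; the function name below is illustrative.

```python
import numpy as np

def var_es(returns, alpha=0.05):
    """Empirical value-at-risk and expected shortfall at level alpha,
    reported as positive losses. VaR is the (1 - alpha) loss quantile;
    ES is the average loss beyond the VaR, so ES >= VaR by construction."""
    losses = -np.asarray(returns, dtype=float)
    var = np.quantile(losses, 1.0 - alpha)
    es = losses[losses >= var].mean()  # average of the worst alpha tail
    return var, es
```

    ES averages over the whole tail rather than reading off a single quantile, which is why it responds more strongly than VaR when crisis-period dependence fattens the joint lower tail.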

This nep-for issue is ©2013 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.