nep-for New Economics Papers
on Forecasting
Issue of 2013‒06‒30
eleven papers chosen by
Rob J Hyndman
Monash University

  1. Nowcasting French GDP in Real-Time from Survey Opinions: Information or Forecast Combinations? By Bec, F.; Mogliani, M.
  2. Using Newspapers for Tracking the Business Cycle: A comparative study for Germany and Switzerland By David Iselin; Boriss Siliverstovs
  3. Outperforming the naïve Random Walk forecast of foreign exchange daily closing prices using Variance Gamma and normal inverse Gaussian Levy processes By Teneng, Dean
  4. Moving Average Stochastic Volatility Models with Application to Inflation Forecast By Joshua C.C. Chan
  5. Dynamic mixture-of-experts models for longitudinal and discrete-time survival data By Quiroz, Matias; Villani, Mattias
  6. Bayesian bandwidth selection for a nonparametric regression model with mixed types of regressors By Xibin Zhang; Maxwell L. King; Han Lin Shang
  7. Vector Autoregression with Mixed Frequency Data By Qian, Hang
  8. Inference and forecasting in the age-period-cohort model with unknown exposure with an application to mesothelioma mortality By Neil Shephard
  9. The Structure of Consumer Taste Heterogeneity in Revealed vs. Stated Preference Data By Michael P. Keane; Nada Wasi
  10. The Future of Global Poverty in a Multi-Speed World: New Estimates of Scale and Location, 2010–2030 By Peter Edward; Andy Sumner
  11. A political economy model of the vertical fiscal gap and vertical fiscal imbalances in a federation By Bev Dahlby; Jonathan Rodden

  1. By: Bec, F.; Mogliani, M.
    Abstract: This paper investigates the predictive accuracy of two alternative forecasting strategies: forecast combination and information combination. In theory, there should be no role for forecast combinations in a world where information sets can be combined instantaneously and costlessly. However, following recent work arguing that this result holds in population but not necessarily in small samples, our paper questions this postulate empirically in a real-time, mixed-frequency framework. An application to the quarterly growth rate of French GDP reveals that, given a set of predictive models involving coincident indicators, a simple average of individual forecasts outperforms the individual forecasts, as long as no individual model encompasses the others. Furthermore, the simple average of individual forecasts outperforms, or is statistically equivalent to, more sophisticated forecast combination schemes. However, when a predictive encompassing model is obtained by combining information sets, this model outperforms the most accurate forecast combination strategy.
    Keywords: Forecast Combinations, Pooling Information, Macroeconomic Nowcasting, Real-time data, Mixed-frequency data.
    JEL: C22 C52 C53 E37
    Date: 2013
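    The combination exercise described above can be illustrated with a small sketch (in Python, on simulated data; the series, the two models, and the window sizes below are hypothetical stand-ins for the paper's French GDP setup): two individual one-step-ahead forecasts are produced on an expanding window and then pooled with equal weights.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated quarterly growth series (hypothetical stand-in for French GDP)
T = 120
y = np.empty(T)
y[0] = 0.5
for t in range(1, T):
    y[t] = 0.3 + 0.4 * y[t - 1] + rng.normal(scale=0.5)

def rmse(actual, pred):
    return float(np.sqrt(np.mean((np.asarray(actual) - np.asarray(pred)) ** 2)))

# Two individual one-step-ahead forecasts, re-estimated on an expanding window:
# model A is an AR(1) fitted by OLS; model B is the recursive sample mean.
split = 80
f_a, f_b, actual = [], [], []
for t in range(split, T):
    y_past = y[:t]
    X = np.column_stack([np.ones(t - 1), y_past[:-1]])
    beta = np.linalg.lstsq(X, y_past[1:], rcond=None)[0]
    f_a.append(beta[0] + beta[1] * y_past[-1])
    f_b.append(y_past.mean())
    actual.append(y[t])

# Equal-weight forecast combination: the simple average of individual forecasts
f_comb = (np.array(f_a) + np.array(f_b)) / 2
print(rmse(actual, f_a), rmse(actual, f_b), rmse(actual, f_comb))
```

    In the paper's terminology, information combination would instead pool the two models' regressors into a single encompassing regression before forecasting.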
  2. By: David Iselin (KOF Swiss Economic Institute, ETH Zurich, Switzerland); Boriss Siliverstovs (KOF Swiss Economic Institute, ETH Zurich, Switzerland)
    Abstract: On the basis of keyword searches in newspaper articles, we construct several versions of a Recession-word Index (RWI) for Germany and Switzerland and use these indices to track business cycle dynamics in the two countries. Our main findings are as follows. First, augmenting benchmark autoregressive models with the RWI generally improves the accuracy of one-step-ahead forecasts of GDP growth relative to the benchmark model. Second, in both countries the out-of-sample accuracy of RWI-augmented models is comparable to that of models augmented with established economic indicators, such as the Ifo Business Climate Index and the ZEW Indicator of Economic Sentiment for Germany, and the KOF Economic Barometer and the manufacturing Purchasing Managers' Index for Switzerland. Third, the RWI-based forecasts are more accurate than the consensus forecasts (published by Consensus Economics Inc.) for Switzerland, whereas we reach the opposite conclusion for Germany; indeed, the accuracy of the consensus forecasts of German GDP growth appears superior to that of any other indicator considered in our study. These results are robust to changes in estimation/forecast samples, the use of rolling versus expanding estimation windows, and the inclusion of a web-based recession indicator extracted from Google Trends in the set of competing models.
    Keywords: Nowcasting, Recession, R-word Index, Google Trends
    JEL: C22 C53
    Date: 2013–06
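    A minimal sketch of how a recession-word index can be built from keyword searches (the tiny corpus below is invented for illustration; the paper's index is constructed from full newspaper archives):

```python
import re
from collections import Counter

# Hypothetical mini-corpus: (month, article text) pairs standing in for a
# newspaper archive searched for the keyword "recession".
articles = [
    ("2009-01", "Fears of a deep recession dominated the headlines."),
    ("2009-01", "The recession may last longer than expected."),
    ("2009-02", "Exports recovered despite recession worries."),
    ("2010-06", "Growth returned and hiring picked up."),
]

def recession_word_index(articles):
    """Share of articles per period that mention the word 'recession'."""
    hits, totals = Counter(), Counter()
    for period, text in articles:
        totals[period] += 1
        if re.search(r"\brecession\b", text, flags=re.IGNORECASE):
            hits[period] += 1
    return {p: hits[p] / totals[p] for p in totals}

rwi = recession_word_index(articles)
print(rwi)  # {'2009-01': 1.0, '2009-02': 1.0, '2010-06': 0.0}
```

    In the paper's setup, the resulting index then enters as an additional regressor in a benchmark autoregression of GDP growth.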
  3. By: Teneng, Dean
    Abstract: This work demonstrates that forecasts of foreign exchange (FX) daily closing prices using the normal inverse Gaussian (NIG) and variance gamma (VG) Lévy processes outperform the naïve random walk model. We use the open-source software R to estimate NIG and VG distribution parameters and perform several classical goodness-of-fit tests to select the best models. Seven currency pairs can be forecasted by both Lévy processes: TND/GBP, EGP/EUR, EUR/GBP, EUR/JPY, JOD/JPY, USD/GBP, and XAU/USD, while USD/JPY and QAR/JPY can be forecasted with the VG process only. RMSE values show that the NIG and VG forecasts are comparable, and both outperform the naïve random walk out of sample. The appended R code is original.
    Keywords: Levy process, NIG, VG, forecasting, goodness of fits, foreign exchange, Random Walk model
    JEL: C44 C52 C53
    Date: 2013–06–26
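    A sketch of the comparison (in Python with SciPy rather than the paper's R code; the simulated series and all parameter values are illustrative, not the paper's data): fit an NIG distribution to log-returns and compare one-step-ahead price forecasts against the naïve random walk by RMSE.

```python
import numpy as np
from scipy.stats import norminvgauss

rng = np.random.default_rng(7)

# Simulated daily log-returns with NIG-distributed innovations
# (a hypothetical stand-in for an FX closing-price series).
a, b, loc, scale = 2.0, 0.5, 0.0005, 0.01
returns = norminvgauss.rvs(a, b, loc=loc, scale=scale, size=500, random_state=rng)
prices = 100 * np.exp(np.cumsum(returns))

train, test = prices[:400], prices[400:]
train_returns = np.diff(np.log(train))

# NIG-based one-step forecast: last observed price scaled by the expected
# return under the fitted distribution (generic MLE via scipy's fit).
params = norminvgauss.fit(train_returns)
mu = norminvgauss.mean(*params)

def rmse(x, y):
    return float(np.sqrt(np.mean((np.asarray(x) - np.asarray(y)) ** 2)))

last = np.concatenate([[train[-1]], test[:-1]])  # price observed one day before
f_nig = last * np.exp(mu)                        # NIG forecast
f_rw = last                                      # naïve random walk forecast
print(rmse(test, f_nig), rmse(test, f_rw))
```

    The paper additionally screens candidate distributions with goodness-of-fit tests before forecasting; that step is omitted here.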
  4. By: Joshua C.C. Chan
    Abstract: We introduce a new class of models featuring both stochastic volatility and moving average errors, where the conditional mean has a state space representation. The moving average component, however, means that the errors in the measurement equation are no longer serially independent, which makes estimation more difficult. We develop a posterior simulator that builds upon recent advances in precision-based algorithms for estimating these new models. In an empirical application involving U.S. inflation, we find that these moving average stochastic volatility models provide better in-sample fit and out-of-sample forecast performance than the standard variants with only stochastic volatility.
    Keywords: state space, unobserved components model, precision, sparse, density forecast.
    JEL: C11 C51 C53
    Date: 2013–05
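    The role of the moving average component can be illustrated with a short simulation (parameter values are illustrative, not estimates from the paper): with u_t = eps_t + psi*eps_{t-1} and eps_t ~ N(0, exp(h_t)), the measurement errors are serially correlated even though the underlying innovations are not, which is exactly what complicates estimation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate the measurement errors of an MA(1) stochastic volatility model:
# u_t = eps_t + psi * eps_{t-1}, eps_t ~ N(0, exp(h_t)), with h_t following
# a stationary AR(1) (stochastic log-volatility). Parameters are illustrative.
T, psi = 2000, 0.4
mu_h, phi_h, sigma_h = -1.0, 0.95, 0.2
h = np.empty(T)
h[0] = mu_h
for t in range(1, T):
    h[t] = mu_h + phi_h * (h[t - 1] - mu_h) + sigma_h * rng.normal()
eps = np.exp(h / 2) * rng.normal(size=T)
u = eps.copy()
u[1:] += psi * eps[:-1]

def acf1(x):
    """First-order sample autocorrelation."""
    x = x - x.mean()
    return float(np.sum(x[1:] * x[:-1]) / np.sum(x * x))

# eps is serially uncorrelated, but u inherits correlation psi/(1+psi^2):
print(acf1(eps), acf1(u))
```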
  5. By: Quiroz, Matias (Research Department, Central Bank of Sweden); Villani, Mattias (Linköping University)
    Abstract: We propose a general class of flexible models for longitudinal data, with special emphasis on discrete-time survival data. The model is a finite mixture model in which subjects are allowed to move between components over time. The time-varying probability of component membership is modeled as a function of subject-specific time-varying covariates, which allows for interesting within-subject dynamics and manageable computations even with a large number of subjects. Each parameter in the component densities and in the mixing function is connected to its own set of covariates through a link function. The models are estimated using a Bayesian approach via a highly efficient Markov chain Monte Carlo (MCMC) algorithm with tailored proposals and variable selection in all sets of covariates. The focus of the paper is on models for discrete-time survival data, with an application to bankruptcy prediction for Swedish firms using both exponential and Weibull mixture components. The dynamic mixture-of-experts models are shown to have an interesting interpretation and to dramatically improve out-of-sample predictive density forecasts compared to models with time-invariant mixture probabilities.
    Keywords: Bayesian inference; Markov Chain Monte Carlo; Bayesian variable selection; Survival Analysis; Mixture-of-experts
    JEL: C11 C41 D21 G33
    Date: 2013–05–01
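    A minimal sketch of the time-varying mixing idea (the gate coefficients and component hazards below are invented for illustration, not estimates from the paper): a logistic gate moves a subject between a low-hazard and a high-hazard component as its covariate changes over time.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two-component mixture-of-experts hazard for discrete-time survival:
# the gate (component membership probability) depends on a time-varying,
# subject-specific covariate x_t through a logit link, as in the paper's
# mixing function. All parameter values here are illustrative.
gamma = np.array([-0.5, 1.2])   # gate coefficients (intercept, covariate)
h1, h2 = 0.02, 0.15             # per-period hazards of the two components

def hazard(x_t):
    pi = sigmoid(gamma[0] + gamma[1] * x_t)  # prob. of component 2 at time t
    return (1 - pi) * h1 + pi * h2

# A firm whose covariate (e.g. leverage) rises over time drifts toward the
# high-hazard component, so its per-period failure probability increases:
for x in [-2.0, 0.0, 2.0]:
    print(x, round(hazard(x), 4))
```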
  6. By: Xibin Zhang; Maxwell L. King; Han Lin Shang
    Abstract: We propose a sampling approach to bandwidth estimation for a nonparametric regression model with continuous and discrete regressors and unknown error density. The unknown error density is approximated by a location mixture of Gaussian densities whose means are the individual errors and whose variance is a constant parameter; this density takes the form of a kernel density estimator of the errors with bandwidth equal to the common standard deviation. We derive an approximate likelihood and posterior for the bandwidth parameters, and a sampling algorithm is developed. Monte Carlo simulation studies show that the proposed Bayesian sampling approach yields more accurate estimators, especially of the error density, than cross-validation. We apply the proposed sampling method to bandwidth estimation for a nonparametric regression of the Australian All Ordinaries (Aord) daily return on the overnight S&P 500 return and an indicator of the FTSE return. With the estimated bandwidths, we obtain the one-day-ahead density forecast of the Aord return and a distribution-free measure of value-at-risk. We also use the proposed sampling method to estimate bandwidths for the kernel estimator of the joint density of GDP growth rate, its year level and OECD status.
    Keywords: cross-validation, exceedance, Nadaraya-Watson estimator, random-walk Metropolis algorithm, unknown error density, value-at-risk
    Date: 2013
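    A toy version of the sampling idea (a Gaussian-error approximation with a flat prior on the log bandwidth, on simulated data; the paper's actual likelihood, priors, and mixed-regressor setup differ): random-walk Metropolis draws of a single Nadaraya-Watson bandwidth.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (a hypothetical stand-in for the paper's return series)
n = 80
x = rng.uniform(-2, 2, n)
y = np.sin(x) + rng.normal(scale=0.3, size=n)

def nw_loo(x, y, h):
    """Leave-one-out Nadaraya-Watson fitted values with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    np.fill_diagonal(w, 0.0)
    return w @ y / w.sum(axis=1)

def log_post(log_h):
    """Gaussian-error approximation to the log posterior of the bandwidth
    (flat prior on log h); a simplification of the paper's likelihood."""
    h = np.exp(log_h)
    resid = y - nw_loo(x, y, h)
    return -0.5 * n * np.log(np.mean(resid ** 2))

# Random-walk Metropolis on log h
log_h, lp, draws = np.log(0.5), log_post(np.log(0.5)), []
for _ in range(2000):
    prop = log_h + rng.normal(scale=0.3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # NaN proposals are rejected
        log_h, lp = prop, lp_prop
    draws.append(np.exp(log_h))

print("posterior mean bandwidth:", np.mean(draws[500:]))
```

    The paper samples bandwidths for every regressor type jointly, plus the error-density bandwidth; this sketch keeps only the single-bandwidth mechanics.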
  7. By: Qian, Hang
    Abstract: Three new approaches are proposed to handle mixed-frequency vector autoregressions. The first is an explicit solution for the likelihood and posterior distribution. The second is a parsimonious, time-invariant, and invertible state space form. The third is a parallel Gibbs sampler without forward filtering and backward sampling. The three methods are unified in that all exploit the fact that mixed-frequency observations impose linear constraints on the distribution of the high-frequency latent variables. In a simulation study comparing the approaches, the parallel Gibbs sampler outperforms the others. A financial application to yield curve forecasting is conducted using mixed-frequency macro-finance data.
    Keywords: VAR, Temporal aggregation, State space, Parallel Gibbs sampler
    JEL: C11 C32 C82
    Date: 2013–06
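    The unifying fact, that mixed-frequency observations impose linear constraints on the high-frequency latent variables, can be illustrated with a small Gaussian conditioning step (all numbers are illustrative): observing a quarterly average pins down the conditional distribution of the three monthly values.

```python
import numpy as np

# Three monthly latent values z ~ N(mu, Sigma); we only observe their
# quarterly average q = C z. Conditioning on this linear constraint is a
# standard Gaussian update, the mechanism all three approaches exploit.
mu = np.array([0.2, 0.3, 0.1])
Sigma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 1.0, 0.5],
                  [0.2, 0.5, 1.0]])
C = np.full((1, 3), 1 / 3)          # temporal aggregation: quarterly mean
q = np.array([0.4])                 # observed quarterly value

S = C @ Sigma @ C.T                 # variance of the aggregated observation
K = Sigma @ C.T @ np.linalg.inv(S)  # gain for the linear constraint
mu_cond = mu + (K @ (q - C @ mu)).ravel()
Sigma_cond = Sigma - K @ C @ Sigma

print(mu_cond)       # conditional monthly means
print(C @ mu_cond)   # reproduces the observed quarterly average exactly
```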
  8. By: Neil Shephard (Nuffield College, Oxford and Economics Department, University of Oxford)
    Abstract: Background: There has been extensive discussion of the workings of the English system of higher education income-contingent student loans. Major focuses have been on what former students are likely to pay and when, on distributional characteristics, and on how much the Government guarantees made to students about having their loans forgiven after 30 years are likely to cost the budget of the Department for Business, Innovation and Skills (BIS) in the longer term. Leading contributions to this work include Barr (2004), Goodman et al. (2008), the BIS Ready Reckoner (2012) and Chowdry et al. (2012). Here we look at a vital but entirely unstudied area: the actual cost the Government faces in financing these loans through borrowing in the gilts market. We use a financially conventional "liabilities matching" approach, just as we would if we were trying to match or value pension obligations. To do this we identify financial instruments which proxy the behaviour of the time series of former students' expected repayments. This allows us to estimate each year the direct cost to the state of providing these student loans. The results are remarkably different from the conventional calculations used by H.M. Treasury to charge BIS in its departmental account. The reason for this is very simple to explain: it relies entirely on the next observation. Once that is accepted, all of the other arguments are conventional economics and the conclusions follow immediately.
    Date: 2013–05–08
  9. By: Michael P. Keane (University of Oxford, Nuffield College); Nada Wasi (University of Michigan, Survey Research Center)
    Abstract: In recent years it has become common to use stated preference (SP) discrete choice experiments (DCEs) to study and/or predict consumer demand. SP is particularly useful when revealed preference (RP) data are unobtainable or uninformative (e.g., to predict demand for a new product with an attribute not present in existing products, or to value non-traded goods). The increasing use of SP data has led to a growing body of research comparing SP and RP demand predictions in contexts where both are available. The present paper goes further by comparing the structure of consumer taste heterogeneity in SP vs. RP data. Our results suggest the nature of taste heterogeneity is very different: in SP data consumers are much more likely to exhibit either (i) lexicographic preferences or (ii) "random" choice behavior, and many consumers appear to be fairly insensitive to price. This suggests that caution should be applied before using SP data to answer questions about the distribution of taste heterogeneity in actual markets.
    Keywords: Discrete choice experiments, Stated preference data, Discrete choice models, Consumer demand, Consumer heterogeneity, Mixture models
    JEL: D12 C35 C33 C91 M31
    Date: 2013–02–04
  10. By: Peter Edward; Andy Sumner
    Abstract: Smart decisions about where to focus poverty alleviation projects depend on accurate projections of where the bulk of the world's poor will be living in 10 or 20 years or more. So far, though, the picture has been murky: data limitations and an abundance of modeling strategies complicate forecasts and contribute to wide discrepancies. In this working paper, Peter Edward and Andy Sumner introduce a new model of growth, inequality, and poverty that allows comparison across a wide range of input assumptions. They find it plausible that $1.25 and $2 global poverty will fall substantially by 2030, and that the former, $1.25 poverty, could be very low by that time. Much depends, however, on economic growth and inequality trends; one scenario implies up to almost an extra billion people in $2 poverty. Where the world's poor will reside also depends on inequality trends and the methods used to estimate poverty. The authors find little compelling evidence that global poverty will be concentrated in low-income fragile states, and they suggest policymakers take into account the greater variety of country categories that will be home to the future of global poverty.
    Keywords: poverty, inequality, projections, methodology
    JEL: I32 D63
    Date: 2013–06
  11. By: Bev Dahlby (University of Calgary); Jonathan Rodden (Stanford University)
    Abstract: We develop a political economy model of intergovernmental transfers. Vertical fiscal balance occurs in a federation when the ratio of the marginal benefit of the public services provided by the federal and provincial governments is equal to their relative marginal costs of production. With majority voting in national elections, the residents of a "pivotal province" will determine the level of transfers such that the residents of that province achieve a vertical fiscal balance in spending by the two levels of government. We test the predictions of the model using Canadian time series data and cross-section data for nine federations.
    Keywords: Fiscal federalism, vertical fiscal imbalance, fiscal gap
    JEL: H71 H73 H77
    Date: 2013

This nep-for issue is ©2013 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.