nep-for New Economics Papers
on Forecasting
Issue of 2014‒07‒21
ten papers chosen by
Rob J Hyndman
Monash University

  1. Forecast Rationality Tests in the Presence of Instabilities, With Applications to Federal Reserve and Survey Forecasts By Barbara Rossi; Tatevik Sekhposyan
  2. Forecasting electricity spot prices using time-series models with a double temporal segmentation By Fouquau, Julien; Bessec, Marie; Méritet, Sophie
  3. Meeting our D€STINY. A Disaggregated €uro area Short Term INdicator model to forecast GDP (Y) growth By Pablo Burriel; María Isabel García-Belmonte
  4. Forecasting future oil production in Norway and the UK: a general improved methodology By Lucas Fievet; Zalán Forró; Peter Cauwels; Didier Sornette
  5. Adaptive Models and Heavy Tails By Davide Delle Monache; Ivan Petrella
  6. Exploiting the monthly data-flow in structural forecasting By Domenico Giannone; Francesca Monti; Lucrezia Reichlin
  7. Estimating and Forecasting the Yield Curve Using a Markov Switching Dynamic Nelson and Siegel Model By Constantino Hevia; Martin Gonzalez-Rozada; Martin Sola; Fabio Spagnolo
  8. The effects of scale, space and time on the predictive accuracy of land use models By Jean-Sauveur Ay; Raja Chakir; Julie Le Gallo
  9. Testing the Predictability of Consumption Growth: Evidence from China By Liping Gao; Hyeongwoo Kim
  10. Modeling Portfolio Risk by Risk Discriminatory Trees and Random Forests By Yang, Bill Huajian

  1. By: Barbara Rossi; Tatevik Sekhposyan
    Abstract: This paper proposes a framework to implement regression-based tests of predictive ability in unstable environments, including, in particular, forecast unbiasedness and efficiency tests, commonly referred to as tests of forecast rationality. Our framework is general: it can be applied to model-based forecasts obtained with either recursive or rolling window estimation schemes, as well as to forecasts that are model-free. The proposed tests provide more evidence against forecast rationality than previously found in the Federal Reserve's Greenbook forecasts as well as in survey-based private forecasts. They confirm, however, that the Federal Reserve has additional information about current and future states of the economy relative to market participants.
    Keywords: forecasting, forecast rationality, regression-based tests of forecasting ability, Greenbook forecasts, survey forecasts, real-time data
    JEL: C22 C52 C53
    Date: 2014–06
    URL: http://d.repec.org/n?u=RePEc:bge:wpaper:765&r=for
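    Sketch: a minimal Python illustration of the fixed-sample Mincer-Zarnowitz rationality regression that such tests build on (regress realizations on forecasts and jointly test intercept 0, slope 1); the paper's contribution, making the test robust to instabilities, is not reproduced here, and the data below are simulated placeholders.
```python
# Mincer-Zarnowitz regression: realized = a + b * forecast + error.
# Forecast rationality (unbiasedness and efficiency) implies a = 0 and b = 1.
import numpy as np
import statsmodels.api as sm

def mincer_zarnowitz(realized, forecast, maxlags=4):
    X = sm.add_constant(np.asarray(forecast))
    res = sm.OLS(np.asarray(realized), X).fit(cov_type="HAC", cov_kwds={"maxlags": maxlags})
    wald = res.wald_test("const = 0, x1 = 1", use_f=True)  # joint test of rationality
    return res.params, wald

# Simulated example: forecasts with a small bias, so rationality should be rejected
rng = np.random.default_rng(0)
f = rng.normal(2.0, 1.0, 200)
y = 0.3 + 0.9 * f + rng.normal(0.0, 0.5, 200)
print(mincer_zarnowitz(y, f))
```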
  2. By: Fouquau, Julien; Bessec, Marie; Méritet, Sophie
    Abstract: The French wholesale market is set to expand in the next few years under European pressure and national decisions. In this paper, we assess the forecasting ability of several classes of time series models for electricity wholesale spot prices at a day-ahead horizon in France. Electricity spot prices display a strong seasonal pattern, particularly in France given the high share of electric heating in housing during winter. To deal with this pattern, we implement a double temporal segmentation of the data. For each trading period and season, we estimate a large number of specifications based on market fundamentals: linear regressions, Markov-switching models, and smooth transition threshold models. Non-linear models designed to capture the sudden and fast-reverting spikes in the price dynamics yield more accurate forecasts. Modeling each season independently also leads to better results. Finally, pooling forecasts gives more reliable results: individual models are generally more accurate, but their performance is more unstable across hours and seasons.
    Keywords: Electricity spot prices; forecasting; regime-switching;
    JEL: C22 C24 Q43
    Date: 2014–03
    URL: http://d.repec.org/n?u=RePEc:dau:papers:123456789/13532&r=for
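    Sketch: a minimal Python illustration of the double temporal segmentation and forecast pooling described above: one model per (season, trading hour) segment and a simple average of several specifications' forecasts. The column names and the single linear specification are placeholder assumptions, not the authors' exact models.
```python
import pandas as pd
from sklearn.linear_model import LinearRegression

def fit_segmented(df: pd.DataFrame, fundamentals: list) -> dict:
    """Fit one day-ahead price model per (season, hour) segment of the data."""
    models = {}
    for (season, hour), seg in df.groupby(["season", "hour"]):
        models[(season, hour)] = LinearRegression().fit(seg[fundamentals], seg["price"])
    return models

def pooled_forecast(model_sets: list, key, x_new) -> float:
    """Pool the forecasts of several segmented model sets by simple averaging."""
    preds = [models[key].predict(x_new)[0] for models in model_sets]
    return sum(preds) / len(preds)
```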
  3. By: Pablo Burriel (Banco de España); María Isabel García-Belmonte (Banco de España)
    Abstract: In this paper we propose a new real-time forecasting model for euro area GDP growth, D€STINY, which attempts to bridge the existing gap in the literature between large- and small-scale dynamic factor models. By adopting a disaggregated modelling approach, D€STINY uses most of the information available for the euro area and the member countries (around 100 economic indicators) without incurring the finite-sample problems of large-scale methods, since all the estimated models are of a small scale. An empirical pseudo-real-time application for the period 2004-2013 shows that D€STINY's forecasting performance is clearly better than that of standard alternative models and of the publicly available forecasts of other institutions. This is especially true for the period since the beginning of the crisis, which suggests that our approach may be more robust to periods of highly volatile data and to the possible presence of structural breaks in the sample.
    Keywords: business cycles, output growth, time series, Euro-STING model, large-scale model
    JEL: E32 C22 E27
    Date: 2013–12
    URL: http://d.repec.org/n?u=RePEc:bde:wpaper:1323&r=for
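    Sketch: a minimal Python illustration of one small-scale building block of the kind D€STINY pools: a single common factor extracted from a few monthly indicators and bridged to quarterly GDP growth. The disaggregated country structure and the combination of many such blocks are not reproduced here.
```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def bridge_nowcast(monthly_indicators, quarterly_gdp_growth):
    """monthly_indicators: (3*T, k) array of standardized series; quarterly_gdp_growth: (T,)."""
    factor = PCA(n_components=1).fit_transform(monthly_indicators)[:, 0]
    quarterly_factor = factor.reshape(-1, 3).mean(axis=1)  # average months within each quarter
    model = LinearRegression().fit(quarterly_factor.reshape(-1, 1), quarterly_gdp_growth)
    return model  # predicting on the latest quarterly factor gives the now-cast
```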
  4. By: Lucas Fievet; Zalán Forró; Peter Cauwels; Didier Sornette
    Abstract: We present a new Monte-Carlo methodology to forecast the crude oil production of Norway and the U.K. based on a two-step process: (i) the nonlinear extrapolation of the current/past performance of individual oil fields and (ii) a stochastic model of the frequency of future oil field discoveries. Compared with the standard methodology, which tends to underestimate remaining oil reserves, our method gives a better description of future oil production, as validated by our back-tests starting in 2008. Specifically, we predict remaining reserves extractable until 2030 to be 188 +/- 10 million barrels for Norway and 98 +/- 10 million barrels for the UK, which are respectively 45% and 66% above the predictions using the standard methodology.
    Date: 2014–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1407.3652&r=for
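    Sketch: a minimal Python illustration of the two-step Monte-Carlo logic: (i) extrapolate production of existing fields, here with a simple exponential decline rather than the paper's nonlinear extrapolation, and (ii) draw future discoveries from a stochastic model, here Poisson arrivals with lognormal field sizes. All parameter values are placeholders.
```python
import numpy as np

def simulate_remaining_production(current_rates, decline, years, rng,
                                  discovery_rate=1.0, size_mu=2.0, size_sigma=1.0):
    """Return one Monte-Carlo draw of cumulative production over `years`."""
    t = np.arange(years)
    # (i) extrapolation of existing fields (exponential decline as a stand-in)
    total = sum(np.sum(r * np.exp(-decline * t)) for r in current_rates)
    # (ii) stochastic future discoveries
    n_new = rng.poisson(discovery_rate * years)
    total += np.sum(rng.lognormal(size_mu, size_sigma, n_new))
    return total

rng = np.random.default_rng(1)
draws = [simulate_remaining_production([10.0, 5.0], 0.1, 16, rng) for _ in range(10_000)]
print(np.mean(draws), np.std(draws))  # point forecast and uncertainty band
```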
  5. By: Davide Delle Monache (Queen Mary, University of London); Ivan Petrella (Department of Economics, Mathematics & Statistics, Birkbeck)
    Abstract: This paper proposes a novel and flexible framework to estimate autoregressive models with time-varying parameters. Our setup nests various adaptive algorithms that are commonly used in the macroeconometric literature, such as learning expectations and forgetting-factor algorithms. These are generalized along several directions: specifically, we allow for both Student-t distributed innovations and time-varying volatility. Meaningful restrictions are imposed on the model parameters so as to attain local stationarity and bounded mean values. The model is applied to the analysis of inflation dynamics. Allowing for heavy tails leads to a significant improvement in terms of fit and forecast. Moreover, it proves to be crucial for obtaining well-calibrated density forecasts.
    Keywords: Time-Varying Parameters, Score-driven Models, Heavy-Tails, Adaptive Algorithms, Inflation.
    JEL: C22 C51 C53 E31
    Date: 2014–07
    URL: http://d.repec.org/n?u=RePEc:bbk:bbkefp:1409&r=for
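    Sketch: a minimal Python illustration of a score-driven update for a time-varying AR(1) coefficient with Student-t innovations: large prediction errors are downweighted, which is the mechanism behind the robustness to heavy tails. Scale, degrees of freedom and the learning rate are fixed placeholders; the paper's full specification (time-varying volatility, stationarity restrictions) is not reproduced.
```python
import numpy as np

def filter_tv_ar1(y, phi0=0.5, sigma2=1.0, nu=5.0, learning_rate=0.02):
    """Filter a time-varying AR(1) coefficient with a Student-t score-driven recursion."""
    phi = np.empty(len(y))
    phi[0] = phi0
    for t in range(1, len(y)):
        e = y[t] - phi[t - 1] * y[t - 1]          # one-step prediction error
        # Score of the Student-t log-likelihood w.r.t. phi: the factor
        # (nu + 1) / (nu * sigma2 + e**2) shrinks the update for outliers.
        score = (nu + 1.0) * e * y[t - 1] / (nu * sigma2 + e ** 2)
        phi[t] = phi[t - 1] + learning_rate * score
    return phi
```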
  6. By: Domenico Giannone (Libera Università Internazionale degli Studi Sociali Guido Carli (LUISS); Centre for Economic Policy Research (CEPR)); Francesca Monti (Bank of England; Centre for Macroeconomics (CFM)); Lucrezia Reichlin (London Business School (LBS); Centre for Economic Policy Research (CEPR))
    Abstract: This paper shows how and when it is possible to obtain a mapping from a quarterly DSGE model to a monthly specification that maintains the same economic restrictions and has real coefficients. We use this technique to derive the monthly counterpart of the Gali et al. (2011) model. We then augment it with auxiliary macro indicators which, because of their timeliness, can be used to obtain a now-cast of the structural model. We show empirical results for the quarterly growth rate of GDP, the monthly unemployment rate and the welfare-relevant output gap defined in Gali, Smets and Wouters (2011). The results show that the augmented monthly model does best for now-casting.
    Keywords: DSGE models, forecasting, temporal aggregation, mixed frequency data, large datasets
    JEL: C33 C53 E30
    Date: 2014–06
    URL: http://d.repec.org/n?u=RePEc:cfm:wpaper:1416&r=for
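    Sketch: a minimal Python illustration of the temporal-aggregation link that mixed-frequency now-casting relies on: quarterly growth approximated as a weighted sum of monthly growth rates (the standard triangle weights). This is only the aggregation step, not the paper's mapping of a full DSGE model to monthly frequency.
```python
import numpy as np

# Quarterly q-on-q growth in month t is approximately
#   (1/3) x_t + (2/3) x_{t-1} + x_{t-2} + (2/3) x_{t-3} + (1/3) x_{t-4},
# where x is monthly growth (log-difference).
TRIANGLE_WEIGHTS = np.array([1 / 3, 2 / 3, 1.0, 2 / 3, 1 / 3])

def quarterly_growth_from_monthly(monthly_growth):
    """monthly_growth: the five most recent monthly growth rates, ordered t-4 .. t."""
    return float(TRIANGLE_WEIGHTS @ np.asarray(monthly_growth))
```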
  7. By: Constantino Hevia (World Bank; Universidad Torcuato Di Tella); Martin Gonzalez-Rozada (Universidad Torcuato Di Tella); Martin Sola (Birkbeck, University of London; Universidad Torcuato Di Tella); Fabio Spagnolo (Brunel University)
    Abstract: We estimate versions of the Nelson-Siegel model of the yield curve of U.S. government bonds using a Markov switching latent variable model that allows for discrete changes in the stochastic process followed by interest rates. Our modelling approach is motivated by evidence suggesting the existence of breaks in the behaviour of the U.S. yield curve that depend, for example, on whether the economy is in a recession or a boom, or on the stance of monetary policy. Our model is parsimonious, relatively easy to estimate, and flexible enough to match the changing shapes of the yield curve over time. We also derive the discrete-time no-arbitrage restrictions for the Markov switching model. We compare the forecasting performance of these models with that of the standard dynamic Nelson and Siegel model and an extension that allows the decay-rate parameter to be time-varying. We show that some parameterizations of our model with regime shifts outperform the single-regime Nelson and Siegel model and other standard empirical models of the yield curve.
    Keywords: Yield Curve, Term structure of interest rates, Markov regime switching, Maximum likelihood, Risk premium.
    JEL: C13 C22 E43
    Date: 2014–07
    URL: http://d.repec.org/n?u=RePEc:bbk:bbkcam:1403&r=for
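    Sketch: a minimal Python illustration of the Nelson-Siegel yield curve that the Markov switching model builds on: level, slope and curvature factors with loadings governed by a decay parameter, which (like the factor dynamics) may differ across latent regimes. The regime-switching estimation and filtering are not reproduced; all numerical values are illustrative only.
```python
import numpy as np

def nelson_siegel_yield(maturities, beta, lam):
    """beta = (level, slope, curvature); maturities in months; lam is the decay parameter."""
    tau = np.asarray(maturities, dtype=float)
    load_slope = (1 - np.exp(-lam * tau)) / (lam * tau)
    load_curv = load_slope - np.exp(-lam * tau)
    return beta[0] + beta[1] * load_slope + beta[2] * load_curv

# In a two-regime version, the decay parameter could switch with the latent state:
lam_by_regime = {"recession": 0.03, "expansion": 0.06}
print(nelson_siegel_yield([3, 12, 60, 120], beta=(4.0, -2.0, 1.5), lam=lam_by_regime["expansion"]))
```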
  8. By: Jean-Sauveur Ay; Raja Chakir; Julie Le Gallo
    Abstract: The econometric literature on modeling land use choices is highly heterogeneous with respect to the scale of the data and to the structure of the models in terms of the effects of space and time. This paper proposes a joint evaluation of these three elements by estimating a broad spectrum of individual and aggregate, spatial and aspatial, short- and long-run econometric models on the same detailed French dataset. Considering four land use classes (arable crops, pasture, forest, and urban), all the models are compared in terms of both in- and out-of-sample predictive accuracy. We argue that the aggregate scale allows the effect of space to be modeled more effectively through spatial econometric models. We show that modeling spatial autocorrelation yields very accurate predictions, which can even outperform those of individual models when the appropriate predictors are used. We also find strong interactions between the effects of scale, space and time that can be of major interest to applied researchers.
    Keywords: Land use models, spatial econometrics, predictive accuracy, aggregate and individual data
    JEL: Q15 Q24 R1 C21
    Date: 2014–07–04
    URL: http://d.repec.org/n?u=RePEc:apu:wpaper:2014/02&r=for
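    Sketch: a minimal Python illustration of the individual-scale, aspatial end of the model spectrum compared in the paper: a multinomial logit over the four land use classes evaluated on held-out observations. The aggregate, spatial-econometric and long-run specifications are not reproduced, and the covariates are placeholder assumptions.
```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def out_of_sample_accuracy(X, y):
    """X: plot-level covariates (e.g. soil, slope, returns); y: one of the four land use classes."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)  # multinomial logit
    return accuracy_score(y_te, model.predict(X_te))
```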
  9. By: Liping Gao; Hyeongwoo Kim
    Abstract: Chow (1985, 2010, 2011) reports indirect evidence for the permanent income hypothesis (PIH) using time series observations from China. We revisit this issue by examining direct evidence on the predictability of consumption growth in China during the post-economic-reform regime (1978-2009), using postwar US data for comparison. Our in-sample analysis provides strong evidence against the PIH for both countries. Out-of-sample forecast exercises show that consumption changes are highly predictable.
    Keywords: Permanent Income Hypothesis; Consumption; Out-of-Sample Predictability; Diebold-Mariano-West Statistic
    JEL: E21 E27
    Date: 2014–07
    URL: http://d.repec.org/n?u=RePEc:abn:wpaper:auwp2014-11&r=for
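    Sketch: a minimal Python illustration of the Diebold-Mariano style comparison behind such out-of-sample exercises: test whether the mean loss differential between a benchmark and a predictive model is zero. The Diebold-Mariano-West corrections for estimated parameters and multi-step horizons are not reproduced here.
```python
import numpy as np
from scipy import stats

def diebold_mariano(e_model, e_benchmark):
    """One-step forecast errors in; returns the DM statistic and a two-sided p-value."""
    d = np.asarray(e_benchmark) ** 2 - np.asarray(e_model) ** 2  # loss differential
    dm = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
    return dm, 2 * (1 - stats.norm.cdf(abs(dm)))
```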
  10. By: Yang, Bill Huajian
    Abstract: Common tree splitting strategies involve minimizing a criterion function for minimum impurity (i.e. difference) within child nodes. In this paper, we propose an approach based on maximizing a discriminatory criterion for maximum risk difference between child nodes. Maximum discriminatory separation based on risk is expected in credit risk scoring and rating. The search algorithm for an optimal split proposed in this paper is efficient and simple: a single scan through the dataset. Different trees can be chosen, with options that are more or less aggressive in variable splitting. Two special cases are shown to relate to the Kolmogorov-Smirnov (KS) and the intra-cluster correlation (ICC) statistics. As a validation of the proposed approaches, we estimate the exposure at default for a commercial portfolio. Results show that the risk discriminatory trees, constructed and selected using bagging and random forests, are robust. It is expected that the tools presented in this paper will add value to general portfolio risk modelling.
    Keywords: Exposure at default, probability of default, loss given default, discriminatory tree, CART tree, random forest, bagging, KS statistic, intra-cluster correlation, penalty function, risk concordance
    JEL: C1 C14 G32 G38
    Date: 2013–08–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:57245&r=for
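    Sketch: a minimal Python illustration of the single-scan split search described above: for one candidate variable, sort the observations and pick the threshold that maximizes a weighted difference in mean risk between the two child nodes. The paper's specific discriminatory criteria and penalty options are not reproduced; this uses a plain weighted mean difference.
```python
import numpy as np

def best_risk_split(x, risk):
    """Scan the thresholds of x and return the split maximizing a weighted risk difference."""
    order = np.argsort(x)
    x, risk = np.asarray(x, dtype=float)[order], np.asarray(risk, dtype=float)[order]
    n, cum = len(x), np.cumsum(risk)
    best_threshold, best_score = None, -np.inf
    for i in range(1, n):                      # left child = first i observations
        if x[i] == x[i - 1]:
            continue                           # no threshold separates equal values
        mean_left = cum[i - 1] / i
        mean_right = (cum[-1] - cum[i - 1]) / (n - i)
        score = (i * (n - i) / n ** 2) * abs(mean_left - mean_right)
        if score > best_score:
            best_threshold, best_score = (x[i - 1] + x[i]) / 2, score
    return best_threshold, best_score
```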

This nep-for issue is ©2014 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.