nep-for New Economics Papers
on Forecasting
Issue of 2007‒04‒21
twelve papers chosen by
Rob J Hyndman
Monash University

  1. An Embarrassment of Riches: Forecasting Using Large Panels By Eklund, Jana; Karlsson, Sune
  2. Forecasting crude oil and natural gas spot prices by classification methods By Viviana Fernández
  3. Forecast Content and Content Horizons for Some Important Macroeconomic Time Series By John W. Galbraith; Greg Tkacz
  4. Open Economy DSGE-VAR Forecasting and Policy Analysis: Head to Head with the RBNZ Published Forecasts By Kirdan Lees; Troy Matheson; Christie Smith
  5. Exact prediction of inflation in the USA By Kitov, Ivan
  6. Inflation as a function of labor force change rate: cointegration test for the USA By Kitov, Ivan; Kitov, Oleg; Dolinskaya, Svetlana
  7. Relationship between inflation, unemployment and labor force change rate in France: cointegration test By Kitov, Ivan; Kitov, Oleg; Dolinskaya, Svetlana
  8. Learning and the Great Inflation By Carboni, Giacomo; Ellison, Martin
  9. Forecasting with estimated dynamic stochastic general equilibrium models: The role of nonlinearities By Paul Pichler
  10. The relationship between ARIMA-GARCH and unobserved component models with GARCH disturbances By Santiago Pellegrini; Esther Ruiz; Antoni Espasa
  11. Prognosen zur Ost-West-Wanderung nach der deutschen Wiedervereinigung By Sascha Wolff
  12. Will the Skill-Premium in the Netherlands Rise in the Next Decades? By Arnaud Dupuy

  1. By: Eklund, Jana (Bank of England); Karlsson, Sune (Department of Business, Economics, Statistics and Informatics)
    Abstract: The increasing availability of data and potential predictor variables poses new challenges to forecasters. The task of formulating a single forecasting model that can extract all the relevant information is becoming increasingly difficult in the face of this abundance of data. The two leading approaches to addressing this "embarrassment of riches" are philosophically distinct. One approach builds forecast models based on summaries of the predictor variables, such as principal components, and the second approach is analogous to forecast combination, where the forecasts from a multitude of possible models are averaged. Using several data sets we compare the performance of the two approaches in the guise of the diffusion index or factor models popularized by Stock and Watson and forecast combination as an application of Bayesian model averaging. We find that none of the methods is uniformly superior and that no method performs better than, or is outperformed by, a simple AR(p) process.
    Keywords: Bayesian model averaging; Diffusion indexes; GDP growth rate; Inflation rate
    JEL: C11 C51 C52 C53
    Date: 2007–03–31
    URL: http://d.repec.org/n?u=RePEc:hhs:oruesi:2007_001&r=for
  2. By: Viviana Fernández
    Abstract: In this article, we forecast crude oil and natural gas spot prices at a daily frequency using two classification techniques: artificial neural networks (ANN) and support vector machines (SVM). As a benchmark, we utilize an autoregressive integrated moving average (ARIMA) specification. We evaluate out-of-sample forecasts with encompassing tests and the mean-squared prediction error (MSPE). We find that at short time horizons (e.g., 2-4 days), ARIMA tends to outperform both ANN and SVM. At longer time horizons (e.g., 10-20 days), however, ARIMA is in general encompassed by these two methods, and linear combinations of ANN and SVM forecasts are more accurate than the corresponding individual forecasts. MSPE calculations lead to similar conclusions: the two classification methods outperform ARIMA at longer time horizons.
    Date: 2006
    URL: http://d.repec.org/n?u=RePEc:edj:ceauch:229&r=for
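The paper's evaluation criterion, the mean-squared prediction error, and the gain from combining forecasts are easy to illustrate. A minimal sketch (the series below are invented round numbers, not the paper's daily price data):

```python
# Mean-squared prediction error (MSPE) for competing forecasts, plus a
# simple equal-weight combination -- the kind of comparison the paper
# runs for ARIMA, ANN and SVM forecasts of spot prices.

def mspe(actual, forecast):
    """Average squared forecast error over the evaluation sample."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

# Invented out-of-sample values (e.g., crude oil spot prices in USD).
actual = [61.2, 62.0, 60.8, 63.1, 64.0]
arima  = [60.5, 61.0, 61.5, 62.0, 62.5]
ann    = [61.0, 62.5, 60.0, 63.5, 64.5]
svm    = [61.5, 61.8, 61.2, 62.8, 63.6]

# Equal-weight combination of the two classification-method forecasts.
combo = [(a + s) / 2 for a, s in zip(ann, svm)]

for name, f in [("ARIMA", arima), ("ANN", ann), ("SVM", svm), ("ANN+SVM", combo)]:
    print(f"{name:8s} MSPE = {mspe(actual, f):.3f}")
```

With these invented numbers the equal-weight combination attains the lowest MSPE, mirroring the paper's finding that combined ANN and SVM forecasts beat the individual ones at longer horizons.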
  3. By: John W. Galbraith; Greg Tkacz
    Abstract: For quantities that are approximately stationary, the information content of statistical forecasts tends to decline as the forecast horizon increases, and there exists a maximum horizon beyond which forecasts cannot provide discernibly more information about the variable than is present in the unconditional mean (the content horizon). The pattern of decay of forecast content (or skill) with increasing horizon is well known for many types of meteorological forecasts; by contrast, little generally accepted information about these patterns or content horizons is available for economic variables. In this paper we attempt to develop more information of this type by estimating content horizons for a variety of macroeconomic quantities; more generally, we characterize the pattern of decay of forecast content as we project farther into the future. We find a wide variety of results across the macroeconomic quantities: models for some quantities provide useful content several years into the future, while for others content becomes negligible beyond one or two months or quarters.
    JEL: C53 E17
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:mcl:mclwop:2007-01&r=for
  4. By: Kirdan Lees; Troy Matheson; Christie Smith
    Abstract: We evaluate the performance of an open economy DSGE-VAR model for New Zealand along both forecasting and policy dimensions. We show that forecasts from a DSGE-VAR and a "vanilla" DSGE model are competitive with, and in some dimensions superior to, the Reserve Bank of New Zealand's official forecasts. We also use the estimated DSGE-VAR structure to identify optimal policy rules that are consistent with the Reserve Bank's Policy Targets Agreement. Optimal policy rules under parameter uncertainty prove to be relatively similar to those under certainty. The optimal policies react aggressively to inflation and contain a large degree of interest rate smoothing, but place a low weight on responding to output or the change in the nominal exchange rate.
    JEL: C51 E52 F41
    Date: 2007–01
    URL: http://d.repec.org/n?u=RePEc:acb:camaaa:2007-05&r=for
  5. By: Kitov, Ivan
    Abstract: A linear and lagged relationship between inflation and the labor force growth rate has recently been found for the USA. It accurately describes the period after the late 1950s with a linear coefficient of 4.0, an intercept of -0.03, and a lag of 2 years. The previously reported agreement between observed and predicted inflation is substantially improved by some simple measures removing the most obvious errors in the labor force time series. The labor force readings originally obtained from the Bureau of Labor Statistics (BLS) website are corrected for step-like adjustments, and a half-year time shift between the inflation and the annual labor force readings is compensated for. The GDP deflator represents inflation. Linear regression analysis demonstrates that the annual labor force growth rate used as a predictor explains almost 82% (R2=0.82) of the inflation variation between 1965 and 2002. A moving average applied to the annual time series results in a substantial increase in R2: it grows from 0.87 for two-year windows to 0.96 for four-year windows. Regression of cumulative curves is characterized by R2>0.999. This allows effective replacement of the GDP deflation index by a “labor force growth” index. The linear and lagged relationship provides a precise forecast at the two-year horizon, with a root mean square forecasting error (RMSFE) as low as 0.008 (0.8%) for the entire period between 1965 and 2002. For the last 20 years, the RMSFE is only 0.4%. Thus, the forecast methodology effectively outperforms any other forecasting technique reported in the economic and financial literature. Moreover, further significant improvements in forecasting accuracy are accessible through improvements in the labor force measurements in line with the US Census Bureau population estimates, which are currently neglected by the BLS.
    Keywords: inflation; labor force; forecast; the USA
    JEL: E61 E31 J21
    Date: 2006–07
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:2735&r=for
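The relationship in the abstract is a one-predictor formula, so it can be stated directly as code. A minimal sketch, assuming the reported coefficients (linear coefficient 4.0, intercept -0.03, two-year lag); the labor force figures are illustrative placeholders, not actual BLS data:

```python
# pi(t) = A1 * dLF(t - t1)/LF(t - t1) + A2, the lagged linear predictor
# described in the abstract, with the reported US values A1 = 4.0,
# A2 = -0.03 and t1 = 2 years.

def predict_inflation(labor_force, year, a1=4.0, a2=-0.03, lag=2):
    """Predicted inflation at `year` from the labor force growth rate
    observed `lag` years earlier.  `labor_force` maps years to levels."""
    lf_now = labor_force[year - lag]
    lf_prev = labor_force[year - lag - 1]
    growth = (lf_now - lf_prev) / lf_prev   # annual relative change
    return a1 * growth + a2

# Illustrative labor force levels in thousands (placeholder numbers).
lf = {1998: 137_673, 1999: 139_368, 2000: 142_583}
print(f"predicted 2002 inflation: {predict_inflation(lf, 2002):.1%}")
```

Because the predictor is dated two years earlier, the formula yields a genuine two-year-ahead forecast once the labor force series is in hand.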
  6. By: Kitov, Ivan; Kitov, Oleg; Dolinskaya, Svetlana
    Abstract: Previously, a linear and lagged relationship between inflation and the labor force change rate, π(t)=A1dLF(t-t1)/LF(t-t1)+A2 (where A1 and A2 are empirical country-specific coefficients), was found for developed economies. The relationship obtained for the USA is characterized by A1=4.0, A2=-0.03075, and t1=2 years. It provides a root mean square forecasting error (RMSFE) of 0.8% at a two-year horizon for the period between 1965 and 2002 (the best among inflation forecasting models) and is perfectly parsimonious, with only one predictor. The relationship is tested for cointegration. Both variables are integrated of order one, given the presence of a unit root in the series and its absence in their first differences. Two methods of cointegration testing are applied: the Engle-Granger method, based on the unit root test of the residuals and including a variety of specification tests, and the Johansen cointegration rank test, based on the VAR representation. Both approaches demonstrate that the variables are cointegrated and that the long-run equilibrium relation revealed in the previous study holds. According to the Granger causality test, the labor force change proves to be a weakly exogenous variable - a natural result considering the time lead and the existence of a cointegrating relation. VAR and VECM representations do not provide any significant improvement in RMSFE. There are numerous applications of the equation, from the purely theoretical - a robust fundamental relation between macroeconomic and population variables - to the practical - accurate out-of-sample inflation forecasting at a two-year horizon and long-term prediction based on labor force projections. The predictive power of the relationship is inversely proportional to the uncertainty of the labor force estimates. Therefore, future inflation research programs should start from a significant improvement in the accuracy of labor force estimations.
    Keywords: cointegration; inflation; labor force; forecasting; USA; VAR; VECM
    JEL: C32 E31 E32
    Date: 2007–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:2734&r=for
  7. By: Kitov, Ivan; Kitov, Oleg; Dolinskaya, Svetlana
    Abstract: A linear and lagged relationship between inflation, unemployment and the labor force change rate, π(t)=A0UE(t-t0)+A1dLF(t-t1)/LF(t-t1)+A2 (where A0, A1, and A2 are empirical country-specific coefficients), was found for developed economies. The relationship obtained for France is characterized by A0=-1, A1=4, A2=0.095, t0=4 years, and t1=4 years. For the GDP deflator, it provides a root mean square forecasting error (RMSFE) of 1.0% at a four-year horizon for the period between 1971 and 2004. The relationship is tested for cointegration. All three variables involved are shown to be integrated of order one. Two methods of cointegration testing are used. The first is the Engle-Granger approach, based on the unit root test of the residuals of a linear regression, which also includes a number of specification tests. The second is the Johansen cointegration rank test, based on a VAR representation whose adequacy is confirmed by a set of appropriate tests. Both approaches demonstrate that the variables are cointegrated and that the long-run equilibrium relation revealed in the previous study holds, together with the statistical estimates of goodness-of-fit and RMSFE. Relationships between inflation and labor force and between unemployment and labor force are tested separately in time intervals where the Banque de France monetary policy introduced in 1995 does not disturb the long-term links. All the individual relationships are cointegrated in the corresponding intervals. The VAR and vector error correction (VEC) models are estimated and provide just a marginal improvement in RMSFE at the four-year horizon, both for the GDP deflator (down to 0.9%) and the CPI (~1.1%), on the results obtained in the regression study. The VECM approach also allows re-estimation of the coefficients in the individual and generalized relationships between the variables, for cointegration ranks 1 and 2.
Comparison of the standard cointegration approach with the integral approach to estimating the coefficients in the individual and generalized relationships demonstrates the superiority of the latter. The cumulative inflation curve, or inflation index, which is the actually measured evolution of the price level, is much better predicted in the framework of the integral approach, which is a powerful tool for revealing true relationships between non-stationary variables and can potentially be used to reject spurious regressions. The cumulative curves allow avoiding obvious drawbacks of the VECM representation and cointegration tests - the loss of signal-to-noise ratio after differencing and the severe dependence on the statistical properties of the error terms. The confirmed validity of the linear lagged relationship between inflation, unemployment and labor force change indicates that since 1995 the Banque de France has been wrongly applying a policy fixing monetary growth to a reference value of around 4.5%. As a result of this policy, during the last ten years unemployment in France was twice as high as the level dictated by its long-term equilibrium link to labor force change. This elevated unemployment compensates for the forced price stability.
    Keywords: cointegration; inflation; unemployment; labor force; forecasting; France; VAR; VECM
    JEL: C32 E37 E31 J21 E32
    Date: 2007–02
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:2736&r=for
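The generalized relationship stated at the start of the abstract can be sketched the same way. A minimal sketch, assuming the reported French coefficients (A0 = -1, A1 = 4, A2 = 0.095, four-year lags); the unemployment and labor force inputs are illustrative placeholders, not actual French data:

```python
# pi(t) = A0*UE(t - t0) + A1*dLF(t - t1)/LF(t - t1) + A2, with the
# reported values for France: A0 = -1, A1 = 4, A2 = 0.095, t0 = t1 = 4.

def predict_inflation_fr(unemployment, labor_force, year,
                         a0=-1.0, a1=4.0, a2=0.095, lag=4):
    """Predicted inflation at `year` from unemployment (a fraction) and
    labor force levels observed `lag` years earlier."""
    ue = unemployment[year - lag]
    lf_now = labor_force[year - lag]
    lf_prev = labor_force[year - lag - 1]
    growth = (lf_now - lf_prev) / lf_prev   # annual labor force growth
    return a0 * ue + a1 * growth + a2

# Placeholder inputs: unemployment rate and labor force (thousands).
ue = {2000: 0.10}
lf = {1999: 26_100, 2000: 26_350}
print(f"predicted 2004 inflation: {predict_inflation_fr(ue, lf, 2004):.1%}")
```

The negative unemployment coefficient means higher past unemployment lowers predicted inflation, which is the trade-off the abstract's policy discussion turns on.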
  8. By: Carboni, Giacomo; Ellison, Martin
    Abstract: We respond to the challenge of explaining the Great Inflation by building a coherent framework in which both learning and uncertainty play a central role. At the heart of our story is a Federal Reserve that learns and then disregards the Phillips curve as in Sargent's Conquest of American Inflation, but at all times takes into account that its view of the world is subject to considerable uncertainties. Allowing Federal Reserve policy to react to these perceived uncertainties improves our ability to explain the Great Inflation with a learning model. Bayesian MCMC estimation results are encouraging and favour a model where policy reacts to uncertainty over a model where uncertainty is ignored. The posterior likelihood is higher and the internal Federal Reserve forecasts implied by the model are closer to those reported in the Greenbook.
    Keywords: Great Inflation; learning; monetary policy; uncertainty
    JEL: E52 E58 E65
    Date: 2007–04
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:6250&r=for
  9. By: Paul Pichler
    Abstract: We show that redistributive tax and transfer systems have a distortionary effect and an insurance effect if agents face idiosyncratic uninsurable earnings risk. These two effects imply that redistributive taxes decrease both mean consumption and the standard deviation of consumption. Using household data, we construct an 'income compression' measure of the redistributiveness of the tax system and empirically test for the presence of these two effects by exploiting differences in US state taxes. We find that tax redistributiveness explains much of the variation in the mean and standard deviation of the within-state consumption distributions over the US. This provides evidence for the presence of both distortionary and insurance effects of redistributive taxes and transfers.
    JEL: C68 E47 E52
    Date: 2007–03
    URL: http://d.repec.org/n?u=RePEc:vie:viennp:0702&r=for
  10. By: Santiago Pellegrini; Esther Ruiz; Antoni Espasa
    Abstract: The objective of this paper is to analyze the consequences of fitting ARIMA-GARCH models to series generated by conditionally heteroscedastic unobserved component models. Focusing on the local level model, we show that the heteroscedasticity is weaker in the ARIMA than in the local level disturbances. In certain cases, the IMA(1,1) model could even be wrongly seen as homoscedastic. Next, with regard to forecasting performance, we show that the prediction intervals based on the ARIMA model can be inappropriate as they incorporate the unit root while the intervals of the local level model can converge to the homoscedastic intervals when the heteroscedasticity appears only in the transitory noise. All the analytical results are illustrated with simulated and real time series.
    Date: 2007–04
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws072706&r=for
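The homoscedastic version of the mapping the paper studies is a standard result: the local level model y_t = mu_t + eps_t, mu_t = mu_{t-1} + eta_t has an IMA(1,1) reduced form, so its first differences show a single negative autocorrelation, rho_1 = -var(eps)/(var(eta) + 2*var(eps)), and none beyond lag 1. A quick simulation sketch of that signature (illustrative code, not the authors'):

```python
import random

def simulate_local_level(n, sigma_eta=0.5, sigma_eps=1.0, seed=42):
    """Simulate y_t = mu_t + eps_t with a random-walk level mu_t."""
    rng = random.Random(seed)
    mu, y = 0.0, []
    for _ in range(n):
        mu += rng.gauss(0.0, sigma_eta)           # permanent (level) shock
        y.append(mu + rng.gauss(0.0, sigma_eps))  # transitory noise
    return y

def acf(x, lag):
    """Sample autocorrelation of x at the given lag."""
    m = sum(x) / len(x)
    c0 = sum((v - m) ** 2 for v in x)
    ck = sum((x[i] - m) * (x[i + lag] - m) for i in range(len(x) - lag))
    return ck / c0

y = simulate_local_level(50_000)
dy = [b - a for a, b in zip(y, y[1:])]  # first differences: MA(1) pattern
print(acf(dy, 1))   # close to the theoretical -1.0 / (0.25 + 2.0) ~ -0.444
print(acf(dy, 2))   # close to 0
```

With sigma_eta = 0.5 and sigma_eps = 1, the implied MA(1) autocorrelation is -1/(0.25 + 2) ≈ -0.444; shrinking sigma_eta relative to sigma_eps pushes it toward -0.5, the boundary of what a fitted IMA(1,1) can represent.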
  11. By: Sascha Wolff
    Abstract: The fall of the Berlin Wall in November 1989 and the subsequent reunification process between the formerly centrally planned East and the market-oriented West of Germany rapidly raised fears of mass migration from the eastern to the western part of the country. The fear of an inundation of the West German labour market with cheap labour from the east circulated not only among party and labour union officials. In this context, a number of studies were developed that attempt to predict the east-west German migration volume. The objective of this paper is therefore to give an overview of these studies and the current state of economic research on migration forecasts in the east-west German context, and to discuss them critically, in particular with regard to their ex post forecast quality. Broadly, two types of analyses can be distinguished. On the one hand, there are migration forecasts based on autonomous simulation or explanatory models; however, only a small number of these studies exist, most of which date from the early 1990s and are thus not very current. On the other hand, there are migration predictions, or rather migration assumptions, which have been developed within the scope of population forecasts and are in many cases based on trend extrapolation. Such a critical review of existing east-west German migration forecasts seems both necessary and worthwhile against the backdrop of solidifying economic disparities between east and west Germany, and it provides a foundation for the development of a new forecast based on a more up-to-date explanatory model.
    Keywords: migration forecasts, east-west German migration, Wanderungsprognosen, ost-westdeutsche Migration
    JEL: J11 J61 J62 R23
    Date: 2007–04–17
    URL: http://d.repec.org/n?u=RePEc:got:vwldps:132&r=for
  12. By: Arnaud Dupuy (ROA, Maastricht University and IZA)
    Abstract: While the skill-premium has been rising sharply in the US and the UK for 20 years, the Dutch skill-premium decreased for much of that period and only started to rise in the early 1990s. In this paper, we investigate whether the Dutch skill-premium will rise in the next decades. To answer this question, we forecast the skill-premium using the Katz and Murphy (1992) and the Krusell et al. (2000) models. The Katz and Murphy (KM) model explains demand shifts by skill-biased technological change in unobservable variables captured by a time trend. In contrast, the Krusell et al. (KORV) model explains demand shifts by (observable) changes in the capital stock under a capital-skill complementarity technology. The results show that while the KM model predicts that the skill-premium will have increased by 30% in 2020, the KORV model, based on realistic predictions of the stock of capital, predicts that the skill-premium will remain within -5% to +5% of its 1996 level.
    Keywords: skill-premium, skill-biased technical change, capital-skill complementarity
    JEL: D33 J11 J38
    Date: 2007–03
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp2708&r=for

This nep-for issue is ©2007 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.