nep-for New Economics Papers
on Forecasting
Issue of 2011‒03‒12
eight papers chosen by
Rob J Hyndman
Monash University

  1. Forecasting Spikes in Electricity Prices By Tim Christensen; Stan Hurn; Ken Lindsay
  2. On Not Evaluating Economic Models by Forecast Outcomes By Jennifer L. Castle; David F. Hendry
  3. Forecasting population changes and service requirements in the regions: a study of two regional councils in Queensland, Australia By Wasantha Athukorala; Prasad Neelawela; Clevo Wilson; Evonne Miller; Tony Sahama; Peter Grace; Mike Hefferan; Premawansa Dissanayake; Oshan Manawadu
  4. The Financial Crisis from a Forecaster’s Perspective By Katja Drechsel; Rolf Scheufele
  5. Measuring Uncertainty and Disagreement in the European Survey of Professional Forecasters By Cristina Conflitti
  6. Risk Management of Risk under the Basel Accord: Forecasting Value-at-Risk of VIX Futures By Michael McAleer; Juan-Ángel Jiménez-Martín; Chia-Lin Chang; Teodosio Pérez-Amaral
  7. Inflation and unemployment in Switzerland: from 1970 to 2050 By Oleg Kitov; Ivan Kitov
  8. Measuring disagreement in UK consumer and central bank inflation forecasts By Richhild Moessner; Feng Zhu; Colin Ellis

  1. By: Tim Christensen (Yale University); Stan Hurn (QUT); Ken Lindsay (Glasgow)
    Abstract: In many electricity markets, retailers purchase electricity at an unregulated spot price and sell to consumers at a heavily regulated price. Consequently the occurrence of extreme movements in the spot price represents a major source of risk to retailers and the accurate forecasting of these extreme events or price spikes is an important aspect of effective risk management. Traditional approaches to modeling electricity prices are aimed primarily at predicting the trajectory of spot prices. By contrast, this paper focuses exclusively on the prediction of spikes in electricity prices. The time series of price spikes is treated as a realization of a discrete-time point process and a nonlinear variant of the autoregressive conditional hazard (ACH) model is used to model this process. The model is estimated using half-hourly data from the Australian electricity market for the sample period 1 March 2001 to 30 June 2007. The estimated model is then used to provide one-step-ahead forecasts of the probability of an extreme event for every half hour for the forecast period, 1 July 2007 to 30 September 2007, chosen to correspond to the duration of a typical forward contract. The forecasting performance of the model is then evaluated against a benchmark that is consistent with the assumptions of commonly-used electricity pricing models.
    Keywords: Electricity Prices, Price Spikes, Autoregressive Conditional Duration, Autoregressive Conditional Hazard
    JEL: C14 C52
    Date: 2011–01–25
    URL: http://d.repec.org/n?u=RePEc:qut:auncer:2011_1&r=for
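The point-process approach described in the abstract above can be illustrated with a minimal sketch. This is not the authors' nonlinear ACH specification: it uses a plain linear ACD-type recursion with invented parameter values, purely to show how observed spike durations map into a one-step-ahead spike probability.

```python
import numpy as np

def ach_hazard(durations, omega=1.0, alpha=0.2, beta=0.7):
    """One-step-ahead spike hazards from a linear ACD-type recursion.

    durations : gaps (in half-hour periods, each >= 1) between successive
    price spikes. The expected duration follows
        psi[i] = omega + alpha * durations[i-1] + beta * psi[i-1],
    and the hazard (probability of a spike in the next period) is 1 / psi.
    Parameter values are invented for illustration.
    """
    psi = np.empty(len(durations))
    psi[0] = durations.mean()              # initialise at the sample mean
    for i in range(1, len(durations)):
        psi[i] = omega + alpha * durations[i - 1] + beta * psi[i - 1]
    return 1.0 / psi

rng = np.random.default_rng(0)
durations = rng.geometric(0.05, size=200).astype(float)  # synthetic spike gaps
hazard = ach_hazard(durations)
```

With omega = 1 and durations of at least one period, psi never falls below one, so the hazard stays in (0, 1]; the paper's nonlinear variant replaces this linear recursion.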
  2. By: Jennifer L. Castle; David F. Hendry
    Abstract: Even in scientific disciplines, forecast failures occur. Four possible states of nature (a model is good or bad, and it forecasts well or badly) are examined using a forecast-error taxonomy, which traces the many possible sources of forecast errors. This analysis shows that a valid model can forecast badly, and a poor model can forecast successfully. Delineating the main causes of forecast failure reveals transformations that can correct failure without altering the ‘quality’ of the model in use. We conclude that judging a model by the accuracy of its forecasts is more like fools’ gold than a gold standard.
    Keywords: Model evaluation, forecast failure, model selection
    JEL: C52
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:oxf:wpaper:538&r=for
  3. By: Wasantha Athukorala (QUT); Prasad Neelawela (QUT); Clevo Wilson (QUT); Evonne Miller (QUT); Tony Sahama (QUT); Peter Grace (QUT); Mike Hefferan (University of Sunshine Coast); Premawansa Dissanayake (QUT); Oshan Manawadu (QUT)
    Abstract: Forecasting population growth to meet the service needs of a growing population is a vexed issue. The task of providing essential services becomes even more difficult when future population growth forecasts are unavailable or unreliable. The aim of this paper is to identify the main methods used in population forecasting and thereby select an approach to demonstrate that such forecasting can be undertaken with certainty and transparency, barring exogenous events. We then use the population forecasts to plan for service needs that arise from future changes in population. Interestingly, although techniques are available to forecast such future population changes, and much of this forecasting occurs, the work remains somewhat clouded in mystery. We strive to rectify this situation by applying an approach that is verifiable, transparent and easy to comprehend. For this purpose we select two regional councils in Queensland, Australia. The experience derived from forecasting shows that forecasts for the service needs of larger populations are more easily and accurately derived than for smaller populations. Hence there is some evidence, at least from a service provision point of view, to justify the benefits of the recent council/municipality amalgamations in Australia and elsewhere. The methodology used in this paper for population forecasting, and for the provision of service needs based on such forecasts, will be of particular interest to policy decision-makers and planners.
    Keywords: Regional Population forecasting, service provision, Box-Jenkins model
    JEL: J11 O21 R10 J38
    Date: 2010–12–09
    URL: http://d.repec.org/n?u=RePEc:qut:dpaper:263&r=for
  4. By: Katja Drechsel; Rolf Scheufele
    Abstract: This paper analyses the 2008/2009 recession in Germany, which differs markedly from previous recessions, in particular in its cause and magnitude. We show the extent to which forecasters, and forecasts based on leading indicators, failed to detect the timing and the magnitude of the recession. Large forecast errors resulted for both expert forecasts and indicator-based forecasts during this recession, implying that it was very difficult to forecast. However, some leading indicators (survey data, risk spreads, stock prices) did signal an economic downturn and hence beat univariate time series models. Although combining individual forecasts improves on the benchmark model, the combined forecasts are worse than several individual models. A comparison of expert forecasts with the best indicator-based forecasts shows only minor deviations. Overall, the scope for improving expert forecasts during the crisis, relative to indicator forecasts, is small.
    Keywords: leading indicators, recession, consensus forecast, non-linearities
    JEL: E37 C53
    Date: 2011–03
    URL: http://d.repec.org/n?u=RePEc:iwh:dispap:5-11&r=for
  5. By: Cristina Conflitti
    Abstract: Survey data on expectations and economic forecasts play an important role in providing insight into how economic agents make their own forecasts, what factors affect the accuracy of those forecasts, and why agents disagree in making them. Uncertainty also matters for understanding many areas of economic behaviour. Several approaches to measuring uncertainty and disagreement have been proposed, but the lack of direct observations on these two concepts has led to ambiguous definitions. Using data from the European Survey of Professional Forecasters (SPF), which provides forecast point estimates and probability density forecasts, we consider several measures of uncertainty and disagreement at both the aggregate and the individual level. We overcome the problem of distributional assumptions for the probability density forecasts by using an approach that imposes no functional form on the individual densities, instead approximating each histogram by a piecewise linear function. We extend earlier work to the European context for three macroeconomic variables: GDP, inflation and unemployment. Moreover, we analyse how these measures perform over different forecasting horizons. Looking at point estimates alone, disregarding the individual probability information, misestimates both disagreement and uncertainty. Comparing the three macroeconomic variables of interest, uncertainty and disagreement are higher for GDP and inflation than for unemployment, at both short and long horizons. Beyond this, it is difficult to find common behaviour of uncertainty and disagreement across the variables: the results do not support the view that, if uncertainty or disagreement is relatively high for one variable, it is also high for the others.
    Date: 2010–11
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/64795&r=for
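The decomposition discussed in the abstract above can be sketched in a few lines. For simplicity the sketch places each forecaster's probability mass at bin midpoints rather than fitting the paper's piecewise linear approximation, and the bins and histograms are invented, not SPF data. Aggregate uncertainty is the average individual variance plus disagreement (the cross-forecaster variance of mean forecasts).

```python
import numpy as np

# Invented SPF-style inflation bins (percentage points) and three
# forecasters' probability histograms -- not real SPF data.
edges = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
mids = (edges[:-1] + edges[1:]) / 2                # bin midpoints
probs = np.array([
    [0.00, 0.05, 0.20, 0.45, 0.25, 0.05],
    [0.05, 0.15, 0.40, 0.30, 0.10, 0.00],
    [0.00, 0.10, 0.30, 0.40, 0.15, 0.05],
])

means = probs @ mids                               # individual mean forecasts
variances = probs @ mids**2 - means**2             # individual uncertainty
disagreement = means.var()                         # variance of mean forecasts
aggregate_uncertainty = variances.mean() + disagreement
```

The last line shows why point estimates alone understate uncertainty: dropping the individual densities discards the `variances.mean()` term and leaves only disagreement.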
  6. By: Michael McAleer (Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam, and Tinbergen Institute); Juan-Ángel Jiménez-Martín (Department of Quantitative Economics, Faculty of Economics and Business, Universidad Complutense de Madrid); Chia-Lin Chang (Department of Applied Economics, National Chung Hsing University, Taiwan); Teodosio Pérez-Amaral (Department of Quantitative Economics, Faculty of Economics and Business, Universidad Complutense de Madrid)
    Abstract: The Basel II Accord requires that banks and other Authorized Deposit-taking Institutions (ADIs) communicate their daily risk forecasts to the appropriate monetary authorities at the beginning of each trading day, using one or more risk models to measure Value-at-Risk (VaR). The risk estimates of these models are used to determine capital requirements and associated capital costs of ADIs, depending in part on the number of previous violations, whereby realised losses exceed the estimated VaR. McAleer, Jimenez-Martin and Perez-Amaral (2009) proposed a new approach to model selection for predicting VaR, consisting of combining alternative risk models, and comparing conservative and aggressive strategies for choosing between VaR models. This paper addresses the question of risk management of risk, namely VaR of VIX futures prices. We examine how different risk management strategies performed during the 2008-09 global financial crisis (GFC). We find that an aggressive strategy of choosing the Supremum of the single model forecasts is preferred to the other alternatives, and is robust during the GFC. However, this strategy implies relatively high numbers of violations and accumulated losses, though these are admissible under the Basel II Accord.
    Keywords: Median strategy, Value-at-Risk (VaR), daily capital charges, violation penalties, optimizing strategy, aggressive risk management, conservative risk management, Basel II Accord, VIX futures, global financial crisis (GFC).
    JEL: G32 G11 C53 C22
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:ucm:doicae:1102&r=for
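The contrast between aggressive and conservative strategies in the abstract above can be sketched as follows. VaR forecasts are written as returns (losses negative); the Supremum strategy picks the least extreme model forecast each day, which lowers daily capital charges but risks more violations. All figures are invented.

```python
import numpy as np

# Invented daily 99% VaR forecasts from three models, and realised returns,
# both expressed as returns (losses are negative) -- not real VIX futures data.
var_forecasts = np.array([
    [-2.1, -1.8, -2.5],
    [-2.3, -1.9, -2.4],
    [-1.7, -1.6, -2.0],
    [-2.0, -2.2, -2.6],
])
returns = np.array([-1.9, -2.0, -1.5, -2.4])

sup_var = var_forecasts.max(axis=1)   # aggressive: least extreme forecast each day
inf_var = var_forecasts.min(axis=1)   # conservative: most extreme forecast each day

# A violation occurs when the realised loss exceeds the forecast VaR.
sup_violations = int((returns < sup_var).sum())
inf_violations = int((returns < inf_var).sum())
```

In this toy sample the aggressive strategy incurs violations where the conservative one incurs none; under Basel II, accumulating violations raises the penalty multiplier on capital charges, which is the trade-off the paper evaluates.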
  7. By: Oleg Kitov; Ivan Kitov
    Abstract: An empirical model is presented linking the inflation and unemployment rates to the change in the level of the labour force in Switzerland. The variables involved are found to be cointegrated, and we estimate lagged linear deterministic relationships using the method of cumulative curves, a simplified version of the 1D Boundary Elements Method. The model yields very accurate predictions of the inflation rate at a three-year horizon. The results are coherent with the models estimated previously for the US, Japan, France and other developed countries, and provide additional validation of our quantitative framework based solely on the labour force. Finally, given the importance of inflation forecasts for Swiss monetary policy, we present a prediction extended to 2050 based on official projections of the labour force level.
    Date: 2011–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1102.5405&r=for
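The labour-force-only structure described above can be sketched with a toy linear relation. The coefficients, lag and labour force path below are invented, not the paper's Swiss estimates; they only illustrate the functional form pi(t) = a * dLF(t - lag)/LF(t - lag) + b.

```python
# Invented coefficients, lag and labour force path -- not the paper's Swiss
# estimates; this only illustrates the lagged linear functional form.
def inflation_forecast(labour_force, lag=2, a=4.0, b=-0.03):
    """pi(t) = a * dLF(t - lag) / LF(t - lag) + b, with dLF a one-period change."""
    out = []
    for t in range(lag + 1, len(labour_force)):
        prev, curr = labour_force[t - lag - 1], labour_force[t - lag]
        out.append(a * (curr - prev) / prev + b)
    return out

lf = [4.50, 4.53, 4.55, 4.58, 4.60, 4.61]   # labour force, millions
forecasts = inflation_forecast(lf)
```

Because inflation depends only on lagged labour force changes, forecasts out to 2050 require nothing beyond official labour force projections, which is the basis of the paper's long-horizon prediction.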
  8. By: Richhild Moessner; Feng Zhu; Colin Ellis
    Abstract: We provide a new perspective on disagreement in inflation expectations by examining the full probability distributions of UK consumer inflation forecasts, based on an adaptive bootstrap multimodality test. Furthermore, we compare the inflation forecasts of the Bank of England's Monetary Policy Committee (MPC) with those of UK consumers, using data from the February GfK NOP consumer surveys of 2001-2007. Our analysis indicates substantial disagreement among UK consumers, and between the MPC and consumers, concerning one-year-ahead inflation forecasts. Such disagreement persisted throughout the sample, with no signs of convergence, consistent with consumers' inflation expectations not being "well-anchored" in the sense of matching the central bank's expectations. UK consumers had far more diverse views about future inflation than the MPC. It is possible that the MPC enjoyed certain information advantages which allowed it to have a narrower range of inflation forecasts.
    Keywords: Adaptive kernel method, adaptive multimodality test, consumer survey, inflation forecasts, nonparametric density estimation
    Date: 2011–02
    URL: http://d.repec.org/n?u=RePEc:bis:biswps:339&r=for

This nep-for issue is ©2011 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.