nep-for New Economics Papers
on Forecasting
Issue of 2010‒04‒17
fifteen papers chosen by
Rob J Hyndman
Monash University

  1. Evaluating Macroeconomic Forecasts: A Review of Some Recent Developments By Franses, Ph.H.B.F.; McAleer, M.J.; Legerstee, R.
  2. Forecasting Realized Volatility with Linear and Nonlinear Models By Michael McAleer; Marcelo Cunha Medeiros
  3. "Are Forecast Updates Progressive?" By Chia-Lin Chang; Philip Hans Franses; Michael McAleer
  4. Model selection, estimation and forecasting in VAR models with short-run and long-run restrictions By Athanasopoulos, George; Guillén, Osmani Teixeira de Carvalho; Issler, João Victor; Vahid, Farshid
  5. The Response of Retail Interest Rates to Factor Forecasts of Money Market Rates in Major European Economies By Anindya Banerjee; Victor Bystrov; Paul Mizen
  6. Composite Leading Indicator for the Austrian Economy. Methodology and "Real-time" Performance By Jürgen Bierbaumer-Polly
  7. Expectations and economic fluctuations: an analysis using survey data By Sylvain Leduc; Keith Sill
  8. An Introduction to the New Zealand Treasury Model By Michael Ryan; Kam Leong Szeto
  9. Perspectives on Evaluating Macroeconomic Forecasts By Herman Stekler
  10. Time-Varying Expected Returns: Evidence from the U.S. and the U.K. By Ricardo M. Sousa
  11. Predicting Agri-Commodity Prices: an Asset Pricing Approach By Yu-chin Chen; Kenneth Rogoff; Barbara Rossi
  12. Predicting extreme VaR: Nonparametric quantile regression with refinements from extreme value theory By Julia Schaumburg
  13. The Impact of Insider Trading on Forecasting in a Bookmakers' Horse Betting Market By A. SCHNYTZER; M. LAMERS; V. MAKROPOULOU
  14. Correcting for Survey Effects in Pre-election Polls By Heij, C.; Franses, Ph.H.B.F.
  15. Financial Market Conditions, Real Time, Nonlinearity and European Central Bank Monetary Policy: In-Sample and Out-of-Sample Assessment By Costas Milas; Ruthira Naraidoo

  1. By: Franses, Ph.H.B.F.; McAleer, M.J.; Legerstee, R. (Erasmus Econometric Institute)
    Abstract: Macroeconomic forecasts are frequently produced, published, discussed and used. The formal evaluation of such forecasts has a long research history. Recently, a new angle to the evaluation of forecasts has been addressed, and in this review we analyse some recent developments from that perspective. The literature on forecast evaluation predominantly assumes that macroeconomic forecasts are generated from econometric models. In practice, however, most macroeconomic forecasts, such as those from the IMF, World Bank, OECD, Federal Reserve Board, Federal Open Market Committee (FOMC) and the ECB, are based on econometric model forecasts as well as on human intuition. This seemingly inevitable combination renders most of these forecasts biased and, as such, their evaluation becomes non-standard. In this review, we consider the evaluation of two forecasts in which: (i) the two forecasts are generated from two distinct econometric models; (ii) one forecast is generated from an econometric model and the other is obtained as a combination of a model, the other forecast, and intuition; and (iii) the two forecasts are generated from two distinct combinations of different models and intuition. It is shown that alternative tools are needed to compare and evaluate the forecasts in each of these three situations. These alternative techniques are illustrated by comparing the forecasts from the Federal Reserve Board and the FOMC on inflation, unemployment and real GDP growth.
    Keywords: macroeconomic forecasts;econometric models;human intuition;biased forecasts;forecast performance;forecast evaluation;forecast comparison
    Date: 2010–03–30
  2. By: Michael McAleer (Econometric Institute, Erasmus University Rotterdam); Marcelo Cunha Medeiros (Department of Economics PUC-Rio)
    Abstract: In this paper we consider a nonlinear model based on neural networks as well as linear models to forecast the daily volatility of the S&P 500 and FTSE 100 indexes. As a proxy for daily volatility, we consider a consistent and unbiased estimator of the integrated volatility that is computed from high frequency intra-day returns. We also consider a simple algorithm based on bagging (bootstrap aggregation) in order to specify the models analyzed in this paper.
    Keywords: Financial econometrics, volatility forecasting, neural networks, nonlinear models, realized volatility, bagging.
    Date: 2010–03
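    As background for readers unfamiliar with the proxy, the realized-volatility measure described in the abstract can be sketched as a sum of squared intraday log returns. This is the generic textbook construction, not necessarily the authors' exact estimator:

    ```python
    import numpy as np

    def realized_volatility(intraday_prices):
        """Realized variance: the sum of squared intraday log returns.

        Under standard assumptions this is a consistent estimator of the
        day's integrated variance as the sampling frequency increases.
        """
        log_prices = np.log(np.asarray(intraday_prices, dtype=float))
        returns = np.diff(log_prices)   # intraday log returns
        return np.sum(returns ** 2)     # realized variance

    # Example: synthetic 5-minute prices over one trading day
    rng = np.random.default_rng(0)
    prices = 100.0 * np.exp(np.cumsum(0.001 * rng.standard_normal(78)))
    rv = realized_volatility(prices)
    ```

    In practice, the sampling frequency (here 5 minutes) is chosen to balance the consistency of the estimator against microstructure noise in very high-frequency returns.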
  3. By: Chia-Lin Chang (Department of Applied Economics, National Chung Hsing University); Philip Hans Franses (Erasmus School of Economics, Erasmus University Rotterdam); Michael McAleer (Econometric Institute, Erasmus University Rotterdam and Tinbergen Institute)
    Abstract: Macro-economic forecasts typically involve a model component, which is replicable, as well as intuition, which is non-replicable. Intuition is expert knowledge possessed by a forecaster. If forecast updates are progressive, they should become more accurate, on average, as the actual value is approached; otherwise, the updates are neutral. The paper proposes a methodology to test whether forecast updates are progressive and whether econometric models are useful in updating forecasts. The data set for the empirical analysis is for Taiwan, where three decades of quarterly data are available on forecasts and updates of the inflation rate and real GDP growth rate. The actual series for both the inflation rate and the real GDP growth rate are always released by the government one quarter after the release of the revised forecast, and the actual values are not revised after they have been released. Our empirical results suggest that the forecast updates for Taiwan are progressive, and can be explained predominantly by intuition. Additionally, the one-, two- and three-quarter forecast errors are predictable using publicly available information for both the inflation rate and real GDP growth rate, which suggests that the forecasts can be improved.
    Date: 2010–04
  4. By: Athanasopoulos, George; Guillén, Osmani Teixeira de Carvalho; Issler, João Victor; Vahid, Farshid
    Abstract: We study the joint determination of the lag length, the dimension of the cointegrating space and the rank of the matrix of short-run parameters of a vector autoregressive (VAR) model using model selection criteria. We consider model selection criteria which have data-dependent penalties as well as the traditional ones. We suggest a new two-step model selection procedure which is a hybrid of traditional criteria and criteria with data-dependent penalties, and we prove its consistency. Our Monte Carlo simulations measure the improvements in forecasting accuracy that can arise from the joint determination of lag length and rank using our proposed procedure, relative to an unrestricted VAR or a cointegrated VAR estimated by the commonly used procedure of selecting the lag length only and then testing for cointegration. Two empirical applications, forecasting Brazilian inflation and U.S. macroeconomic aggregates growth rates respectively, show the usefulness of the model-selection strategy proposed here. The gains in different measures of forecasting accuracy are substantial, especially for short horizons.
    Date: 2010–03–29
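    For readers new to the area, the traditional lag-selection step that the paper's hybrid procedure builds on can be sketched as a simple BIC search over VAR lag lengths. This is an illustration only; the paper's criteria with data-dependent penalties and its two-step procedure are more involved:

    ```python
    import numpy as np

    def var_bic_lag(y, max_lag=4):
        """Select the VAR lag length by BIC over a common estimation
        sample -- the conventional first step in VAR specification.

        y: (T, m) array of time series.
        """
        y = np.asarray(y, dtype=float)
        T, m = y.shape
        rows = T - max_lag                  # common sample for all lags
        best_lag, best_bic = 1, np.inf
        for p in range(1, max_lag + 1):
            # Regressor matrix [1, y_{t-1}, ..., y_{t-p}]
            X = np.column_stack([np.ones(rows)] +
                                [y[max_lag - j:T - j] for j in range(1, p + 1)])
            Y = y[max_lag:]
            B, *_ = np.linalg.lstsq(X, Y, rcond=None)
            resid = Y - X @ B
            sigma = resid.T @ resid / rows  # residual covariance matrix
            n_params = m * (1 + m * p)
            bic = np.log(np.linalg.det(sigma)) + n_params * np.log(rows) / rows
            if bic < best_bic:
                best_lag, best_bic = p, bic
        return best_lag
    ```

    The abstract's point is that selecting the lag length first and testing for cointegration afterwards, as sketched here, can be dominated by determining lag length and rank jointly.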
  5. By: Anindya Banerjee; Victor Bystrov; Paul Mizen
    Abstract: The recent financial crisis has underlined that banks no longer simply accumulate deposits and lend a fraction to their clients. Instead they use interbank markets and structured finance to increase their loan book. This has implications for the understanding of interest rate pass through since a large number of interest rates and macro variables influence the retail rates they set on loans and deposits. This paper uses Stock-Watson factor forecasts to predict market interest rates which are then used as the basis for setting retail rates. We find a significant role for forecasts of future interest rates in determining short- and long-run pass through, and we argue that models which do not include future rates are misspecified.
    Keywords: forecasting, factor models, interest rate pass-through
    Date: 2010–01–27
  6. By: Jürgen Bierbaumer-Polly
    Abstract: This paper describes the methodology used for constructing a composite leading indicator for the Austrian economy (CLI-AT). First, a selection was made of those monthly indicators which overall fare best in showing a "steady" leading behaviour with respect to the Austrian business cycle. The analysis was carried out by means of statistical methods from the time-series domain as well as from the frequency domain. Thirteen series were finally classified as leading indicators; among them, business and consumer survey data form the most prevalent group. Second, I construct the CLI-AT based on the de-trended, normalised and weighted leading series. For the de-trending procedure I use the HP filter, and the weights have been obtained by means of principal components analysis. Further, idiosyncratic elements in the CLI-AT have been removed, along with checking the endpoint bias due to the HP filter smoothing procedure. I find that the "real-time" smoothed CLI-AT does not exhibit severe phase shifts compared to a full-sample estimate. Next, I show that the CLI-AT provides a useful instrument for assessing the current and likely future direction of the Austrian business cycle. Over the period 1988-2008, the CLI-AT indicates cyclical turns with a "steady" lead in the majority of cases. Finally, an out-of-sample forecasting exercise shows that the CLI-AT carries important business cycle information and that its inclusion in a forecasting model can increase the projection quality of the underlying reference series.
    Keywords: Business cycles, turning points, cyclical analysis, leading indicators, composite indicators, HP filter, principal components, out-of-sample forecasting
    Date: 2010–04–06
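    The de-trend/normalise/weight pipeline the abstract outlines can be sketched with standard building blocks: an HP filter for de-trending and the first principal component for the weights. This is a generic illustration, not the author's implementation:

    ```python
    import numpy as np

    def hp_filter(y, lam=14400.0):
        """Hodrick-Prescott filter: returns (trend, cycle).

        Solves (I + lam * D'D) trend = y, where D is the second-difference
        operator; lam = 14400 is the conventional monthly smoothing value.
        """
        y = np.asarray(y, dtype=float)
        n = len(y)
        D = np.diff(np.eye(n), n=2, axis=0)  # (n-2, n) second differences
        trend = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
        return trend, y - trend

    def composite_indicator(series):
        """De-trend, normalise and weight component series by the first
        principal component, as in the pipeline the abstract outlines."""
        cycles = np.column_stack([hp_filter(s)[1] for s in series])
        z = (cycles - cycles.mean(axis=0)) / cycles.std(axis=0)  # normalise
        # First principal component of the correlation matrix gives weights
        eigvals, eigvecs = np.linalg.eigh(np.corrcoef(z, rowvar=False))
        w = np.abs(eigvecs[:, -1])           # vector for largest eigenvalue
        w /= w.sum()
        return z @ w
    ```

    The endpoint bias discussed in the abstract arises because the HP trend near the end of the sample changes as new observations arrive, which is why the paper checks "real-time" against full-sample estimates.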
  7. By: Sylvain Leduc; Keith Sill
    Abstract: Using survey-based measures of future U.S. economic activity from the Livingston Survey and the Survey of Professional Forecasters, we study how changes in expectations, and their interaction with monetary policy, contribute to fluctuations in macroeconomic aggregates. We find that changes in expected future economic activity are a quantitatively important driver of economic fluctuations: a perception that good times are ahead typically leads to a significant rise in current measures of economic activity and inflation. We also find that the short-term interest rate rises in response to expectations of good times as monetary policy tightens. Our results provide quantitative evidence on the importance of expectations-driven business cycles and on the role that monetary policy plays in shaping them.
    Keywords: Economic forecasting ; Monetary policy ; Business cycles
    Date: 2010
  8. By: Michael Ryan; Kam Leong Szeto (The Treasury)
    Abstract: The Treasury is the New Zealand government’s lead advisor on economic and financial issues. Part of this advice consists of providing the government with forecasts of economic and fiscal variables. Economic forecasts are important, not only as a basis for forecasts of tax revenue, but also in informing the government of the macroeconomic environment in which proposed fiscal policy settings will operate. The New Zealand Treasury Model (NZTM) is an important part of the economic forecasting process at the Treasury. This paper has three purposes. The first is to give readers an idea of the key features of NZTM. The second is to detail major changes to the model since the last published documentation of the model (Szeto, 2002). These model developments have enhanced NZTM to provide more detailed forecasts. Key changes include the disaggregation of deflators into the various expenditure GDP components, the introduction of consumption and capital goods imports into the model (rather than just treating them as intermediate imports) and the disaggregation of the inflation equation into tradable and non-tradable components. The final purpose of this paper is to outline briefly NZTM’s role in the Treasury’s forecasting process.
    Keywords: Computable general equilibrium model; New Zealand economy; forecasting
    JEL: C68 E17 E2
    Date: 2009–10
  9. By: Herman Stekler (Department of Economics The George Washington University)
    Abstract: Over the past 50 or so years, I have been concerned with the quality of economic forecasts and have written both about the procedures for evaluating these predictions and the results that were obtained from these evaluations. In this paper I provide some perspectives on the issues involved in judging the quality of these forecasts. These include the reasons for evaluating forecasts, the questions that have been asked in these evaluations, the statistical tools that have been used, and the generally accepted results. (I do also present some new material that has not yet been published.) I do this in two parts: first focusing on short-run GDP and inflation predictions and then turning to labor market forecasts.
    Date: 2010–03
  10. By: Ricardo M. Sousa (Universidade do Minho - NIPE)
    Abstract: I assess the relative performance of several empirical proxies developed in the literature of asset pricing to capture time-variation in expected future returns using data for the U.S. and the U.K. I show that the wealth composition risk by Sousa (2010) exhibits strong forecasting power.
    Keywords: asset pricing, wealth, empirical proxies, expected returns.
    JEL: E21 E44 D12
    Date: 2010
  11. By: Yu-chin Chen (University of Washington); Kenneth Rogoff (Harvard University); Barbara Rossi (Duke University)
    Abstract: Volatile and rising agricultural prices place significant strain on the global fight against poverty. An accurate reading of future food price movements would thus be an invaluable budgetary planning tool for government agencies and food aid programs aimed at alleviating hunger. Using the asset-pricing approach developed in Chen, Rogoff and Rossi (2010), we show that information from the currency and equity markets of several commodity-exporting economies can offer powerful help in forecasting world agricultural prices. Our formulation builds upon the notion that because these countries' currency and equity valuations depend on the world price of their commodity exports, market participants would incorporate expected future commodity price movements into the current values of these assets. As the exchange rate and equity markets are typically much more fluid than the agri-commodity markets (where prices tend to be more constrained by current supply and demand conditions), these asset prices can signal future agricultural price dynamics beyond information contained in the agri-commodity prices themselves. Our findings complement forecast methods based on structural factors such as supply, demand, and storage considerations.
    Date: 2009–03
  12. By: Julia Schaumburg
    Abstract: This paper studies the performance of nonparametric quantile regression as a tool to predict Value at Risk (VaR). The approach is flexible as it requires no assumptions on the form of return distributions. A monotonized double kernel local linear estimator is applied to estimate moderate (1%) conditional quantiles of index return distributions. For extreme (0.1%) quantiles, where particularly few data points are available, we propose to combine nonparametric quantile regression with extreme value theory. The out-of-sample forecasting performance of our methods turns out to be clearly superior to different specifications of the Conditionally Autoregressive VaR (CAViaR) models.
    Keywords: Value at Risk, nonparametric quantile regression, risk management, extreme value theory, monotonization, CAViaR
    JEL: C14 C22 C52 C53
    Date: 2010–02
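    The general idea of refining extreme quantile estimates with extreme value theory can be illustrated with the standard Hill/Weissman tail extrapolation. The paper's monotonized double kernel local linear estimator is more sophisticated, so treat this as a sketch of the underlying principle only:

    ```python
    import numpy as np

    def evt_extreme_var(losses, p_extreme=0.001, k=50):
        """Extrapolate an extreme loss quantile with the Hill/Weissman
        estimator, a textbook EVT device for tails where data are scarce.

        losses: positive loss observations; k: number of upper order
        statistics used for the Hill tail-index estimate.
        """
        x = np.sort(np.asarray(losses, dtype=float))[::-1]  # descending
        n = len(x)
        # Hill estimator of the tail index from the k largest losses
        gamma = np.mean(np.log(x[:k]) - np.log(x[k]))
        # Weissman extrapolation from the (k/n) tail to the p_extreme tail
        return x[k] * (k / (n * p_extreme)) ** gamma
    ```

    The appeal mirrors the paper's motivation: at the 0.1% level there are too few observations to estimate the quantile directly, so the tail shape estimated from moderate extremes is used to extrapolate further out.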
  13. By: A. SCHNYTZER; M. LAMERS; V. MAKROPOULOU
    Abstract: This paper uses a new variable based on estimates of insider trading to forecast the outcome of horse races. We base our analysis on Schnytzer, Lamers and Makropoulou (2008), who showed that insider trading in the 1997-1998 Australian racetrack betting market represents somewhere between 20 and 30 percent of all trading in this market. They show that the presence of insiders leads opening prices to deviate from true winning probabilities. Under these circumstances, forecasting of race outcomes should take into account an estimate of the extent of insider trading per horse. We show that the added value of this new variable is sufficient to reduce, though not eliminate, the losses incurred when only prices are taken into account. Since the only variables taken into account in either Schnytzer, Lamers and Makropoulou (2008) or this paper are price data, this is tantamount to a demonstration that the market is, in practice, weak-form efficient.
    Date: 2009–12
  14. By: Heij, C.; Franses, Ph.H.B.F. (Erasmus Econometric Institute)
    Abstract: Pre-election polls can suffer from survey effects. For example, surveyed individuals can become more aware of the upcoming election so that they become more inclined to vote. These effects may depend on factors like political orientation and prior intention to vote, and this may cause biases in forecasts of election outcomes. We advocate a simple methodology to estimate the magnitude of these survey effects, which can be taken into account when translating future poll results into predicted election outcomes. The survey effects are estimated by collecting survey data both before and after the election. We illustrate our method by means of a field study with data concerning the 2009 European Parliament elections in the Netherlands. Our study provides empirical evidence of significant positive survey effects with respect to voter participation, especially for individuals with low intention to vote. For our data, the overall survey effect on party shares is small. This effect can be more substantial for less balanced survey samples, for example, if political orientation and voting intention are correlated in the sample. We conclude that pre-election polls that do not correct for survey effects will overestimate voter turnout and will have biased party shares.
    Keywords: pre-election polls;survey effects;intention modification;self-prophecy;data collection;turnout forecast;bias correction
    Date: 2010–03–31
  15. By: Costas Milas (Economics Group, Keele Management School, Keele University, UK and Rimini Centre for Economic Analysis, Rimini, Italy); Ruthira Naraidoo (Department of Economics, University of Pretoria)
    Abstract: We explore how the ECB sets interest rates in the context of policy reaction functions. Using both real-time and revised information, we consider linear and nonlinear policy functions in inflation, output and a measure of financial conditions. We find that amongst Taylor rule models, linear and nonlinear models are empirically indistinguishable within sample and that model specifications with real-time data provide the best description of in-sample ECB interest rate setting behavior. The 2007-2009 financial crisis witnesses a shift from inflation targeting to output stabilisation and a shift, from an asymmetric policy response to financial conditions at high inflation rates, to a more symmetric response irrespective of the state of inflation. Finally, without imposing an a priori choice of parametric functional form, semiparametric models forecast out-of-sample better than linear and nonlinear Taylor rule models.
    Keywords: monetary policy, nonlinearity, real time data, financial conditions
    JEL: C51 C52 C53 E52 E58
    Date: 2009–10
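    As background, the linear Taylor-rule benchmark against which the paper's nonlinear and semiparametric policy functions are judged takes the familiar textbook form below. The coefficients here are the conventional illustrative values, not the paper's estimates:

    ```python
    def taylor_rate(inflation, output_gap, r_star=2.0, pi_star=2.0,
                    a=0.5, b=0.5):
        """Textbook linear Taylor rule: the policy rate equals the
        equilibrium real rate plus inflation, plus responses to the
        inflation gap and the output gap (illustrative coefficients)."""
        return r_star + inflation + a * (inflation - pi_star) + b * output_gap
    ```

    For example, with inflation at target and a zero output gap the rule prescribes a nominal rate of 4 percent; the paper's nonlinear variants let the response coefficients vary with financial conditions and the state of inflation.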

This nep-for issue is ©2010 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.