nep-for New Economics Papers
on Forecasting
Issue of 2007‒08‒08
fifteen papers chosen by
Rob J Hyndman
Monash University

  1. Optimal combination forecasts for hierarchical time series By Rob J. Hyndman; Roman A. Ahmed; George Athanasopoulos
  2. Forecasting VARMA processes using VAR models and subspace-based state space models By Izquierdo, Segismundo S.; Hernández, Cesáreo; del Hoyo, Juan
  3. Two canonical VARMA forms: Scalar component models vis-à-vis the Echelon form By George Athanasopoulos; D.S. Poskitt; Farshid Vahid
  4. Exchange Rate Uncertainty and Trade Growth - A Comparison of Linear and Nonlinear (Forecasting) Models By Helmut Herwartz; Henning Weber
  5. Globalisation and Technology Intensity as Determinants of Exports By James Mitchell
  6. Modeling the supply and demand for tourism: a fully identified VECM approach By Allison Zhou; Carl Bonham; Byron Gangnes
  7. IMF Drawing Programs: Participation Determinants and Forecasting By Eugenio Cerutti
  8. Irregularly Spaced Intraday Value at Risk (ISIVaR) Models: Forecasting and Predictive Abilities By Christophe Hurlin; Gilbert Colletaz; Sessi Tokpavi
  9. Macro Factors and the Brazilian Yield Curve with no Arbitrage Models By Marcos S Matsumura; Ajax R. B. Moreira
  10. Financial Incentives and Cognitive Abilities: Evidence from a Forecasting Task with Varying Cognitive Load By Ondrej Rydval
  11. The Instrument-Rate Projection under Inflation Targeting: The Norwegian Example By Lars E.O. Svensson
  12. An Empirical Investigation of Value-at-Risk in Long and Short Trading Positions By Kulp-Tåg, Sofie
  13. Forecasting Cross-Section Stock Returns using The Present Value Model By George Bulkley; Richard Holt
  14. Robust M-estimation of multivariate conditionally heteroscedastic time series models with elliptical innovations By Boudt, Kris; Croux, Christophe
  15. Measuring Monetary Policy Stance in Brazil By Brisne J. V. Céspedes; Elcyon C. R. Lima; Alexis Maka; Mário J. C. Mendonça

  1. By: Rob J. Hyndman; Roman A. Ahmed; George Athanasopoulos
    Abstract: In many applications, there are multiple time series that are hierarchically organized and can be aggregated at several different levels in groups based on products, geography or some other features. We call these "hierarchical time series". They are commonly forecast using either a "bottom-up" or a "top-down" method. In this paper we propose a new approach to hierarchical forecasting which provides optimal forecasts that are better than forecasts produced by either a top-down or a bottom-up approach. Our method is based on independently forecasting all series at all levels of the hierarchy and then using a regression model to optimally combine and reconcile these forecasts. The resulting revised forecasts add up appropriately across the hierarchy, are unbiased and have minimum variance amongst all combination forecasts under some simple assumptions. We show in a simulation study that our method performs well compared to the top-down approach and the bottom-up method. It also allows us to construct prediction intervals for the resultant forecasts. Finally, we apply the method to forecasting Australian tourism demand where the data are disaggregated by purpose of visit and geographical region.
    Keywords: Bottom-up forecasting, combining forecasts, GLS regression, hierarchical forecasting, Moore-Penrose inverse, reconciling forecasts, top-down forecasting.
    JEL: C53 C32 C23
    Date: 2007–07
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2007-9&r=for
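The combination step described in this abstract can be sketched numerically. Below is a minimal illustration of the OLS special case of the reconciliation (GLS with an identity error covariance), assuming a hypothetical two-level hierarchy with one total and two bottom-level series; all numbers are made up:

```python
import numpy as np

# Summing matrix S for a hierarchy where Total = A + B.
# Rows: [Total, A, B]; columns: bottom-level series [A, B].
S = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])

# Independent "base" forecasts for every node (hypothetical values);
# note they are incoherent: 100 != 52 + 45.
y_hat = np.array([100.0, 52.0, 45.0])

# OLS reconciliation: beta = (S'S)^{-1} S' y_hat, reconciled = S beta.
# pinv(S) equals (S'S)^{-1} S' here because S has full column rank.
beta = np.linalg.pinv(S) @ y_hat
y_tilde = S @ beta

print(y_tilde)  # reconciled forecasts, coherent across the hierarchy
print(np.isclose(y_tilde[0], y_tilde[1] + y_tilde[2]))
```

The paper's GLS version weights the combination by the forecast error covariance; the identity-covariance case above only shows the mechanics of forcing forecasts to add up.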
  2. By: Izquierdo, Segismundo S.; Hernández, Cesáreo; del Hoyo, Juan
    Abstract: VAR modelling is a frequent technique in econometrics for linear processes. VAR modelling offers some desirable features such as relatively simple procedures for model specification (order selection) and the possibility of obtaining quick non-iterative maximum likelihood estimates of the system parameters. However, if the process under study follows a finite-order VARMA structure, it cannot be equivalently represented by any finite-order VAR model. On the other hand, a finite-order state space model can represent a finite-order VARMA process exactly, and, for state-space modelling, subspace algorithms allow for quick and non-iterative estimates of the system parameters, as well as for simple specification procedures. Given the previous facts, we check in this paper whether subspace-based state space models provide better forecasts than VAR models when working with VARMA data generating processes. In a simulation study we generate samples from different VARMA data generating processes, obtain VAR-based and state-space-based models for each generating process and compare the predictive power of the obtained models. Different specification and estimation algorithms are considered; in particular, within the subspace family, the CCA (Canonical Correlation Analysis) algorithm is the selected option to obtain state-space models. Our results indicate that when the MA parameter of an ARMA process is close to 1, the CCA state space models are likely to provide better forecasts than the AR models. We also conduct a practical comparison (for two cointegrated economic time series) of the predictive power of Johansen restricted-VAR (VEC) models with the predictive power of state space models obtained by the CCA subspace algorithm, including a density forecasting analysis.
    Keywords: subspace algorithms; VAR; forecasting; cointegration; Johansen; CCA
    JEL: C53 C5 C51
    Date: 2006–10
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:4235&r=for
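The finding that AR models struggle when the MA parameter is close to 1 has a simple source: the AR(∞) representation of an MA(1) decays slowly, so any finite-order (V)AR truncates away substantial weight. A small illustrative calculation (my own sketch, not from the paper):

```python
import numpy as np

# An invertible MA(1), x_t = e_t + theta * e_{t-1}, has the infinite AR
# representation x_t = sum_{j>=1} pi_j x_{t-j} + e_t with pi_j = -(-theta)^j.
# A finite-order VAR truncates this sum, and the neglected weight grows
# as theta approaches 1.
def ar_weights(theta: float, max_lag: int) -> np.ndarray:
    """AR(inf) coefficients of an MA(1) process, up to max_lag."""
    j = np.arange(1, max_lag + 1)
    return -((-theta) ** j)

for theta in (0.5, 0.95):
    pi = ar_weights(theta, 12)
    # total absolute weight beyond lag 12 (geometric tail, |pi_j| = theta^j)
    tail = theta ** 13 / (1 - theta)
    print(f"theta={theta}: |pi_12|={abs(pi[-1]):.4f}, tail beyond lag 12={tail:.4f}")
```

For theta = 0.5 the lag-12 coefficient is already negligible, while for theta = 0.95 it is still above 0.5, which is why a state-space model (an exact finite representation of the VARMA process) can forecast better than a truncated AR.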
  3. By: George Athanasopoulos; D.S. Poskitt; Farshid Vahid
    Abstract: In this paper we study two methodologies which identify and specify canonical-form VARMA models: (i) an extension of the scalar component methodology, which specifies canonical VARMA models by identifying scalar components through canonical correlation analysis, and (ii) the Echelon form methodology, which specifies canonical VARMA models through the estimation of Kronecker indices. We compare the forms and the methodologies on three levels. First, we present a theoretical comparison. Second, we present a Monte Carlo simulation study that compares the performance of the two methodologies in identifying some pre-specified data generating processes. Lastly, we compare the out-of-sample forecast performance of the two forms when models are fitted to real macroeconomic data.
    Keywords: Echelon form, Identification, Multivariate time series, Scalar component, VARMA model.
    JEL: C32 C51
    Date: 2007–07
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2007-10&r=for
  4. By: Helmut Herwartz; Henning Weber
    Abstract: A huge body of empirical and theoretical literature has emerged on the relationship between foreign exchange (FX) uncertainty and international trade. Empirical findings about the impact of FX uncertainty on trade figures are at best weak and often ambiguous with respect to its direction. Almost all empirical contributions assume and estimate a linear relationship; possible nonlinearity or state dependence of the causal links between FX uncertainty and trade has so far been largely ignored. In addition, widely used regression models have not been evaluated in terms of ex-ante forecasting. In this paper we analyze the impact of FX uncertainty on sectoral categories of multilateral exports and imports for 15 industrialized economies. In particular, we provide a comparison of linear and nonlinear models with respect to ex-ante forecasting. In terms of average ranks of absolute forecast errors, nonlinear models outperform both a common linear model and a specification built on the assumption that FX uncertainty and trade growth are uncorrelated. Our results support the view that the relationship of interest might be nonlinear and, moreover, lacks homogeneity across countries, economic sectors, and between imports and exports.
    Keywords: Exchange Rate Uncertainty, GARCH, Forecasting, International Trade, Nonlinear Models.
    JEL: F14 F17
    Date: 2007–07
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2007-042&r=for
  5. By: James Mitchell
    Abstract: We seek to understand what can be inferred from the movement of and revisions to fixed-event density forecasts. This involves extending efficiency tests used to examine fixed-event forecasts from the point to density case. The extension requires the revision to a density forecast to be reduced to a revision to an event forecast; this is because revisions to the density forecasts themselves, as measured by the KLIC, need not be unpredictable even when the forecast is rational. We consider an application to inflation and output growth density forecasts from the Survey of Professional Forecasters. We find that while fixed-event density forecasts for inflation appear inefficient, there is greater evidence that those for GDP growth are efficient. We also extract information from the fixed-event density forecasts about the SPF's perceptions of the persistence of inflation and output growth. These perceptions are related to the decline in macroeconomic volatility observed in the United States.
    Date: 2007–05
    URL: http://d.repec.org/n?u=RePEc:nsr:niesrd:296&r=for
  6. By: Allison Zhou (Department of Economics and University of Hawaii Economic Research Organization, University of Hawaii at Manoa); Carl Bonham (Department of Economics and University of Hawaii Economic Research Organization, University of Hawaii at Manoa); Byron Gangnes (Department of Economics and University of Hawaii Economic Research Organization, University of Hawaii at Manoa)
    Abstract: Cointegration analysis has gradually appeared in the empirical tourism literature. However, the focus has been exclusively on the demand side, neglecting potentially important supply-side influences and risking endogeneity bias. One reason for this omission may be the difficulty of identifying structural relationships in a system setting. We estimate a vector error correction model of the supply and demand for Hawaii tourism using a theory-directed sequential reduction method suggested by Hall et al. (2002). We compare forecasts for the selected model and for two competing models. Diebold and Mariano (1995) tests for forecast accuracy demonstrate the satisfactory performance of this approach.
    Keywords: catastrophe, Cointegration, Vector error correction model, Identification, Tourism demand and supply analysis, Hawaii
    JEL: C32 L83
    Date: 2007–07–20
    URL: http://d.repec.org/n?u=RePEc:hai:wpaper:200717&r=for
  7. By: Eugenio Cerutti
    Abstract: This paper studies the factors that have influenced countries' participation in IMF drawing programs. IMF drawing programs are defined as the period of a Stand-By Arrangement or an Extended Fund Facilities program during which a country borrows from the Fund. Since this definition excludes precautionary arrangements and periods during which the program went off-track, it provides a closer link to the factors that have influenced the evolution of IMF credit outstanding. The analysis uses quarterly data during the period 1982-2005 and focuses on developing, non-PRGF eligible countries. Country-specific variables (net international reserves and GDP growth), together with a global factor (world GDP growth), are found to be among the most significant determinants of countries' participation in IMF drawing programs. The importance of the global factor is not uniform during the period reviewed. The influence of world GDP growth seems to have been significant during the 1980s debt crises but not since the Mexican crisis in 1994. An out-of-sample forecast evaluation of the period 2004-5 reveals that the model has some predictive power.
    Date: 2007–07–11
    URL: http://d.repec.org/n?u=RePEc:imf:imfwpa:07/152&r=for
  8. By: Christophe Hurlin (LEO - Laboratoire d'économie d'Orleans - [CNRS : UMR6221] - [Université d'Orléans]); Gilbert Colletaz (LEO - Laboratoire d'économie d'Orleans - [CNRS : UMR6221] - [Université d'Orléans]); Sessi Tokpavi (LEO - Laboratoire d'économie d'Orleans - [CNRS : UMR6221] - [Université d'Orléans])
    Abstract: The objective of this paper is to propose a market risk measure defined in price event time and a suitable backtesting procedure for irregularly spaced data. First, we combine Autoregressive Conditional Duration (ACD) models for price movements with a nonparametric quantile estimation to derive a semi-parametric Irregularly Spaced Intraday Value at Risk (ISIVaR) model. This ISIVaR measure provides two pieces of information: the expected duration until the next price event and the associated VaR. Second, we use a GMM approach to develop a backtest and investigate its finite-sample properties through Monte Carlo simulations. Finally, we present an application to two NYSE stocks.
    Keywords: Value at Risk; High-frequency data; ACD models; Irregularly spaced market risk models; Backtesting
    Date: 2007–07–13
    URL: http://d.repec.org/n?u=RePEc:hal:papers:halshs-00162440_v1&r=for
  9. By: Marcos S Matsumura; Ajax R. B. Moreira
    Abstract: We use no-arbitrage models with macro variables to study the interaction between the macroeconomy and the yield curve. This interaction is a key element for monetary policy and for forecasting. The model was used to analyze the Brazilian domestic financial market using a daily dataset and two versions of the model: one in continuous time, estimated by maximum likelihood, and the other in discrete time, estimated by Markov chain Monte Carlo (MCMC). Our objective is threefold: 1) to analyze the determinants of the Brazilian domestic term structure considering nominal shocks; 2) to compare the results of the discrete- and continuous-time versions in terms of fit, forecasting performance and monetary policy analysis; and 3) to evaluate the effect of restrictions on the transition and pricing equations on the model's properties. Our main results are: 1) results from the continuous and discrete versions are qualitatively, and in most cases quantitatively, equivalent; 2) monetary authorities in Brazil are conservative, smoothing short-rate fluctuations; 3) inflation shocks or slope shocks, depending on the model selected, are the main sources of long-run fluctuations in nominal variables; and finally, 4) the no-arbitrage models showed lower forecasting performance than an unrestricted factor model.
    Date: 2006–08
    URL: http://d.repec.org/n?u=RePEc:ipe:ipetds:1210&r=for
  10. By: Ondrej Rydval (Max Planck Institute of Economics, Jena, Germany.)
    Abstract: I examine how financial incentives interact with intrinsic motivation and especially cognitive abilities in explaining heterogeneity in performance. Using a forecasting task with varying cognitive load, I show that the effectiveness of high-powered financial incentives as a stimulator of economic performance can be moderated by cognitive abilities in a causal fashion. Identifying the causality of cognitive abilities is a prerequisite for studying their interaction with financial and intrinsic incentives in a unifying framework, with implications for the design of efficient incentive schemes.
    Keywords: Financial incentives, Cognitive ability, Heterogeneity, Performance, Experiment
    JEL: C81 C91 D83
    Date: 2007–07–18
    URL: http://d.repec.org/n?u=RePEc:jrp:jrpwrp:2007-040&r=for
  11. By: Lars E.O. Svensson (Princeton University, CEPR, and NBER)
    Abstract: The introduction of inflation targeting has led to major progress in practical monetary policy. Recent debate has focused on the interest-rate assumption underlying published projections of inflation and other target variables. This paper discusses the role of alternative interest-rate paths in the monetary-policy decision process and the recent publication by Norges Bank (the central bank of Norway) of optimal interest-rate projections with fan charts.
    Keywords: Forecasts, flexible inflation targeting, optimal monetary policy.
    JEL: E42 E52 E58
    Date: 2006–05
    URL: http://d.repec.org/n?u=RePEc:pri:cepsud:75&r=for
  12. By: Kulp-Tåg, Sofie (Swedish School of Economics and Business Administration)
    Abstract: This paper uses the Value-at-Risk approach to define the risk in both long and short trading positions. The investigation covers several major market indices (Japanese, UK, German and US). The performance of models that take into account skewness and fat tails is compared to that of symmetric models, with respect to both the specific model used to estimate the variance and the distribution of the variance estimate used as input to the VaR estimation. The results indicate that more flexible models do not necessarily perform better in forecasting the VaR; the most probable reason for this is the complexity of these models. A general result is that different methods for estimating the variance are needed at different confidence levels of the VaR and for the different indices. Also, different models should be used for the left and the right tail of the distribution, respectively.
    Keywords: Value-at-Risk; asymmetry; Exponential GARCH; Asymmetric Power ARCH; long-trading; short-trading
    Date: 2007–04–13
    URL: http://d.repec.org/n?u=RePEc:hhb:hanken:0526&r=for
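As a simple point of reference for the long/short distinction (this is plain historical simulation on simulated fat-tailed returns, not the GARCH-family models compared in the paper): a long position loses in the left tail of the return distribution, a short position in the right tail.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical fat-tailed daily returns; a real study would use index data.
returns = rng.standard_t(df=5, size=2000) * 0.01

def var_long_short(returns: np.ndarray, alpha: float = 0.01):
    """Historical-simulation VaR for long and short positions.

    A long position loses when returns are very negative (left tail);
    a short position loses when returns are very positive (right tail).
    """
    var_long = -np.quantile(returns, alpha)      # left-tail loss quantile
    var_short = np.quantile(returns, 1 - alpha)  # right-tail loss quantile
    return var_long, var_short

vl, vs = var_long_short(returns)
print(f"1% VaR long: {vl:.4f}, short: {vs:.4f}")
```

With an asymmetric return distribution the two numbers diverge, which is why the paper fits separate models to the two tails.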
  13. By: George Bulkley; Richard Holt
    Abstract: We contribute to the debate over whether forecastable stock returns reflect an unexploited profit opportunity or rationally reflect risk differentials. We test whether agents could earn excess returns by selecting stocks which have a low market price compared to an estimate of fundamental value obtained from the present value model. The stock-picking criterion is one that agents could actually have implemented in real time. We show that statistically significant, and quantitatively substantial, excess returns are delivered by portfolios of stocks which are cheap relative to our estimate of fundamental value. There is no evidence that the underpriced stocks are relatively risky, so the excess returns cannot easily be interpreted as an equilibrium compensation for risk.
    Keywords: Excess returns, Trading rule, Efficient markets, Present value model, Stock prices.
    JEL: G12 G14
    URL: http://d.repec.org/n?u=RePEc:edn:esedps:163&r=for
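A simple way to see the stock-picking idea is the Gordon growth special case of the present value model (an illustrative assumption on my part; the paper's estimate of fundamental value may be constructed differently): a stock is flagged as cheap when its market price is below the discounted value of its expected dividend stream.

```python
def gordon_value(dividend: float, r: float, g: float) -> float:
    """Present value of a dividend stream growing at rate g,
    discounted at rate r: V = D * (1 + g) / (r - g)."""
    assert r > g, "discount rate must exceed growth rate"
    return dividend * (1 + g) / (r - g)

# Hypothetical inputs: current dividend 2.0, discount rate 8%, growth 3%.
v = gordon_value(2.0, 0.08, 0.03)
price = 35.0  # hypothetical market price
print(v, "cheap" if price < v else "rich")
```

The paper's trading rule ranks stocks by this kind of price-to-fundamental-value comparison and buys the cheap ones.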
  14. By: Boudt, Kris; Croux, Christophe
    Abstract: This paper proposes new methods for the econometric analysis of outlier contaminated multivariate conditionally heteroscedastic time series. Robust alternatives to the Gaussian quasi-maximum likelihood estimator are presented. Under elliptical symmetry of the innovation vector, consistency results for M-estimation of the general conditional heteroscedasticity model are obtained. We also propose a robust estimator for the cross-correlation matrix and a diagnostic check for correct specification of the innovation density function. In a Monte Carlo experiment, the effect of outliers on different types of M-estimators is studied. We conclude with a financial application in which these new tools are used to analyse and estimate the symmetric BEKK model for the 1980-2006 series of weekly returns on the Nasdaq and NYSE composite indices. For this dataset, robust estimators are needed to cope with the outlying returns corresponding to the stock market crash in 1987 and the burst of the dotcom bubble in 2000.
    Keywords: conditional heteroscedasticity; M-estimators; multivariate time series; outliers; quasi-maximum likelihood; robust methods
    JEL: C51 C13 C53 C32
    Date: 2007–07–27
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:4271&r=for
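The motivation for robust estimation can be illustrated with a scalar toy example (not the paper's M-estimator): a few crash-like outliers inflate a Gaussian-style scale estimate, while a median-based estimate is barely affected.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 1000)
x[::100] = 15.0  # inject crash-like outliers, 1% of the sample

sd_gauss = x.std()  # sample standard deviation, badly inflated by outliers

# Median absolute deviation, rescaled by 1.4826 so it estimates the
# standard deviation consistently under normality.
mad = np.median(np.abs(x - np.median(x)))
sd_robust = 1.4826 * mad

print(f"Gaussian scale: {sd_gauss:.3f}, robust scale: {sd_robust:.3f}")
```

The same contamination problem arises in multivariate GARCH-type models, which is why the paper replaces the Gaussian quasi-maximum likelihood estimator with robust M-estimators.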
  15. By: Brisne J. V. Céspedes; Elcyon C. R. Lima; Alexis Maka; Mário J. C. Mendonça
    Abstract: In this article we use the theory of conditional forecasts to develop a new Monetary Conditions Index (MCI) for Brazil and compare it to the ones constructed using the methodologies suggested by Bernanke and Mihov (1998) and Batini and Turnbull (2002). We use the Sims and Zha (1999) and Waggoner and Zha (1999) approaches to develop and compute Bayesian error bands for the MCIs. The new indicator we develop is called the Conditional Monetary Conditions Index (CMCI) and is constructed using, alternatively, Structural Vector Autoregressions (SVARs) and Forward-Looking (FL) models. The CMCI is the forecasted output gap, conditioned on observed values of the nominal interest rate (the Selic rate) and of the real exchange rate. We show that the CMCI, when compared to the MCI developed by Batini and Turnbull (2002), is a better measure of monetary policy stance because it takes into account the endogeneity of the variables involved in the analysis. The CMCI and the Bernanke and Mihov MCI (BMCI), despite conceptual differences, show similarities in their chronology of the stance of monetary policy in Brazil. The CMCI is a smoother version of the BMCI, possibly because the impact of changes in the observed values of the Selic rate is partially compensated by changes in the value of the real exchange rate. According to the last two indicators, Brazilian monetary policy in the 2000:9-2005:4 period was expansionary near election months.
    Date: 2005–10
    URL: http://d.repec.org/n?u=RePEc:ipe:ipetds:1128&r=for

This nep-for issue is ©2007 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.