nep-for New Economics Papers
on Forecasting
Issue of 2015‒07‒04
eleven papers chosen by
Rob J Hyndman
Monash University

  1. Can Oil Prices Help Predict US Stock Market Returns: An Evidence Using a DMA Approach By Naser, Hanan; Alaali, Fatema
  2. The Informational Content of the Term-Spread in Forecasting the U.S. Inflation Rate: A Nonlinear Approach By Periklis Gogas; Theophilos Papadimitriou; Vasilios Plakandaras; Rangan Gupta
  3. Robustness in Foreign Exchange Rate Forecasting Models: Economics-based Modelling After the Financial Crisis By Medel, Carlos; Camilleri, Gilmour; Hsu, Hsiang-Ling; Kania, Stefan; Touloumtzoglou, Miltiadis
  4. Robust Forecast Comparison By Sainan Jin; Valentina Corradi; Norman Swanson
  5. Pitfalls and Possibilities in Predictive Regression By Peter C. B. Phillips
  6. The information content of money and credit for US activity By Albuquerque, Bruno; Baumann, Ursel; Seitz, Franz
  7. A SVAR approach to evaluation of monetary policy in India By William A. Barnett; Soumya Suvra Bhadury; Taniya Ghosh
  8. Revisiting the transitional dynamics of business-cycle phases with mixed frequency data By Bessec, Marie
  9. A DARE for VaR By Hamidi, Benjamin; Hurlin, Christophe; Kouontchou, Patrick; Maillet, Bertrand
  10. Portfolio optimization using local linear regression ensembles in RapidMiner By Gabor Nagy; Gergo Barta; Tamas Henk
  11. On the importance of the probabilistic model in identifying the most decisive game in a tournament By Francisco Corona; Juan de Dios Tena; Michael P. Wiper

  1. By: Naser, Hanan; Alaali, Fatema
    Abstract: Crude oil prices have fluctuated wildly since 1973, with major impacts on key macroeconomic variables. Although the relationship between stock market returns and oil price changes has been scrutinized extensively in the literature, the possibility of predicting future stock market returns using oil prices has attracted less attention. This paper investigates the ability of oil prices to predict S&P 500 price index returns with the use of other macroeconomic and financial variables. Including all the potential variables in a forecasting model may result in an over-fitted model, so instead dynamic model averaging (DMA) and dynamic model selection (DMS) are applied, exploiting their ability to let the best forecasting model change over time while the parameters also change. The empirical evidence shows that the DMA/DMS approach leads to significant improvements in forecasting performance over other forecasting methodologies, and that the performance of these models is better when oil prices are included among the predictors.
    Keywords: Bayesian methods, Econometric models, Macroeconomic forecasting, Kalman filter, Model selection, Dynamic model averaging, Stock returns predictability, Oil prices
    JEL: C11 C53 G17 Q43
    Date: 2015–01–19
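The DMA machinery the abstract refers to can be sketched in a few lines: each candidate model's probability is updated recursively from its predictive likelihood, with a forgetting factor so the weights can drift over time (in the spirit of Raftery et al.'s DMA). The data, the two one-predictor models, the crude recursive slope update standing in for a full Kalman step, and the forgetting factor below are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
x1 = rng.normal(size=T)                        # e.g. an oil-price predictor
x2 = rng.normal(size=T)                        # e.g. a macro predictor
y = x1 + rng.normal(scale=0.3, size=T)         # returns truly driven by x1

predictors = [x1, x2]                          # one simple model per predictor
K = len(predictors)
weights = np.full(K, 1.0 / K)                  # model probabilities
alpha = 0.99                                   # forgetting factor

betas = np.zeros(K)
forecasts = np.zeros(T)
for t in range(1, T):
    # combined (weighted) one-step-ahead forecast
    preds = np.array([betas[k] * predictors[k][t] for k in range(K)])
    forecasts[t] = weights @ preds
    # Gaussian predictive likelihood of each model
    lik = np.exp(-0.5 * (y[t] - preds) ** 2)
    # forgetting-factor update of the model weights
    w = weights ** alpha * lik
    weights = w / w.sum()
    # crude recursive slope update (stand-in for the Kalman filtering step)
    for k in range(K):
        betas[k] += 0.1 * (y[t] - betas[k] * predictors[k][t]) * predictors[k][t]

print(weights)   # the weight on the oil-price model should dominate
```

Because the forgetting factor discounts old likelihoods, the weights can shift again if the relevant predictor changes later in the sample, which is exactly the flexibility the abstract highlights.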
  2. By: Periklis Gogas (Department of Economics, Democritus University of Thrace, Greece); Theophilos Papadimitriou (Department of Economics, Democritus University of Thrace, Greece); Vasilios Plakandaras (Department of Economics, Democritus University of Thrace, Greece); Rangan Gupta (Department of Economics, University of Pretoria)
    Abstract: The difficulty of modelling inflation and the importance of uncovering its underlying data generating process are reflected in an ample literature on inflation forecasting. In this paper we evaluate nonlinear machine learning and econometric methodologies in forecasting U.S. inflation based on autoregressive and structural models of the term structure. We employ two nonlinear methodologies: the econometric Least Absolute Shrinkage and Selection Operator (LASSO) and the machine learning Support Vector Regression (SVR) method. The SVR has never been used before in inflation forecasting with the term-spread as a regressor. In doing so, we use a long monthly dataset spanning the period 1871:1 – 2015:3 that covers the entire history of inflation in the U.S. economy. For comparison we also use OLS regression models as a benchmark. In order to evaluate the contribution of the term-spread to inflation forecasting in different time periods, we measure the out-of-sample forecasting performance of all models using rolling window regressions. Across various forecasting horizons, the empirical evidence suggests that the structural models do not outperform the autoregressive ones, regardless of the estimation method. We thus conclude that term-spread models are not more accurate than autoregressive ones in forecasting inflation.
    Keywords: U.S. Inflation, forecasting, Support Vector Regression, LASSO
    JEL: C22 C45 C53 E31 E37
    Date: 2015–06
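The rolling-window, out-of-sample design described above can be illustrated with a minimal sketch: re-estimate an autoregressive benchmark and a term-spread-augmented model on a moving window and compare their one-step-ahead errors. The data here are simulated (with the spread deliberately irrelevant, mirroring the paper's finding), and plain OLS stands in for the LASSO and SVR estimators.

```python
import numpy as np

rng = np.random.default_rng(1)
T, window = 300, 120
spread = rng.normal(size=T)
infl = np.zeros(T)
for t in range(1, T):
    infl[t] = 0.6 * infl[t - 1] + rng.normal(scale=0.5)   # spread is irrelevant here

err_ar, err_spread = [], []
for t in range(window, T - 1):
    s = np.arange(t - window + 1, t + 1)                  # estimation sample
    X_ar = np.column_stack([np.ones(window), infl[s - 1]])
    X_sp = np.column_stack([np.ones(window), infl[s - 1], spread[s - 1]])
    y_s = infl[s]
    b_ar, *_ = np.linalg.lstsq(X_ar, y_s, rcond=None)     # AR(1) benchmark
    b_sp, *_ = np.linalg.lstsq(X_sp, y_s, rcond=None)     # AR(1) + term spread
    f_ar = b_ar @ np.array([1.0, infl[t]])
    f_sp = b_sp @ np.array([1.0, infl[t], spread[t]])
    err_ar.append((infl[t + 1] - f_ar) ** 2)
    err_spread.append((infl[t + 1] - f_sp) ** 2)

rmse_ar = np.sqrt(np.mean(err_ar))
rmse_sp = np.sqrt(np.mean(err_spread))
print(rmse_ar, rmse_sp)
```

Re-estimating on each window, as here, is what lets the comparison speak to different time periods rather than a single full-sample fit.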
  3. By: Medel, Carlos; Camilleri, Gilmour; Hsu, Hsiang-Ling; Kania, Stefan; Touloumtzoglou, Miltiadis
    Abstract: The aim of this article is to analyse the out-of-sample behaviour of a set of statistical and economics-based models when forecasting exchange rates (FX) for the UK, Japan, and the Euro Zone in relation to the US, with a special focus on the commodity price boom of 2007-8 and the financial crisis of 2008-9. We analyse the forecasting behaviour of six economic and three statistical models at horizons from one up to 60 steps ahead, using a monthly dataset spanning 1981.1 to 2014.6. We first analyse forecasting errors up to mid-2006 and then compare them to those obtained up to mid-2014. Our six economics-based models fall into three groups: interest rate spreads, monetary fundamentals, and PPP with global measures. Our results indicate that the first-best models do change across the different spans. Interest rate models tend to predict better over the short sample, and also track the data better when the crisis hit. Over the longer sample the models based on price differentials are more promising, although with heterogeneous results across countries. These results are important since they shed some light on which model specification to use when facing different levels of FX volatility.
    Keywords: Foreign exchange rates; Economic forecasting; Financial crisis
    JEL: C32 C53 E17 E37
    Date: 2015–06–07
  4. By: Sainan Jin (Singapore Management University); Valentina Corradi (University of Surrey); Norman Swanson (Rutgers University)
    Abstract: Forecast accuracy is typically measured in terms of a given loss function. However, as a consequence of the use of misspecified models in multiple model comparisons, relative forecast rankings are loss function dependent. This paper addresses this issue by using a novel criterion for forecast evaluation which is based on the entire distribution of forecast errors. We introduce the concepts of general-loss (GL) forecast superiority and convex-loss (CL) forecast superiority, and we establish a mapping between GL (CL) superiority and first (second) order stochastic dominance. This allows us to develop a forecast evaluation procedure based on an out-of-sample generalization of the tests introduced by Linton, Maasoumi and Whang (2005). The asymptotic null distributions of our test statistics are nonstandard, and resampling procedures are used to obtain the critical values. Additionally, the tests are consistent and have nontrivial local power under a sequence of local alternatives. In addition to the stationary case, we outline theory extending our tests to the case of heterogeneity induced by distributional change over time. Monte Carlo simulations suggest that the tests perform reasonably well in finite samples; and an application to exchange rate data indicates that our tests can help identify superior forecasting models, regardless of loss function.
    Keywords: convex loss function, empirical processes, forecast superiority, general loss function
    JEL: C12 C22
    Date: 2015–05–13
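The paper's central idea, that loss-function-free forecast comparison can be cast as stochastic dominance of the forecast-error distributions, admits a simple illustration: if the absolute errors of model A are first-order stochastically dominated by those of model B, then A is superior under any loss that is increasing in the absolute error. The empirical-CDF check below is a deliberate simplification of the authors' formal test, which uses out-of-sample statistics and resampling for critical values; the error samples are simulated.

```python
import numpy as np

def ecdf(sample, grid):
    """Empirical CDF of `sample` evaluated on `grid`."""
    return np.searchsorted(np.sort(sample), grid, side="right") / len(sample)

rng = np.random.default_rng(2)
err_a = np.abs(rng.normal(scale=0.5, size=5000))   # model A: smaller errors
err_b = np.abs(rng.normal(scale=1.0, size=5000))   # model B: larger errors

grid = np.linspace(0, 4, 200)
# A dominates if its |error| CDF lies everywhere above B's, i.e. A puts
# more probability mass on small errors at every threshold
a_dominates = np.all(ecdf(err_a, grid) >= ecdf(err_b, grid))
print(a_dominates)
```

Mapping dominance of error distributions to first (or, for convex losses, second) order stochastic dominance is what makes the ranking robust to the choice of loss function.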
  5. By: Peter C. B. Phillips (Cowles Foundation, Yale University)
    Abstract: Financial theory and econometric methodology both struggle in formulating models that are logically sound in reconciling short run martingale behaviour for financial assets with predictable long run behaviour, leaving much of the research to be empirically driven. The present paper overviews recent contributions to this subject, focussing on the main pitfalls in conducting predictive regression and on some of the possibilities offered by modern econometric methods. The latter options include indirect inference and techniques of endogenous instrumentation that use convenient temporal transforms of persistent regressors. Some additional suggestions are made for bias elimination, quantile crossing amelioration, and control of predictive model misspecification.
    Keywords: Bias, Endogenous instrumentation, Indirect inference, IVX estimation, Local unit roots, Mild integration, Prediction, Quantile crossing, Unit roots, Zero coverage probability
    JEL: C22 C23
    Date: 2015–06
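The best-known of the pitfalls surveyed above is easy to reproduce: when the predictor is highly persistent and its innovations are correlated with the return shocks, the OLS slope in a predictive regression is biased away from zero even when the true slope is exactly zero (the Stambaugh bias). The simulation below uses illustrative parameter values, not any calibration from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
n_sim, T, rho, corr = 2000, 100, 0.98, -0.9

slopes = np.zeros(n_sim)
for i in range(n_sim):
    # correlated shocks: column 0 drives returns, column 1 drives the predictor
    e = rng.multivariate_normal([0, 0], [[1, corr], [corr, 1]], size=T)
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = rho * x[t - 1] + e[t, 1]        # highly persistent predictor
    r = e[1:, 0]                               # returns: true slope is zero
    X = np.column_stack([np.ones(T - 1), x[:-1]])
    b, *_ = np.linalg.lstsq(X, r, rcond=None)  # predictive regression by OLS
    slopes[i] = b[1]

print(slopes.mean())   # noticeably above zero despite a true slope of zero
```

The bias inherits the sign of the product of the shock correlation and the downward bias in the estimated autoregressive root, which is why the IVX-type instrumentation the abstract mentions is designed around the persistent regressor rather than the return equation.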
  6. By: Albuquerque, Bruno; Baumann, Ursel; Seitz, Franz
    Abstract: We analyse the forecasting power of different monetary aggregates and credit variables for US GDP. Special attention is paid to the influence of the recent financial market crisis. For that purpose, in the first step we use a three-variable single-equation framework with real GDP, an interest rate spread and a monetary or credit variable, in forecasting horizons of one to eight quarters. This first stage thus serves to pre-select the variables with the highest forecasting content. In a second step, we use the selected monetary and credit variables within different VAR models, and compare their forecasting properties against a benchmark VAR model with GDP and the term spread. Our findings suggest that narrow monetary aggregates, as well as different credit variables, comprise useful predictive information for economic dynamics beyond that contained in the term spread. However, this finding only holds true in a sample that includes the most recent financial crisis. Looking forward, an open question is whether this change in the relationship between money, credit, the term spread and economic activity has been the result of a permanent structural break or whether we might go back to the previous relationships.
    Keywords: credit, forecasting, money
    JEL: E41 E52 E58
    Date: 2015–06
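The second-stage exercise described above can be sketched as fitting a small VAR by OLS and comparing its out-of-sample GDP forecasts with those of a benchmark VAR that omits the money/credit variable. The data below are simulated so that the monetary variable genuinely helps; in the paper the variables are real GDP, the term spread and a monetary or credit aggregate.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 240
money = np.zeros(T); gdp = np.zeros(T); spread = rng.normal(size=T)
for t in range(1, T):
    money[t] = 0.8 * money[t - 1] + rng.normal(scale=0.5)
    gdp[t] = 0.4 * gdp[t - 1] + 0.5 * money[t - 1] + rng.normal(scale=0.5)

def var1_forecast_errors(data, split):
    """One-step-ahead GDP forecast errors from a recursively re-estimated VAR(1)."""
    errs = []
    for t in range(split, data.shape[0] - 1):
        Y = data[1:t + 1]                                  # left-hand side
        X = np.column_stack([np.ones(t), data[:t]])        # constant + first lag
        B, *_ = np.linalg.lstsq(X, Y, rcond=None)          # equation-by-equation OLS
        fcast = np.concatenate([[1.0], data[t]]) @ B
        errs.append(data[t + 1, 0] - fcast[0])             # column 0 is GDP
    return np.array(errs)

full = np.column_stack([gdp, spread, money])               # VAR with money
bench = np.column_stack([gdp, spread])                     # benchmark VAR
rmse_full = np.sqrt(np.mean(var1_forecast_errors(full, 120) ** 2))
rmse_bench = np.sqrt(np.mean(var1_forecast_errors(bench, 120) ** 2))
print(rmse_full, rmse_bench)   # the money variable should lower the GDP RMSE
```

Comparing the two RMSEs over the evaluation window is the basic forecast-content test; the paper additionally conditions on whether the sample includes the financial crisis.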
  7. By: William A. Barnett (University of Kansas); Soumya Suvra Bhadury (University of Kansas); Taniya Ghosh (Indira Gandhi Institute of Development Research)
    Abstract: Almost 15 years after the flagship exchange-rate paper by Kim and Roubini (K&R henceforth), we revisit the widely relevant questions of monetary policy, delayed exchange rate overshooting, the inflation puzzle and the weak monetary transmission mechanism in the Indian context. We further incorporate a superior monetary measure, the Divisia monetary aggregate, in the K&R setup. Our paper reconfirms the efficacy of the K&R contemporaneous restrictions (customized for the Indian economy, a developing G-20 nation, unlike the advanced G-6 nations that K&R worked with), especially when compared with a recursive structure, which is plagued by the price puzzle and the exchange rate puzzle. The importance of bringing 'money', especially a correctly measured monetary aggregate, back into the exchange rate model is convincingly illustrated when we compare models with no money, simple-sum monetary aggregates and Divisia monetary aggregates, in terms of impulse responses (eliminating some of the persistent puzzles), variance decomposition analysis (the policy variable explaining more of the exchange rate fluctuation) and out-of-sample forecasting (LER forecasting graph). Further, a flip-flop variance decomposition analysis leads us to two important conclusions about the Indian economy: (i) a weak link between the nominal policy variable and real economic activity, and (ii) that the Indian monetary authority has had inflation targeting among its primary goals, in tune with the RBI Act. These two main results are robust, holding across different time periods, dissimilar monetary aggregates and diverse exogenous model setups.
    Keywords: Monetary Policy; Monetary Aggregates; Divisia; Structural VAR; Exchange Rate Overshooting; Liquidity Puzzle; Price Puzzle; Exchange Rate Puzzle; Forward Discount Bias Puzzle
    JEL: C32 E41 E51 E52 F31 F41 F47
    Date: 2015–06
  8. By: Bessec, Marie
    Abstract: This paper introduces a Markov-switching model in which transition probabilities depend on higher frequency indicators and their lags through polynomial weighting schemes. The MSV-MIDAS model is estimated via maximum likelihood, relying on a slightly modified version of Hamilton’s recursive filter. We use Monte Carlo simulations to assess the robustness of the estimation procedure and related test statistics. The results show that maximum likelihood provides accurate estimates, but they suggest some caution in tests on the parameters involved in the transition probabilities. We apply this new model to the detection and forecasting of business cycle turning points. We properly detect recessions in the United States and the United Kingdom by exploiting the link between GDP growth and higher frequency variables from the financial and energy markets. The term spread is a particularly useful indicator for predicting recessions in the United States, while stock returns have the strongest explanatory power around British turning points.
    Keywords: Markov-Switching; mixed frequency data; business cycles;
    JEL: C22 E32 E37
    Date: 2015–06
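The filtering recursion the abstract refers to can be shown in stylised form: a two-regime Markov-switching mean model whose transition probabilities vary with an exogenous indicator through a logistic function. For brevity the MIDAS polynomial weighting of several high-frequency lags is collapsed to a single indicator, and all numbers are illustrative rather than estimates from the paper.

```python
import numpy as np

def hamilton_filter(y, z, mu, sigma, a, b):
    """Filtered probability of regime 1 ('recession') at each date."""
    T = len(y)
    prob = np.zeros(T)
    p_state = np.array([0.5, 0.5])             # P(S_0 = 0), P(S_0 = 1)
    for t in range(T):
        # time-varying staying probabilities, driven by the indicator z_t
        p00 = 1.0 / (1.0 + np.exp(-(a[0] + b[0] * z[t])))
        p11 = 1.0 / (1.0 + np.exp(-(a[1] + b[1] * z[t])))
        P = np.array([[p00, 1 - p00], [1 - p11, p11]])
        pred = p_state @ P                     # one-step-ahead regime probabilities
        dens = np.exp(-0.5 * ((y[t] - mu) / sigma) ** 2) / sigma
        joint = pred * dens                    # prior x Gaussian likelihood
        p_state = joint / joint.sum()          # Bayes update (filtering step)
        prob[t] = p_state[1]
    return prob

rng = np.random.default_rng(3)
# growth-like series: an 'expansion' half followed by a 'recession' half
y = np.concatenate([rng.normal(1.0, 0.5, 50), rng.normal(-1.0, 0.5, 50)])
z = np.zeros(100)                              # flat indicator for simplicity
prob = hamilton_filter(y, z, mu=np.array([1.0, -1.0]),
                       sigma=np.array([0.5, 0.5]),
                       a=np.array([2.0, 2.0]), b=np.array([1.0, 1.0]))
print(prob[:50].mean(), prob[50:].mean())
```

Summing the filtered likelihood contributions over t gives the log-likelihood that maximum likelihood estimation maximises; the modification the paper makes concerns how z_t aggregates the higher-frequency lags.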
  9. By: Hamidi, Benjamin; Hurlin, Christophe; Kouontchou, Patrick; Maillet, Bertrand
    Abstract: This paper introduces a new class of models for the Value-at-Risk (VaR) and Expected Shortfall (ES), called the Dynamic AutoRegressive Expectiles (DARE) models. Our approach is based on a weighted average of expectile-based VaR and ES models, i.e. the Conditional Autoregressive Expectile (CARE) models introduced by Taylor (2008a) and Kuan et al. (2009). First, we briefly present the main non-parametric, parametric and semi-parametric estimation methods for VaR and ES. Secondly, we detail the DARE approach and show how expectiles can be used to estimate quantile risk measures. Thirdly, we use various backtesting tests to compare the DARE approach to other traditional methods for computing VaR forecasts on the French stock market. Finally, we evaluate the impact of several conditional weighting functions and determine the optimal weights in order to dynamically select the most relevant global quantile model.
    Keywords: Expected Shortfall; Value-at-Risk; Expectile; Risk Measures; Backtests;
    JEL: C14 C15 C50 C61 G11
    Date: 2015
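The expectile that underlies the CARE and DARE models solves an asymmetrically weighted least-squares problem (Newey and Powell, 1987). The unconditional sketch below computes a tau-expectile by iteratively reweighted means, which is the fixed point of that first-order condition; the returns are simulated, and conditional (CARE-type) dynamics are omitted for brevity.

```python
import numpy as np

def expectile(x, tau, n_iter=50):
    """tau-expectile of sample x via iteratively reweighted means."""
    m = x.mean()
    for _ in range(n_iter):
        w = np.where(x > m, tau, 1 - tau)      # asymmetric weights
        m = np.sum(w * x) / np.sum(w)          # weighted-mean fixed point
    return m

rng = np.random.default_rng(4)
returns = rng.normal(0, 0.01, size=10_000)     # simulated daily returns

e_low = expectile(returns, tau=0.01)           # deep left-tail expectile
e_mid = expectile(returns, tau=0.5)            # tau = 0.5 recovers the mean
print(e_low, e_mid)
```

Because each expectile corresponds to a quantile of the same distribution, a tail expectile like e_low can be mapped into a VaR estimate, which is the bridge from CARE models to quantile risk measures that the paper exploits.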
  10. By: Gabor Nagy; Gergo Barta; Tamas Henk
    Abstract: In this paper we implement a Local Linear Regression Ensemble Committee (LOLREC) to predict 1-day-ahead returns of 453 assets from the S&P 500. The estimates and the historical returns of the committees are used to compute the portfolio weights over the 453 stocks. The proposed method outperforms benchmark portfolio selection strategies that optimize the growth rate of capital. We investigate the effect of the algorithm parameter m, the number of selected stocks, on the achieved average annual yields. Results suggest the algorithm's practical usefulness in everyday trading.
    Date: 2015–06
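A committee of regressors for next-day return prediction can be sketched generically: fit several linear models on bootstrap samples of lagged returns and average their predictions. This is a stand-in for the ensemble idea only; the authors' LOLREC design (local linear regression inside RapidMiner) differs in its details, and the data here are simulated.

```python
import numpy as np

rng = np.random.default_rng(5)
T, lags, n_models = 500, 3, 10
r = rng.normal(0, 0.01, size=T)                           # simulated daily returns

# design matrix of lagged returns; row i predicts r[lags + i]
X = np.column_stack([r[lags - 1 - j: T - 1 - j] for j in range(lags)])
y = r[lags:]                                              # next-day return

preds = []
for _ in range(n_models):
    # bootstrap rows, holding out the final observation to forecast
    idx = rng.integers(0, len(y) - 1, size=len(y) - 1)
    b, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    preds.append(X[-1] @ b)                               # each member's forecast

forecast = np.mean(preds)                                 # committee average
print(forecast)
```

Ranking assets by such committee forecasts and keeping the top m is one way a parameter like the paper's m (number of selected stocks) enters the portfolio construction.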
  11. By: Francisco Corona; Juan de Dios Tena; Michael P. Wiper
    Abstract: Identifying the important matches in international football tournaments is of great relevance for a variety of decision makers such as organizers, team coaches and media managers. This paper addresses the issue by analyzing how the statistical approach used to estimate the outcome of a game affects the identification of decisive matches in international tournaments for national football teams. We extend the measure of decisiveness proposed by Geenens (2014) to allow match importance to be predicted or evaluated before, during and after a particular game in the tournament. Using information from the 2014 FIFA World Cup, our results suggest that Poisson and kernel regressions significantly outperform the forecasts of ordered probit models. Moreover, we find that the identification of the key, but not the most important, matches depends on the model considered. We also apply this methodology to identify the favorite teams and to predict the most important matches in the 2015 Copa America before the start of the competition.
    Keywords: Game importance, Ordered probit model, Entropy, Poisson model, Kernel regression
    Date: 2015–06
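A minimal example of the Poisson machinery such models build on: treat each team's goal count as an independent Poisson draw and derive win/draw/loss probabilities for a match by summing over scorelines. The scoring rates below are made up for illustration, not estimates from the 2014 World Cup data, and the decisiveness measure itself is a further layer on top of these outcome probabilities.

```python
import math

def match_probs(lam_home, lam_away, max_goals=10):
    """P(home win), P(draw), P(away win) under independent Poisson scores."""
    def pois(lam, k):
        return math.exp(-lam) * lam ** k / math.factorial(k)
    home = draw = away = 0.0
    for h in range(max_goals + 1):
        for a in range(max_goals + 1):
            p = pois(lam_home, h) * pois(lam_away, a)
            if h > a:
                home += p
            elif h == a:
                draw += p
            else:
                away += p
    return home, draw, away

p_win, p_draw, p_loss = match_probs(1.8, 1.1)   # hypothetical scoring rates
print(p_win, p_draw, p_loss)
```

Truncating at max_goals = 10 leaves negligible probability mass for realistic scoring rates, so the three outcome probabilities sum to one up to rounding.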

This nep-for issue is ©2015 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.