Forecasting
http://lists.repec.org/mailman/listinfo/nep-for
Forecasting 2015-07-04, edited by Rob J Hyndman

Can Oil Prices Help Predict US Stock Market Returns: An Evidence Using a DMA Approach
http://d.repec.org/n?u=RePEc:pra:mprapa:65295&r=for
Crude oil prices have fluctuated wildly since 1973, with a major impact on key macroeconomic variables. Although the relationship between stock market returns and oil price changes has been scrutinized extensively in the literature, the possibility of predicting future stock market returns using oil prices has attracted less attention. This paper investigates the ability of oil prices to predict S&P 500 price index returns alongside other macroeconomic and financial variables. Including all the potential variables in a forecasting model may result in an over-fitted model. Instead, dynamic model averaging (DMA) and dynamic model selection (DMS) are applied, which allow both the best forecasting model and its parameters to change over time. The empirical evidence shows that the DMA/DMS approach leads to significant improvements in forecasting performance in comparison to other forecasting methodologies, and that these models perform better when oil prices are included among the predictors.
By: Naser, Hanan; Alaali, Fatema
Date: 2015-01-19
Keywords: Bayesian methods, Econometric models, Macroeconomic forecasting, Kalman filter, Model selection, Dynamic model averaging, Stock returns predictability, Oil prices

The Informational Content of the Term-Spread in Forecasting the U.S. Inflation Rate: A Nonlinear Approach
http://d.repec.org/n?u=RePEc:pre:wpaper:201548&r=for
The difficulty of modelling inflation and the importance of uncovering its underlying data-generating process are reflected in an ample literature on inflation forecasting. In this paper we evaluate nonlinear machine learning and econometric methodologies in forecasting U.S. inflation based on autoregressive and structural models of the term structure. We employ two nonlinear methodologies: the econometric Least Absolute Shrinkage and Selection Operator (LASSO) and the machine learning Support Vector Regression (SVR) method. The SVR has never before been used in inflation forecasting with the term-spread as a regressor. In doing so, we use a long monthly dataset spanning the period 1871:1 – 2015:3 that covers the entire history of inflation in the U.S. economy. For comparison, we also use OLS regression models as a benchmark. In order to evaluate the contribution of the term-spread to inflation forecasting in different time periods, we measure the out-of-sample forecasting performance of all models using rolling window regressions. Considering various forecasting horizons, the empirical evidence suggests that the structural models do not outperform the autoregressive ones, regardless of the estimation method. Thus, we conclude that the term-spread models are not more accurate than autoregressive ones in inflation forecasting.
By: Periklis Gogas, Theophilos Papadimitriou, Vasilios Plakandaras, Rangan Gupta
Date: 2015-06
Keywords: U.S. Inflation, forecasting, Support Vector Regression, LASSO

Robustness in Foreign Exchange Rate Forecasting Models: Economics-based Modelling After the Financial Crisis
http://d.repec.org/n?u=RePEc:pra:mprapa:65290&r=for
The aim of this article is to analyse the out-of-sample behaviour of a set of statistical and economics-based models when forecasting exchange rates (FX) for the UK, Japan, and the Euro Zone in relation to the US. A special focus is given to the commodity prices boom of 2007-8 and the financial crisis of 2008-9. We analyse the forecasting behaviour of six economic plus three statistical models when forecasting from one up to 60 steps ahead, using a monthly dataset spanning 1981.1 to 2014.6. We first analyse forecasting errors up to mid-2006 and then compare them with those obtained up to mid-2014. Our six economics-based models can be classified into three groups: interest rate spreads, monetary fundamentals, and PPP with global measures. Our results indicate that there are indeed changes in the first-best models when considering the different spans. Interest rate models tend to predict better over the short sample, and also track better when the crisis hit. With the longer sample, the models based on price differentials are more promising, although with heterogeneous results across countries. These results are important since they shed some light on which model specification to use when facing different FX volatility.
By: Medel, Carlos; Camilleri, Gilmour; Hsu, Hsiang-Ling; Kania, Stefan; Touloumtzoglou, Miltiadis
Date: 2015-06-07
Keywords: Foreign exchange rates; Economic forecasting; Financial crisis

Robust Forecast Comparison
http://d.repec.org/n?u=RePEc:rut:rutres:201502&r=for
Forecast accuracy is typically measured in terms of a given loss function. However, as a consequence of the use of misspecified models in multiple model comparisons, relative forecast rankings are loss function dependent. This paper addresses this issue by using a novel criterion for forecast evaluation which is based on the entire distribution of forecast errors. We introduce the concepts of general-loss (GL) forecast superiority and convex-loss (CL) forecast superiority, and we establish a mapping between GL (CL) superiority and first (second) order stochastic dominance. This allows us to develop a forecast evaluation procedure based on an out-of-sample generalization of the tests introduced by Linton, Maasoumi and Whang (2005). The asymptotic null distributions of our test statistics are nonstandard, and resampling procedures are used to obtain the critical values. Additionally, the tests are consistent and have nontrivial local power under a sequence of local alternatives. In addition to the stationary case, we outline theory extending our tests to the case of heterogeneity induced by distributional change over time. Monte Carlo simulations suggest that the tests perform reasonably well in finite samples, and an application to exchange rate data indicates that our tests can help identify superior forecasting models, regardless of loss function.
By: Sainan Jin, Valentina Corradi, Norman Swanson
Date: 2015-05-13
Keywords: convex loss function, empirical processes, forecast superiority, general loss function

Pitfalls and Possibilities in Predictive Regression
http://d.repec.org/n?u=RePEc:cwl:cwldpp:2003&r=for
Financial theory and econometric methodology both struggle to formulate models that are logically sound in reconciling short run martingale behavior for financial assets with predictable long run behavior, leaving much of the research to be empirically driven. The present paper overviews recent contributions to this subject, focusing on the main pitfalls in conducting predictive regression and on some of the possibilities offered by modern econometric methods. The latter options include indirect inference and techniques of endogenous instrumentation that use convenient temporal transforms of persistent regressors. Some additional suggestions are made for bias elimination, quantile crossing amelioration, and control of predictive model misspecification.
By: Peter C. B. Phillips
Date: 2015-06
Keywords: Bias, Endogenous instrumentation, Indirect inference, IVX estimation, Local unit roots, Mild integration, Prediction, Quantile crossing, Unit roots, Zero coverage probability

The information content of money and credit for US activity
http://d.repec.org/n?u=RePEc:ecb:ecbwps:20151803&r=for
We analyse the forecasting power of different monetary aggregates and credit variables for US GDP. Special attention is paid to the influence of the recent financial market crisis. For that purpose, in the first step we use a three-variable single-equation framework with real GDP, an interest rate spread and a monetary or credit variable, at forecasting horizons of one to eight quarters. This first stage thus serves to pre-select the variables with the highest forecasting content. In a second step, we use the selected monetary and credit variables within different VAR models, and compare their forecasting properties against a benchmark VAR model with GDP and the term spread. Our findings suggest that narrow monetary aggregates, as well as different credit variables, comprise useful predictive information for economic dynamics beyond that contained in the term spread. However, this finding only holds true in a sample that includes the most recent financial crisis. Looking forward, an open question is whether this change in the relationship between money, credit, the term spread and economic activity has been the result of a permanent structural break or whether we might go back to the previous relationships.
By: Albuquerque, Bruno; Baumann, Ursel; Seitz, Franz
Date: 2015-06
JEL: E41, E52, E58
Keywords: credit, forecasting, money

A SVAR approach to evaluation of monetary policy in India
http://d.repec.org/n?u=RePEc:ind:igiwpp:2015-016&r=for
After almost 15 years, following the flagship exchange-rate paper written by Kim and Roubini (K&R henceforth), we revisit the widely relevant questions of monetary policy, delayed exchange rate overshooting, the inflation puzzle and the weak monetary transmission mechanism in the Indian context. We further incorporate a superior form of monetary measure, the Divisia monetary aggregate, in the K&R setup. Our paper confirms the efficacy of the K&R contemporaneous restrictions (customized for the Indian economy, a developing G-20 nation, unlike the advanced G-6 nations that K&R worked with), especially when compared with the recursive structure (which is plagued by the price puzzle and the exchange rate puzzle). The importance of bringing money, especially a correctly measured monetary aggregate, back into the exchange rate model is convincingly illustrated when we compare models with no money, simple-sum monetary aggregates and Divisia monetary aggregates, in terms of impulse responses (eliminating some of the persistent puzzles), variance decomposition analysis (the policy variable explaining more of the exchange rate fluctuation) and out-of-sample forecasting (LER forecasting graph). Further, we carry out a flip-flop variance decomposition analysis, which leads us to two important conclusions about the Indian economy: (i) a weak link between the nominal policy variable and real economic activity, and (ii) that the Indian monetary authority has had inflation targeting as one of its primary goals, in tune with the RBI Act. These two main results are robust, holding across different time periods, monetary aggregates and exogenous model setups.
By: William A. Barnett, Soumya Suvra Bhadury, Taniya Ghosh
Date: 2015-06
Keywords: Monetary Policy; Monetary Aggregates; Divisia; Structural VAR; Exchange Rate Overshooting; Liquidity Puzzle; Price Puzzle; Exchange Rate Puzzle; Forward Discount Bias Puzzle

Revisiting the transitional dynamics of business-cycle phases with mixed frequency data
http://d.repec.org/n?u=RePEc:dau:papers:123456789/15246&r=for
This paper introduces a Markov-Switching model where transition probabilities depend on higher frequency indicators and their lags, through polynomial weighting schemes. The MSV-MIDAS model is estimated via maximum likelihood methods. The estimation relies on a slightly modified version of Hamilton's recursive filter. We use Monte Carlo simulations to assess the robustness of the estimation procedure and related test statistics. The results show that ML provides accurate estimates, but they suggest some caution in the tests on the parameters involved in the transition probabilities. We apply this new model to the detection and forecasting of business cycle turning points. We properly detect recessions in the United States and the United Kingdom by exploiting the link between GDP growth and higher frequency variables from financial and energy markets. The term spread is a particularly useful indicator for predicting recessions in the United States, while stock returns have the strongest explanatory power around British turning points.
By: Bessec, Marie
Date: 2015-06
Keywords: Markov-Switching; mixed frequency data; business cycles

A DARE for VaR
http://d.repec.org/n?u=RePEc:dau:papers:123456789/15232&r=for
This paper introduces a new class of models for the Value-at-Risk (VaR) and Expected Shortfall (ES), called the Dynamic AutoRegressive Expectiles (DARE) models. Our approach is based on a weighted average of expectile-based VaR and ES models, i.e. the Conditional Autoregressive Expectile (CARE) models introduced by Taylor (2008a) and Kuan et al. (2009). First, we briefly present the main non-parametric, parametric and semi-parametric estimation methods for VaR and ES. Second, we detail the DARE approach and show how expectiles can be used to estimate quantile risk measures. Third, we use various backtests to compare the DARE approach to other traditional methods for computing VaR forecasts on the French stock market. Finally, we evaluate the impact of several conditional weighting functions and determine the optimal weights in order to dynamically select the most relevant global quantile model.
By: Hamidi, Benjamin; Hurlin, Christophe; Kouontchou, Patrick; Maillet, Bertrand
Date: 2015
Keywords: Expected Shortfall; Value-at-Risk; Expectile; Risk Measures; Backtests

Portfolio optimization using local linear regression ensembles in RapidMiner
http://d.repec.org/n?u=RePEc:arx:papers:1506.08690&r=for
In this paper we implement a Local Linear Regression Ensemble Committee (LOLREC) to predict 1-day-ahead returns of 453 assets from the S&P 500. The estimates and the historical returns of the committees are used to compute the weights of the portfolio over the 453 stocks. The proposed method outperforms benchmark portfolio selection strategies that optimize the growth rate of the capital. We investigate the effect of the algorithm parameter m (the number of selected stocks) on achieved average annual yields. Results suggest the algorithm's practical usefulness in everyday trading.
By: Gabor Nagy, Gergo Barta, Tamas Henk
Date: 2015-06

On the importance of the probabilistic model in identifying the most decisive game in a tournament
http://d.repec.org/n?u=RePEc:cte:wsrepe:ws1514&r=for
Identifying the important matches in international football tournaments is of great relevance for a variety of decision makers such as organizers, team coaches and/or media managers. This paper addresses this issue by analyzing the role of the statistical approach used to estimate game outcomes in the identification of decisive matches in international tournaments for national football teams. We extend the measure of decisiveness proposed by Geenens (2014) in order to predict or evaluate match importance before, during and after a particular game in the tournament. Using information from the 2014 FIFA World Cup, our results suggest that Poisson and kernel regressions significantly outperform the forecasts of ordered probit models. Moreover, we find that the identification of the key, but not most important, matches depends on the model considered. We also apply this methodology to identify the favorite teams and to predict the most important matches in the 2015 Copa America before the start of the competition.
By: Francisco Corona, Juan de Dios Tena, Michael P. Wiper
Date: 2015-06
Keywords: Game importance, Ordered probit model, Entropy, Poisson model, Kernel regression
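As a rough illustration of the kind of Poisson outcome model compared in this last entry, the sketch below converts two scoring rates into win/draw/loss probabilities under independent Poisson scoring. The team scoring rates and goal cutoff are made-up values for illustration, not taken from the paper, and the sketch omits the decisiveness (entropy) measure that the paper builds on top of such probabilities.

```python
from math import exp, factorial

def pois(k, lam):
    """Poisson pmf: P(X = k) for a rate-lam Poisson variable."""
    return lam ** k * exp(-lam) / factorial(k)

def outcome_probs(lam_home, lam_away, max_goals=10):
    """Home-win / draw / away-win probabilities, assuming each team's
    goal count is an independent Poisson variable (truncated at max_goals)."""
    home = draw = away = 0.0
    for h in range(max_goals + 1):
        for a in range(max_goals + 1):
            p = pois(h, lam_home) * pois(a, lam_away)
            if h > a:
                home += p
            elif h == a:
                draw += p
            else:
                away += p
    return home, draw, away

# Hypothetical scoring rates (goals per match), purely illustrative
w, d, l = outcome_probs(1.8, 1.1)
print(f"home win: {w:.3f}, draw: {d:.3f}, away win: {l:.3f}")
```

Match-importance measures such as the one extended here can then be computed by comparing these outcome distributions with and without a given result, so the quality of the underlying probability model (Poisson, kernel, or ordered probit) directly drives which games are flagged as decisive.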