nep-for New Economics Papers
on Forecasting
Issue of 2014‒12‒19
eleven papers chosen by
Rob J Hyndman
Monash University

  1. A Model Validation Procedure By Julia Polak; Maxwell L. King; Xibin Zhang
  2. Nowcasting and Forecasting the Monthly Food Stamps Data in the US using Online Search Data By Fantazzini, Dean
  3. Modeling Dependence Structure and Forecasting Portfolio Value-at-Risk with Dynamic Copulas By Mario Cerrato; John Crosby; Minjoo Kim; Yang Zhao
  4. Forecast Accuracy Along Booking Profile in the National Railways of an Emerging Asian Economy: Comparison of Different Techniques By Dutta, Goutam; Pachisia, Divya
  5. Modeling and Forecasting Volatility – How Reliable are modern day approaches? By Mehta, Anirudh; Kanishka, Kunal
  6. Understanding Uncertainty Shocks and the Role of Black Swans By Anna Orlik; Laura Veldkamp
  7. EFFICIENCY GAINS IN COTTON PRICE FORECASTING USING DIFFERENT LEVELS OF DATA AGGREGATION By PENA LEVANO, LUIS; Ramirez, Octavio A.
  8. Obtaining superior wind power predictions from a periodic and heteroscedastic Wind Power Prediction Tool By Ambach, Daniel; Croonenbroeck, Carsten
  9. Forecasting Global Equity Indices using Large Bayesian VARs By Florian Huber; Tamas Krisztin; Philipp Piribauer
  10. Variable Selection in Predictive MIDAS Models By C. Marsilli
  11. Growth Expectations, Dividend Yields, and Future Stock Returns By Zhi Da; Ravi Jagannathan; Jianfeng Shen

  1. By: Julia Polak; Maxwell L. King; Xibin Zhang
    Abstract: Statistical models can play a crucial role in decision making. Traditional model validation tests typically make restrictive parametric assumptions about the model under the null and the alternative hypotheses. The majority of these tests examine one type of change at a time. This paper presents a method for determining whether new data continue to support the chosen model. We suggest using simulation and the kernel density estimator instead of assuming a parametric distribution for the data under the null hypothesis. This leads to a more versatile testing procedure, one that can be applied to test different types of models and look for a variety of different types of divergences from the null hypothesis. Such a flexible testing procedure can, in some cases, also replace a range of tests that each target a particular alternative hypothesis. The procedure’s ability to recognize a change in the underlying model is demonstrated through AR(1) and linear models. We examine the power of our procedure to detect changes in the variance of the error term and in the AR coefficient of the AR(1) model. In the linear model, we examine the performance of the procedure when there are changes in the error variance and error distribution, and when an economic cycle is introduced into the model. We find that the procedure has correct empirical size and high power to recognize changes in the data generating process after 10 to 15 new observations, depending on the type and extent of the change.
    Keywords: Chow test, model validation, p-value, multivariate kernel density estimation, structural break
    JEL: C12 C14 C52 C53
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2014-21&r=for
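A minimal Python sketch of the simulate-then-smooth idea in the abstract above, using an AR(1) null model; the summary statistic, sample sizes, and function names are illustrative assumptions, not the authors' code:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

def simulate_ar1(phi, sigma, n):
    """Simulate n observations from the fitted AR(1) null model."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + sigma * rng.standard_normal()
    return x

def validation_pvalue(new_stat, phi, sigma, n, reps=2000):
    """Simulate a summary statistic under the null, smooth its sampling
    distribution with a kernel density estimator, and return a simulated
    p-value for the statistic computed from the new observations."""
    sims = np.array([simulate_ar1(phi, sigma, n).var() for _ in range(reps)])
    kde = gaussian_kde(sims)
    grid = np.linspace(sims.min(), max(sims.max(), new_stat) * 2, 4000)
    mass = kde(grid)
    mass /= mass.sum()
    # One-sided p-value: smoothed null mass at or above the observed statistic.
    return mass[grid >= new_stat].sum()

# New data generated with a larger AR coefficient than the null (0.9 vs 0.5):
new = simulate_ar1(0.9, 1.0, 15)
print(f"simulated p-value: {validation_pvalue(new.var(), 0.5, 1.0, 15):.3f}")
```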
  2. By: Fantazzini, Dean
    Abstract: We propose the use of Google online search data for nowcasting and forecasting the number of food stamps recipients. We perform a large out-of-sample forecasting exercise with almost 3000 competing models and forecast horizons of up to 2 years ahead, and we show that models including Google search data statistically outperform the competing models at all considered horizons. These results also hold under several robustness checks, including alternative keywords, a falsification test, different out-of-sample periods, directional accuracy, and state-level forecasts.
    Keywords: Food Stamps, Supplemental Nutrition Assistance Program, Google, Forecasting, Global Financial Crisis, Great Recession.
    JEL: C22 C53 E27 H53 I32 Q18 R23
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:59696&r=for
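The mechanics of "models including Google search data" can be pictured as augmenting a simple autoregression with a search-volume regressor. A hedged Python sketch with synthetic placeholder series (the paper's actual exercise compares almost 3000 models out of sample):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic placeholders for the real inputs: monthly SNAP (food stamps)
# recipients and a Google search-volume index for a related keyword.
df = pd.DataFrame({
    "recipients": rng.normal(45e6, 1e6, 120),
    "google_svi": rng.uniform(20, 100, 120),
})
df["recipients_lag1"] = df["recipients"].shift(1)
df = df.dropna()

y = df["recipients"]
X_base = sm.add_constant(df[["recipients_lag1"]])               # pure AR baseline
X_aug = sm.add_constant(df[["recipients_lag1", "google_svi"]])  # + search data

base, aug = sm.OLS(y, X_base).fit(), sm.OLS(y, X_aug).fit()
print("RMSE baseline :", np.sqrt(base.mse_resid))
print("RMSE augmented:", np.sqrt(aug.mse_resid))
```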
  3. By: Mario Cerrato; John Crosby; Minjoo Kim; Yang Zhao
    Abstract: We study the asymmetric and dynamic dependence between financial assets and demonstrate, from the perspective of risk management, the economic significance of dynamic copula models. First, we construct stock and currency portfolios sorted on different characteristics (ex ante beta, coskewness, cokurtosis and order flows), and find substantial evidence of dynamic evolution between the high beta (respectively, coskewness, cokurtosis and order flow) portfolios and the low beta (coskewness, cokurtosis and order flow) portfolios. Second, using three different dependence measures, we show the presence of asymmetric dependence between these characteristic-sorted portfolios. Third, we use a dynamic copula framework based on Creal et al. (2013) and Patton (2012) to forecast the portfolio Value-at-Risk of long-short (high minus low) equity and FX portfolios. We use several widely used univariate and multivariate VaR models for the purpose of comparison. Backtesting our methodology, we find that the asymmetric dynamic copula models provide more accurate forecasts, in general, and, in particular, perform much better during the recent financial crises, indicating the economic significance of incorporating dynamic and asymmetric dependence in risk management.
    Keywords: asymmetric dependence, dynamic copulas, tail risk, Value-at-Risk forecasting.
    JEL: C32 C53 G17 G32
    Date: 2014–10
    URL: http://d.repec.org/n?u=RePEc:gla:glaewp:2014_17&r=for
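For intuition, a static Gaussian copula with Student-t marginals can stand in for the dynamic copulas used in the paper. The Python sketch below simulates a one-day 99% VaR for an equally weighted two-asset portfolio; all parameter values are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(42)

def copula_var(rho, df_marg, weights, n_sims=100_000, alpha=0.01):
    """One-day portfolio VaR from a Gaussian copula with Student-t
    marginals -- a static stand-in for the paper's dynamic copulas.
    rho is the copula correlation between the two portfolio legs."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n_sims)
    u = norm.cdf(z)                         # uniform margins via the copula
    returns = t.ppf(u, df=df_marg) * 0.01   # heavy-tailed daily returns
    port = returns @ weights
    return -np.quantile(port, alpha)        # loss quantile, reported as VaR

print("99% VaR:", copula_var(rho=0.6, df_marg=4, weights=np.array([0.5, 0.5])))
```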
  4. By: Dutta, Goutam; Pachisia, Divya
    Abstract: The National Railways of an Emerging Asian Economy (NREAE), the second largest railway network in the world, is facing growing challenges from low-fare airlines. To combat these challenges, NREAE has to adopt revenue management systems in which efficient forecasting plays a crucial role. In this paper, we compare various forecasting techniques for predicting railway bookings on the final day of departure. We use NREAE data from 2005-2008 for a particular railway route, apply time series methods (moving average, exponential smoothing, and Autoregressive Integrated Moving Average (ARIMA)), linear regression, and revenue management techniques (additive, incremental, and multiplicative pickup), and compare the various methods. To make an efficient forecast over a booking horizon, we employ a weighted forecasting method (a blend of time series and revenue management forecasts) and find that it produces an average Mean Absolute Percentage Error (MAPE) of less than 10% for all but one fare class across all days of the week. The advantage of the model is that it produces efficient forecasts by attaching different weights across the booking period.
    URL: http://d.repec.org/n?u=RePEc:iim:iimawp:12916&r=for
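The additive and multiplicative pickup methods mentioned above are commonly written as follows (notation mine, not the paper's): with B_t the bookings in hand t days before departure, B_T the final bookings, and h = 1, ..., H indexing comparable historical departures,

\[
\hat{B}_T^{\text{add}} = B_t + \frac{1}{H}\sum_{h=1}^{H}\left(B_T^{(h)} - B_t^{(h)}\right),
\qquad
\hat{B}_T^{\text{mult}} = B_t \cdot \frac{1}{H}\sum_{h=1}^{H}\frac{B_T^{(h)}}{B_t^{(h)}}.
\]

The additive method adds the average historical pickup to bookings in hand, while the multiplicative method scales bookings in hand by the average historical pickup ratio; the incremental variant is typically built from pickups between successive reading days rather than a single jump to departure.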
  5. By: Mehta, Anirudh; Kanishka, Kunal
    Abstract: This study explores volatility models and evaluates the quality of one-step-ahead volatility forecasts constructed by (1) GARCH, (2) TGARCH, (3) RiskMetrics and (4) historical volatility. The volatility forecasts suggest that TGARCH performs best in terms of MSPE, followed by GARCH, RiskMetrics and historical volatility. In terms of VaR, we test for correct unconditional coverage and independence of violations using Likelihood Ratio tests. The tests suggest that the VaR forecasts at 90% and 95% have desirable properties. Regarding the 99% VaR forecasts, we find significant evidence that none of the models can reliably predict at this confidence level.
    Keywords: Asset pricing, Volatility Forecasting, GARCH, T-GARCH, RiskMetrics, LR ratio, VaR
    JEL: C10 C12 C15 C19 C51 C53 C58
    Date: 2014–11–08
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:59788&r=for
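The unconditional-coverage test referred to above is Kupiec's likelihood-ratio test (the independence test follows Christoffersen). A compact Python version; the example numbers are illustrative:

```python
import numpy as np
from scipy.stats import chi2

def kupiec_lr(violations, n, p):
    """Kupiec's unconditional-coverage LR test: do VaR violations occur
    at the nominal rate p? violations = number of days the loss exceeded
    the VaR forecast out of n backtest days."""
    pi_hat = violations / n
    # Binomial log-likelihoods under the nominal and observed violation rates.
    ll_null = violations * np.log(p) + (n - violations) * np.log(1 - p)
    ll_alt = violations * np.log(pi_hat) + (n - violations) * np.log(1 - pi_hat)
    lr = -2.0 * (ll_null - ll_alt)
    return lr, chi2.sf(lr, df=1)  # asymptotically chi-squared with 1 df

# Example: 18 violations in 1000 days against a 1% VaR (10 expected).
lr, pval = kupiec_lr(violations=18, n=1000, p=0.01)
print(f"LR = {lr:.2f}, p-value = {pval:.3f}")
```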
  6. By: Anna Orlik; Laura Veldkamp
    Abstract: A fruitful emerging literature reveals that shocks to uncertainty can explain asset returns, business cycles and financial crises. The literature equates uncertainty shocks with changes in the variance of an innovation whose distribution is common knowledge. But how do such shocks arise? This paper argues that people do not know the true distribution of macroeconomic outcomes. Like Bayesian econometricians, they estimate a distribution. Using real-time GDP data, we measure uncertainty as the conditional standard deviation of GDP growth, which captures uncertainty about the distribution's estimated parameters. When the forecasting model admits only normally distributed outcomes, we find small, acyclical changes in uncertainty. But when agents can also estimate parameters that regulate skewness, uncertainty fluctuations become large and counter-cyclical. The reason is that small changes in estimated skewness whip around probabilities of unobserved tail events (black swans). The resulting forecasts resemble those of professional forecasters. Our uncertainty estimates reveal that revisions in parameter estimates, especially those that affect the risk of a black swan, explain most of the shocks to uncertainty.
    JEL: C53 E17 E44 G01 G14
    Date: 2014–08
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:20445&r=for
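One way to see why estimated parameters matter for measured uncertainty is the law of total variance (notation mine): conditional on the information set \(\mathcal{I}_t\),

\[
\mathrm{Var}(y_{t+1}\mid\mathcal{I}_t)
= \mathbb{E}_{\theta\mid\mathcal{I}_t}\!\left[\mathrm{Var}(y_{t+1}\mid\theta)\right]
+ \mathrm{Var}_{\theta\mid\mathcal{I}_t}\!\left[\mathbb{E}(y_{t+1}\mid\theta)\right],
\]

so revisions to the posterior over the parameters \(\theta\), including the skewness parameters that govern tail (black swan) probabilities, move the second term even when the shock variance in the first term is stable.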
  7. By: PENA LEVANO, LUIS; Ramirez, Octavio A.
    Keywords: Agribusiness, Agricultural Finance, Research Methods/ Statistical Methods,
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:ags:aaea14:169814&r=for
  8. By: Ambach, Daniel; Croonenbroeck, Carsten
    Abstract: The Wind Power Prediction Tool (WPPT) has successfully been used for accurate wind power forecasts in the short to medium term (up to 12 hours ahead). Since its development about a decade ago, much additional stochastic modeling has been applied to the interdependency of wind power and wind speed. We improve the model in three ways: First, we replace the rather simple Fourier series of the basic model by more general and flexible periodic basis splines (B-splines). Second, we model conditional heteroscedasticity with a threshold-GARCH (TGARCH) model, an aspect entirely left out by the underlying model. Third, we evaluate several distributional forms for the model's error term. While the original WPPT assumes Gaussian errors only, we also investigate whether the errors may follow a Student's t-distribution or a skew t-distribution. In this article we show that our periodic WPPT-CH model improves forecast accuracy significantly compared to the plain WPPT model.
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:zbw:euvwdp:361&r=for
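A standard threshold-GARCH specification of the kind the abstract refers to is the GJR form (the paper's exact variant may differ):

\[
\sigma_t^2 = \omega + \left(\alpha + \gamma\,\mathbb{1}\{\varepsilon_{t-1}<0\}\right)\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2,
\]

where \(\gamma > 0\) lets negative shocks raise the conditional variance more than positive shocks of the same size.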
  9. By: Florian Huber (Department of Economics, Vienna University of Economics and Business); Tamas Krisztin (Department of Socio-Economics, Vienna University of Economics and Business); Philipp Piribauer (Department of Socio-Economics, Vienna University of Economics and Business)
    Abstract: This paper proposes a large Bayesian Vector Autoregressive (BVAR) model with common stochastic volatility to forecast global equity indices. Using a dataset of monthly global stock indices, the BVAR model inherently incorporates co-movements in the stock markets. The time-varying specification of the covariance structure moreover accounts for sudden shifts in the level of volatility. In an out-of-sample forecasting application we show that the BVAR model with stochastic volatility significantly outperforms the random walk in terms of both root mean squared errors and Bayesian log predictive scores. The BVAR model without stochastic volatility, on the other hand, underperforms relative to the random walk. In a portfolio allocation exercise we moreover show that the forecasts obtained from our BVAR model with common stochastic volatility can be used to set up simple investment strategies. Our results indicate that these simple investment schemes outperform a naive buy-and-hold strategy.
    Keywords: BVAR, stochastic volatility, log-scores, equity indices, forecasting
    JEL: C11 C22 C53 E17 G11
    Date: 2014–10
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwwuw:wuwp184&r=for
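The Bayesian log predictive score used for evaluation can be written as (notation mine)

\[
\mathrm{LPS} = \sum_{t \in \mathcal{O}} \log p\!\left(y_{t+h} = y_{t+h}^{\mathrm{obs}} \,\middle|\, \mathcal{I}_t\right),
\]

the sum over the out-of-sample window \(\mathcal{O}\) of the log posterior-predictive density evaluated at the realized index values; higher scores indicate better density forecasts.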
  10. By: C. Marsilli
    Abstract: In short-term forecasting, it is essential to take into account all available information on the current state of economic activity. Yet the fact that various time series are sampled at different frequencies prevents an efficient use of available data. In this respect, the Mixed-Data Sampling (MIDAS) model has proved to outperform existing tools by combining data series of different frequencies. However, major issues remain regarding the choice of explanatory variables. The paper first addresses this point by developing MIDAS-based dimension reduction techniques and by introducing two novel approaches based on either penalized variable selection or Bayesian stochastic search variable selection. These features integrate a cross-validation procedure that allows automatic in-sample selection based on recent forecasting performance. The techniques are then assessed with regard to their power to forecast US economic growth over the period 2000-2013, using daily and monthly data jointly. Our model succeeds in identifying leading indicators and constructing an objective variable selection with broad applicability.
    Keywords: Forecasting, Mixed frequency data, MIDAS, Variable selection, GDP.
    JEL: C53 E37
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:bfr:banfra:520&r=for
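The basic single-indicator MIDAS regression behind these extensions links a low-frequency target to a high-frequency regressor through a parsimonious weight function, often the exponential Almon lag shown here (the paper's exact specification may differ):

\[
y_t = \beta_0 + \beta_1 \sum_{k=0}^{K} w_k(\theta)\, x^{(m)}_{t-k/m} + \varepsilon_t,
\qquad
w_k(\theta) = \frac{\exp(\theta_1 k + \theta_2 k^2)}{\sum_{j=0}^{K} \exp(\theta_1 j + \theta_2 j^2)},
\]

where \(x^{(m)}\) is sampled \(m\) times per low-frequency period (e.g., daily data for quarterly GDP growth); variable selection then amounts to choosing which high-frequency series enter the regression.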
  11. By: Zhi Da; Ravi Jagannathan; Jianfeng Shen
    Abstract: According to the dynamic version of the Gordon growth model, the long-run expected return on stocks, stock yield, is the sum of the dividend yield on stocks plus some weighted average of expected future growth rates in dividends. We construct a measure of stock yield based on sell-side analysts' near-term earnings forecasts that predicts US stock index returns well, with an out-of-sample R-squared that is consistently above 2% at monthly frequency over our sample period. Stock yield also predicts future stock index returns in the US and other G7 countries and returns of US stock portfolios formed by sorting stocks based on firm characteristics, at various horizons. The findings are consistent with a single dominant factor driving expected returns on stocks over different holding periods. That single factor extracted from the cross section of stock yields using the Kelly and Pruitt (2013) partial regressions method predicts stock index returns better. The performance of the van Binsbergen and Koijen (2010) latent factor model for forecasting stock returns improves significantly when stock yield is included as an imperfect observation of expected return on stocks. Consistent with folk wisdom, stock returns are more predictable coming out of a recession. Our measure performs as well in predicting stock returns as the implied cost of capital, another common stock yield measure that uses additional information.
    JEL: G0 G1 G10 G11 G12 G17
    Date: 2014–10
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:20651&r=for
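The static Gordon growth model behind the "stock yield" decomposition: if prices satisfy \(P_t = D_{t+1}/(r-g)\) for a constant expected return \(r\) and dividend growth rate \(g\), then

\[
r = \frac{D_{t+1}}{P_t} + g,
\]

i.e., the expected return equals the dividend yield plus growth. The dynamic version referenced in the abstract replaces the constant \(g\) with a weighted average of expected future dividend growth rates, which the authors proxy using analysts' near-term earnings forecasts.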

This nep-for issue is ©2014 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.