nep-for New Economics Papers
on Forecasting
Issue of 2019‒04‒22
ten papers chosen by
Rob J Hyndman
Monash University

  1. Improving Forecast Accuracy of Financial Vulnerability: PLS Factor Model Approach By Hyeongwoo Kim; Kyunghwan Ko
  2. The Contribution of Jump Signs and Activity to Forecasting Stock Price Volatility By Hizmeri, Rodrigo; Izzeldin, Marwan; Murphy, Anthony; Tsionas, Mike G.
  3. Exploiting ergodicity in forecasts of corporate profitability By Mundt, Philipp; Alfarano, Simone; Milaković, Mishael
  4. Universal features of price formation in financial markets: perspectives from Deep Learning By Justin Sirignano; Rama Cont
  5. Bayesian Risk Forecasting for Long Horizons By Agnieszka Borowska; Lennart Hoogerheide; Siem Jan Koopman
  6. Forecasting Bordeaux wine prices using state-space methods By Stephen Bazen; Jean-Marie Cardebat
  7. Stock Forecasting using M-Band Wavelet-Based SVR and RNN-LSTMs Models By Hieu Quang Nguyen; Abdul Hasib Rahimyar; Xiaodi Wang
  8. Can Google Search Data Help Predict Macroeconomic Series? By Robin Niesert; Jochem Oorschot; Chris Veldhuisen; Kester Brons; Rutger-Jan Lange
  9. Reliable real-time output gap estimates based on a modified Hamilton filter By Quast, Josefine; Wolters, Maik H.
  10. A Dynamic Bayesian Model for Interpretable Decompositions of Market Behaviour By Théophile Griveau-Billion; Ben Calderhead

  1. By: Hyeongwoo Kim; Kyunghwan Ko
    Abstract: We present a factor augmented forecasting model for assessing financial vulnerability in Korea. Dynamic factor models often extract latent common factors from a large panel of time series data via the method of principal components (PC). Instead, we employ the partial least squares (PLS) method, which estimates target-specific common factors by exploiting the covariances between the predictors and the target variable. Applying PLS to 198 monthly macroeconomic time series variables and the Bank of Korea's Financial Stress Index (KFSTI), our PLS factor augmented forecasting models consistently outperformed the random walk benchmark model in out-of-sample prediction exercises at all forecast horizons we considered. Our models also outperformed the autoregressive benchmark model at short-term forecast horizons. We expect our models to provide useful early warning signs of the emergence of systemic risks in Korea's financial markets.
    Keywords: Partial Least Squares; Principal Component Analysis; Financial Stress Index; Out-of-Sample Forecast; RRMSPE
    JEL: C38 C53 E44 E47 G01 G17
    Date: 2019–04
    URL: http://d.repec.org/n?u=RePEc:abn:wpaper:auwp2019-03&r=all
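    A minimal sketch of a PLS factor-augmented forecast in the spirit of the abstract above, built with scikit-learn's PLSRegression on random placeholder data; the two-factor choice, the single autoregressive term and all variable names are assumptions, not the authors' specification.

      # PLS factor-augmented forecast sketch (placeholder data, not the authors' code).
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(0)
      T, N, h, k = 240, 198, 3, 2          # months, predictors, forecast horizon, factors
      X = rng.standard_normal((T, N))      # stand-in for the standardized macro panel
      y = rng.standard_normal(T)           # stand-in for the financial stress index

      # Extract k target-specific factors with PLS on data up to T-h,
      # then forecast y_{t+h} from the factors plus the current value of y.
      pls = PLSRegression(n_components=k).fit(X[:T - h], y[h:])
      factors = pls.transform(X)           # (T x k) PLS factor scores

      Z = np.column_stack([factors[:T - h], y[:T - h]])
      model = LinearRegression().fit(Z, y[h:])
      y_hat = model.predict(np.column_stack([factors[[-1]], [[y[-1]]]]))[0]
      print(f"{h}-month-ahead forecast: {y_hat:.3f}")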
  2. By: Hizmeri, Rodrigo (Lancaster University); Izzeldin, Marwan (Lancaster University); Murphy, Anthony (Federal Reserve Bank of Dallas); Tsionas, Mike G. (Lancaster University)
    Abstract: We document the forecasting gains achieved by incorporating measures of signed, finite and infinite jumps in forecasting the volatility of equity prices, using high-frequency data from 2000 to 2016. We consider the SPY and 20 stocks that vary by sector, volume and degree of jump activity. We use extended HAR-RV models, and consider different frequencies (5, 60 and 300 seconds), forecast horizons (1, 5, 22 and 66 days) and the use of standard and robust-to-noise volatility and threshold bipower variation measures. Incorporating signed finite and infinite jumps generates significantly better real-time forecasts than the HAR-RV model, although no single extended model dominates. In general, standard volatility measures at the 300-second frequency generate the smallest real-time mean squared forecast errors. Finally, the forecasts from simple model averages generally outperform forecasts from the single best model.
    Keywords: Realized Volatility; Signed Jumps; Finite Jumps; Infinite Jumps; Volatility Forecasts; Noise-Robust Volatility; Model Averaging
    JEL: C22 C51 C53 C58
    Date: 2019–03–28
    URL: http://d.repec.org/n?u=RePEc:fip:feddwp:1902&r=all
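    A minimal sketch of an extended HAR-RV regression with signed jump terms, estimated by OLS with statsmodels; the realized-variance and jump series below are random placeholders, and the authors' exact jump decomposition, sampling frequencies and horizons are not reproduced.

      # Extended HAR-RV sketch with signed jump regressors (placeholder data).
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      T = 1000
      rv = pd.Series(np.abs(rng.standard_normal(T)) * 1e-4)    # daily realized variance
      jp = pd.Series(np.abs(rng.standard_normal(T)) * 1e-5)    # positive signed jump variation
      jn = -pd.Series(np.abs(rng.standard_normal(T)) * 1e-5)   # negative signed jump variation

      X = pd.DataFrame({
          "rv_d": rv,                     # daily component
          "rv_w": rv.rolling(5).mean(),   # weekly component
          "rv_m": rv.rolling(22).mean(),  # monthly component
          "jump_pos": jp,
          "jump_neg": jn,
      })
      y = rv.shift(-1).rename("rv_next")                        # 1-day-ahead target

      data = pd.concat([y, X], axis=1).dropna()
      ols = sm.OLS(data["rv_next"], sm.add_constant(data.drop(columns="rv_next"))).fit()
      print(ols.params)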
  3. By: Mundt, Philipp; Alfarano, Simone; Milaković, Mishael
    Abstract: Theory suggests that competition tends to equalize profit rates through the process of capital reallocation, and numerous studies have confirmed that profit rates are indeed persistent and mean-reverting. Recent empirical evidence further shows that fluctuations in the profitability of surviving corporations are well approximated by a stationary Laplace distribution. Here we show that a parsimonious diffusion process of corporate profitability that accounts for all three features of the data achieves better out-of-sample forecasting performance across different time horizons than previously suggested time series and panel data models. As a consequence of replicating the empirical distribution of profit rate fluctuations, the model prescribes a particular strength or speed for the mean-reversion of all profit rates, which leads to superior forecasts of individual time series when we exploit information from the cross-sectional collection of firms. The new model should appeal to managers, analysts, investors and other groups of corporate stakeholders who are interested in accurate forecasts of profitability. To the extent that mean-reversion in profitability is the source of predictable variation in earnings, our approach can also be used in forecasts of earnings and is thus useful for firm valuation.
    Keywords: return on assets, stochastic differential equation, Fokker-Planck equation, superior predictive ability test, model confidence set
    JEL: C21 C22 C53 L10 D22
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:zbw:bamber:147&r=all
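    The abstract does not spell out the diffusion, but a standard example of the kind of process it describes, with constant-speed mean reversion and a Laplace stationary law obtained from the Fokker-Planck equation, is the following (shown purely as an illustration, not necessarily the authors' exact model):

      % Profit rate x_t reverting to the common level m at constant speed theta.
      \begin{align}
        dx_t &= -\theta\,\mathrm{sign}(x_t - m)\,dt + \sigma\,dW_t, \qquad \theta,\sigma > 0, \\
        p_\infty(x) &\propto \exp\!\left(-\frac{2\theta}{\sigma^2}\,\lvert x - m\rvert\right),
      \end{align}
      % i.e. the stationary Fokker-Planck solution is a Laplace density with scale sigma^2/(2 theta),
      % so matching the empirical Laplace width pins down the speed of mean reversion.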
  4. By: Justin Sirignano (UIUC - University of Illinois at Urbana Champaign - University of Illinois at Urbana-Champaign [Urbana]); Rama Cont (LPSM UMR 8001 - Laboratoire de Probabilités, Statistique et Modélisation - UPMC - Université Pierre et Marie Curie - Paris 6 - UPD7 - Université Paris Diderot - Paris 7 - CNRS - Centre National de la Recherche Scientifique)
    Abstract: Using a large-scale Deep Learning approach applied to a high-frequency database containing billions of electronic market quotes and transactions for US equities, we uncover nonparametric evidence for the existence of a universal and stationary price formation mechanism relating the dynamics of supply and demand for a stock, as revealed through the order book, to subsequent variations in its market price. We assess the model by testing its out-of-sample predictions for the direction of price moves given the history of price and order flow, across a wide range of stocks and time periods. The universal price formation model exhibits a remarkably stable out-of-sample prediction accuracy across time, for a wide range of stocks from different sectors. Interestingly, these results also hold for stocks which are not part of the training sample, showing that the relations captured by the model are universal and not asset-specific. The universal model, trained on data from all stocks, outperforms, in terms of out-of-sample prediction accuracy, asset-specific linear and nonlinear models trained on time series of any given stock, showing that the universal nature of price formation weighs in favour of pooling together financial data from various stocks, rather than designing asset- or sector-specific models as is commonly done. Standard data normalizations based on volatility, price level or average spread, or partitioning the training data into sectors or categories such as large/small tick stocks, do not improve training results. On the other hand, inclusion of price and order flow history over many past observations improves forecasting performance, showing evidence of path-dependence in price dynamics.
    Date: 2018–03–30
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-01754054&r=all
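    The abstract does not describe the network in detail; the following is only a generic sketch of an LSTM classifier for the direction of the next price move from a window of order-book features, with the feature count, window length, three-class labelling and all data being placeholder assumptions rather than the authors' architecture.

      # Generic LSTM direction-of-move classifier sketch (PyTorch, placeholder data).
      import torch
      import torch.nn as nn

      class DirectionLSTM(nn.Module):
          def __init__(self, n_features=40, hidden=64):
              super().__init__()
              self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
              self.head = nn.Linear(hidden, 3)        # down / flat / up

          def forward(self, x):                       # x: (batch, time, features)
              out, _ = self.lstm(x)
              return self.head(out[:, -1])            # logits for the next move

      model = DirectionLSTM()
      x = torch.randn(32, 100, 40)                    # 32 samples of 100 past order-book states
      y = torch.randint(0, 3, (32,))                  # placeholder direction labels
      loss = nn.CrossEntropyLoss()(model(x), y)
      loss.backward()
      print(float(loss))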
  5. By: Agnieszka Borowska (VU Amsterdam); Lennart Hoogerheide (VU Amsterdam); Siem Jan Koopman (VU Amsterdam)
    Abstract: We present an accurate and efficient method for Bayesian forecasting of two financial risk measures, Value-at-Risk and Expected Shortfall, for a given volatility model. We obtain precise forecasts of the tail of the distribution of returns not only for the 10-day-ahead horizon required by the Basel Committee but also for long horizons, such as one month or one year ahead. The latter have recently attracted considerable attention due to the different properties of short-term risk and long-run risk. The key insight behind our importance-sampling-based approach is the sequential construction of marginal and conditional importance densities for consecutive periods. We report substantial accuracy gains for all the considered horizons in empirical studies on two datasets of daily financial returns, including a highly volatile period of the recent financial crisis. To illustrate the flexibility of the proposed construction method, we show how it can be adapted to the frequentist case, for which we provide counterparts of both Bayesian applications.
    Keywords: Bayesian inference, forecasting, importance sampling, numerical accuracy, long run risk, Value-at-Risk, Expected Shortfall
    JEL: C32
    Date: 2019–02–22
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20190018&r=all
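    The paper's sequential construction of importance densities for a given volatility model is not reproduced here; the following only sketches the basic importance-sampling idea for long-horizon VaR and ES, assuming i.i.d. Gaussian daily returns and an inflated-variance proposal so that the left tail is oversampled.

      # Importance-sampling VaR/ES sketch (i.i.d. Gaussian returns, placeholder parameters).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      h, n, alpha = 20, 100_000, 0.01            # horizon in days, draws, tail level
      mu, sigma = 0.0, 0.01                      # daily mean and volatility (placeholders)

      scale_t = sigma * np.sqrt(h)               # target: N(mu*h, sigma^2*h) for h-day returns
      scale_p = 3.0 * scale_t                    # proposal with inflated scale
      x = rng.normal(mu * h, scale_p, n)
      w = stats.norm.pdf(x, mu * h, scale_t) / stats.norm.pdf(x, mu * h, scale_p)
      w /= w.sum()                               # self-normalized importance weights

      order = np.argsort(x)                      # weighted alpha-quantile of h-day returns
      cum = np.cumsum(w[order])
      var = -x[order][np.searchsorted(cum, alpha)]              # Value-at-Risk (as a loss)
      tail = cum <= alpha
      es = -np.average(x[order][tail], weights=w[order][tail])  # Expected Shortfall
      print(f"{h}-day VaR: {var:.4f}, ES: {es:.4f}")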
  6. By: Stephen Bazen (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - ECM - Ecole Centrale de Marseille - CNRS - Centre National de la Recherche Scientifique - AMU - Aix Marseille Université - EHESS - École des hautes études en sciences sociales, AMSE - Aix-Marseille Sciences Economiques - EHESS - École des hautes études en sciences sociales - AMU - Aix Marseille Université - ECM - Ecole Centrale de Marseille - CNRS - Centre National de la Recherche Scientifique); Jean-Marie Cardebat (Larefi - Laboratoire d'analyse et de recherche en économie et finance internationales - Université Montesquieu - Bordeaux 4)
    Abstract: Generic Bordeaux red wine (basic claret) can be regarded as being similar to an agricultural commodity. Production volumes are substantial, trade occurs at high frequency and the quality of the product is relatively homogeneous. Unlike other commodities and the top-end wines (which represent only 3% of the traded volume), there is no futures market for generic Bordeaux wine. Reliable price forecasts can to a large extent compensate for this information deficiency and improve the functioning of the market. We use state-space methods with monthly data to obtain a univariate forecasting model for the average price. The estimates highlight the stochastic trend and the seasonality present in the evolution of the price over the period 1999 to 2016. The model predicts the out-of-sample path of wine prices reasonably well, suggesting that this approach is useful for making reasonably accurate forecasts of future price movements.
    Keywords: forecasting, wine prices, state-space methods
    JEL: C53 L66 Q11
    Date: 2018–10
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-01867216&r=all
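    A minimal sketch of the kind of state-space model the abstract points to: a local linear trend plus a monthly seasonal fitted with statsmodels' UnobservedComponents on a simulated price series; the authors' exact specification and the claret price data are not reproduced.

      # Local-linear-trend + monthly-seasonal state-space sketch (simulated prices).
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      idx = pd.date_range("1999-01", periods=216, freq="MS")            # monthly, 1999-2016
      price = pd.Series(100 + np.cumsum(rng.standard_normal(216))       # stochastic trend
                        + 5 * np.sin(2 * np.pi * np.arange(216) / 12),  # seasonality
                        index=idx)

      model = sm.tsa.UnobservedComponents(price, level="local linear trend", seasonal=12)
      res = model.fit(disp=False)
      print(res.forecast(steps=12))       # 12-month-ahead out-of-sample forecasts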
  7. By: Hieu Quang Nguyen; Abdul Hasib Rahimyar; Xiaodi Wang
    Abstract: Predicting future stock values has always been highly desirable, yet very difficult. The difficulty arises because stock prices are non-stationary and have no explicit functional form; hence, predictions are best made through analysis of financial stock data. To handle large data sets, the current convention is to use a moving average. However, by utilizing the wavelet transform in place of the moving average to denoise stock signals, financial data can be smoothed and more accurately decomposed. This transformed, denoised and more stable stock data can then be fed into non-parametric statistical methods, such as Support Vector Regression (SVR) and Recurrent Neural Network (RNN) based Long Short-Term Memory (LSTM) networks, to predict future stock prices. Through the implementation of these methods, one is left with a more accurate stock forecast and, in turn, increased profits.
    Date: 2019–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1904.08459&r=all
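    A minimal sketch of the wavelet-denoise-then-regress pipeline on simulated prices; PyWavelets only ships dyadic (2-band) wavelets, so a Daubechies wavelet stands in for the paper's M-band transform, and the lag window and SVR settings are arbitrary assumptions.

      # Wavelet denoising followed by SVR on lagged denoised prices (placeholder data).
      import numpy as np
      import pywt
      from sklearn.svm import SVR

      rng = np.random.default_rng(4)
      price = 100 + np.cumsum(rng.standard_normal(500))        # simulated stock series

      # Soft-threshold the detail coefficients and reconstruct a denoised signal.
      coeffs = pywt.wavedec(price, "db4", level=3)
      thr = np.std(coeffs[-1]) * np.sqrt(2 * np.log(len(price)))
      coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
      denoised = pywt.waverec(coeffs, "db4")[: len(price)]

      # SVR on a window of past denoised prices to predict the next observed price.
      lags = 5
      X = np.column_stack([denoised[i: len(denoised) - lags + i] for i in range(lags)])
      y = price[lags:]
      svr = SVR(kernel="rbf", C=10.0).fit(X[:-1], y[:-1])
      print("next-step prediction:", svr.predict(X[[-1]])[0], "actual:", y[-1])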
  8. By: Robin Niesert; Jochem Oorschot (Econometric Institute, Erasmus University); Chris Veldhuisen; Kester Brons; Rutger-Jan Lange (Econometric Institute, Erasmus University)
    Abstract: We use Google search data with the aim of predicting unemployment, CPI and consumer confidence for the US, UK, Canada, Germany and Japan. Google search queries have previously proven valuable in predicting macroeconomic variables in an in-sample context. To our knowledge, the more challenging question of whether such data have out-of-sample predictive value has not yet been satisfactorily answered. We focus on out-of-sample nowcasting, and extend the Bayesian Structural Time Series model using the Hamiltonian sampler for variable selection. We find that the search data retain their value in an out-of-sample predictive context for unemployment, but not for CPI and consumer confidence. It may be that online search behaviour is a relatively reliable gauge of an individual's personal situation (employment status), but less reliable when it comes to variables that are unknown to the individual (CPI) or too general to be linked to specific search terms (consumer confidence).
    Keywords: Bayesian methods, forecasting practice, Kalman filter, macroeconomic forecasting, state space models, nowcasting, spike-and-slab, Hamiltonian sampler
    JEL: C11 C53
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20190021&r=all
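    The paper's Bayesian structural time series model with spike-and-slab variable selection is not reproduced here; the following only sketches the out-of-sample exercise it describes, comparing an AR(1) nowcast with and without search-based regressors on random placeholder series and plain OLS.

      # Expanding-window nowcast comparison: AR(1) vs AR(1) + search terms (placeholders).
      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(5)
      T, K = 150, 10
      search = rng.standard_normal((T, K))                  # standardized search indices
      unemp = 5 + 0.1 * np.cumsum(rng.standard_normal(T))   # stand-in unemployment rate

      err_ar, err_aug = [], []
      for t in range(100, T):                               # pseudo out-of-sample loop
          y_tr, y_lag = unemp[1:t], unemp[:t - 1]
          ar = LinearRegression().fit(y_lag.reshape(-1, 1), y_tr)
          aug = LinearRegression().fit(np.column_stack([y_lag, search[1:t]]), y_tr)
          err_ar.append(unemp[t] - ar.predict([[unemp[t - 1]]])[0])
          err_aug.append(unemp[t] - aug.predict([np.r_[unemp[t - 1], search[t]]])[0])

      print("RMSE, AR(1):          ", np.sqrt(np.mean(np.square(err_ar))))
      print("RMSE, AR(1) + search: ", np.sqrt(np.mean(np.square(err_aug))))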
  9. By: Quast, Josefine; Wolters, Maik H.
    Abstract: The authors contribute to the debate regarding the reliability of output gap estimates. As an alternative to the Hodrick-Prescott (HP) filter, they propose a simple modification of the filter proposed by Hamilton in 2018 that shares its favorable real-time properties, but leads to a more even coverage of typical business cycle frequencies. Based on output growth and inflation forecasts and a comparison to revised output gap estimates from policy institutions, they find that real-time output gaps based on the modified Hamilton filter are economically much more meaningful measures of the business cycle than those based on other simple statistical trend-cycle decomposition techniques such as the HP or the Bandpass filter.
    Keywords: output gap, potential output, trend-cycle decomposition, Hamilton filter, real-time data, inflation forecasting
    JEL: C18 E32 E37
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:zbw:imfswp:133&r=all
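    A minimal sketch of the Hamilton (2018) regression filter, which the paper modifies; the abstract only says the modification covers business-cycle frequencies more evenly, so the averaging over horizons of 4 to 12 quarters shown below is one possible reading, included as an assumption rather than the authors' exact procedure.

      # Hamilton regression filter and an averaged-over-horizons variant (simulated data).
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      def hamilton_cycle(y: pd.Series, h: int = 8, p: int = 4) -> pd.Series:
          """Cycle = residual from regressing y_t on a constant and y_{t-h}, ..., y_{t-h-p+1}."""
          X = sm.add_constant(pd.concat([y.shift(h + i) for i in range(p)], axis=1))
          fit = sm.OLS(y, X, missing="drop").fit()
          return y - fit.predict(X)

      rng = np.random.default_rng(6)
      log_gdp = pd.Series(np.cumsum(0.005 + 0.01 * rng.standard_normal(160)))  # quarterly

      cycle_h8 = hamilton_cycle(log_gdp)                      # standard Hamilton filter (h = 8)
      cycle_avg = pd.concat([hamilton_cycle(log_gdp, h) for h in range(4, 13)],
                            axis=1).mean(axis=1)              # averaged over h = 4,...,12
      print(cycle_avg.dropna().tail())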
  10. By: Théophile Griveau-Billion; Ben Calderhead
    Abstract: We propose a heterogeneous simultaneous graphical dynamic linear model (H-SGDLM), which extends the standard SGDLM framework to incorporate a heterogeneous autoregressive realised volatility (HAR-RV) model. This novel approach creates a GPU-scalable multivariate volatility estimator, which decomposes multiple time series into economically meaningful variables in order to explain the endogenous and exogenous factors driving the underlying variability. This decomposition goes beyond classic one-step-ahead prediction: we investigate inferences up to one month into the future using stocks, FX futures and ETF futures, demonstrating superior performance in terms of the accuracy of large moves, longer-term prediction and consistency over time.
    Date: 2019–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1904.08153&r=all
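    The H-SGDLM itself is multivariate and graphical; as a flavour of the building blocks it combines, the following sketches only a univariate dynamic linear model with HAR-style realised-volatility regressors, updated by standard discount-factor Kalman recursions on placeholder data.

      # Univariate discount-factor DLM with HAR-style regressors (placeholder data).
      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(7)
      rv = pd.Series(np.exp(0.3 * rng.standard_normal(600) - 9))         # daily realised variance
      y = np.log(rv.shift(-1)).dropna()                                  # target: next-day log RV
      F = pd.DataFrame({"const": 1.0,
                        "d": np.log(rv),                                 # daily component
                        "w": np.log(rv.rolling(5).mean()),               # weekly component
                        "m": np.log(rv.rolling(22).mean())}).loc[y.index].dropna()
      y = y.loc[F.index]

      k, delta, v = F.shape[1], 0.98, 0.1           # state dimension, discount factor, obs variance
      m, C = np.zeros(k), np.eye(k)                 # prior mean / covariance of the coefficients
      preds = []
      for t in range(len(y)):
          f = F.iloc[t].to_numpy()
          R = C / delta                             # discounted (inflated) prior covariance
          q = f @ R @ f + v                         # one-step-ahead forecast variance
          preds.append(f @ m)                       # one-step-ahead forecast of log RV
          A = R @ f / q                             # adaptive (Kalman) gain
          e = y.iloc[t] - f @ m                     # forecast error
          m, C = m + A * e, R - np.outer(A, A) * q  # posterior update of the coefficients
      print("last one-step forecast vs realised:", preds[-1], y.iloc[-1])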

This nep-for issue is ©2019 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.