nep-for New Economics Papers
on Forecasting
Issue of 2018‒04‒09
eleven papers chosen by
Rob J Hyndman
Monash University

  1. New Perspectives on Forecasting Inflation in Emerging Market Economies: An Empirical Assessment By Duncan, Roberto; Martinez-Garcia, Enrique
  2. Is There a Role for Uncertainty in Forecasting Output Growth in OECD Countries? Evidence from a Time Varying Parameter-Panel Vector Autoregressive Model By Goodness C. Aye; Rangan Gupta; Chi Keung Marco Lau; Xin Sheng
  3. Forecasting stock market returns by summing the frequency-decomposed parts By Gonçalo Faria; Fabio Verona
  4. Effects of different ways of incentivizing price forecasts on market dynamics and individual decisions in asset market experiments By Nobuyuki Hanaki; Eizo Akiyama; Ryuichiro Ishikawa
  5. Evaluating Conditional Cash Transfer Policies with Machine Learning Methods By Tzai-Shuen Chen
  6. Universal features of price formation in financial markets: perspectives from Deep Learning By Justin Sirignano; Rama Cont
  7. Forecasting Deflation Probability in the EA: A Combinatoric Approach By Luca Brugnolini
  8. Forecasting Cryptocurrencies Financial Time Series By Leopoldo Catania; Stefano Grassi; Francesco Ravazzolo
  9. Estimation and forecasting in INAR(p) models using sieve bootstrap By Luisa Bisaglia; Margherita Gerolimetto
  10. Real-time forecasting with macro-finance models in the presence of a zero lower bound By Leo Krippner; Michelle Lewis
  11. Anomaly detection in streaming nonstationary temporal data By Priyanga Dilini Talagala; Rob J Hyndman; Kate Smith-Miles; Sevvandi Kandanaarachchi; Mario A Munoz

  1. By: Duncan, Roberto (Ohio University); Martinez-Garcia, Enrique (Federal Reserve Bank of Dallas)
    Abstract: We use a broad set of inflation models and pseudo out-of-sample forecasts to assess their predictive ability among 14 emerging market economies (EMEs) at different horizons (1 to 12 quarters ahead) with quarterly data over the period 1980Q1-2016Q4. We find, in general, that a simple arithmetic average of the current and three previous observations (the RW-AO model) consistently outperforms its standard competitors, based on the root mean squared prediction error (RMSPE) and on accuracy in predicting the direction of change. These competitors include conventional models based on domestic factors, existing open-economy Phillips curve-based specifications, factor-augmented models, and time-varying parameter models. The RMSPE and directional accuracy gains of the RW-AO model are often statistically significant. Our results are robust to forecast combinations, intercept corrections, alternative transformations of the target variable, different lag structures, and additional tests of (conditional) predictability. We argue that the RW-AO model is successful among EMEs because it is a straightforward method to downweight the latest data, which is a useful strategy when there are unknown structural breaks and model misspecification.
    JEL: E31 F41 F42 F47
    Date: 2018–01–01
    URL: http://d.repec.org/n?u=RePEc:fip:feddgw:338&r=for
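The RW-AO benchmark is simple enough to sketch directly. Assuming a list of quarterly observations, the forecast at every horizon is just the average of the last four values (function and variable names below are illustrative, not the authors'):

```python
# Minimal sketch of the RW-AO benchmark described in the abstract: the
# forecast at any horizon is the arithmetic average of the current and
# three previous quarterly observations.

def rw_ao_forecast(series):
    """Return the RW-AO forecast given a list of quarterly observations."""
    if len(series) < 4:
        raise ValueError("need at least four observations")
    window = series[-4:]  # current quarter plus the three before it
    return sum(window) / 4.0

# Example: quarterly inflation rates (percent), made up for illustration
inflation = [2.1, 2.4, 2.0, 2.5, 2.6]
print(rw_ao_forecast(inflation))  # average of the last four values
```

The same number serves as the forecast for all horizons, which is exactly why the model is robust to breaks: it estimates nothing.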
  2. By: Goodness C. Aye (Department of Economics, University of Pretoria, Pretoria, South Africa.); Rangan Gupta (Department of Economics, University of Pretoria, Pretoria, South Africa); Chi Keung Marco Lau (Huddersfield Business School, University of Huddersfield, Huddersfield, UK); Xin Sheng (Huddersfield Business School, University of Huddersfield, Huddersfield, UK)
    Abstract: This paper uses a time varying parameter-panel vector autoregressive (TVP-PVAR) model to analyze the role played by domestic and US news-based measures of uncertainty in forecasting the growth of industrial production in twelve Organisation for Economic Co-operation and Development (OECD) countries. Based on a monthly out-of-sample period of 2009:06 to 2017:05, given an in-sample period of 2003:03 to 2009:05, domestic uncertainty improves the forecast of output growth in only 46 percent of cases relative to a baseline monetary TVP-PVAR model, which includes inflation, the interest rate and nominal exchange rate growth, besides output growth. Moreover, including US uncertainty does not necessarily improve the forecasting performance of the TVP-PVAR model that includes only domestic uncertainty along with the baseline variables. So, in general, while uncertainty is important in predicting the future path of output growth in the twelve advanced economies considered, a forecaster can do better in the majority of instances by just considering the information in standard macroeconomic variables.
    Keywords: Economic Uncertainty, Output Growth, Time Varying Parameter, Panel Vector Autoregressions, OECD Countries
    JEL: C33 C53 E32 E37 E60
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:pre:wpaper:201823&r=for
  3. By: Gonçalo Faria (Universidade Católica Portuguesa, Católica Porto Business School and CEGE, and University of Vigo, RGEA); Fabio Verona (Bank of Finland, Monetary Policy and Research Department, and University of Porto, cef.up)
    Abstract: We generalize the Ferreira and Santa-Clara (2011) sum-of-the-parts method for forecasting stock market returns. Rather than summing the parts of stock returns, we suggest summing some of the frequency-decomposed parts. The proposed method significantly improves upon the original sum-of-the-parts method and delivers statistically and economically significant gains over historical mean forecasts, with a monthly out-of-sample R2 of 2.60% and annual utility gains of 558 basis points. The strong performance of this method comes from its ability to isolate the frequencies of the parts with the highest predictive power, and from the fact that the selected frequency-decomposed parts carry complementary information that captures different frequencies of stock market returns.
    Keywords: predictability, stock returns, equity premium, asset allocation, frequency domain, wavelets
    JEL: G11 G12 G14 G17
    Date: 2017–11
    URL: http://d.repec.org/n?u=RePEc:por:cetedp:1702&r=for
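The core idea above can be sketched in a few lines. The paper uses wavelet filters; here a simple moving-average split into a low-frequency (trend) part and a high-frequency (residual) part stands in, and all numbers are invented for illustration:

```python
# Illustrative sketch of summing frequency-decomposed parts: split each
# return component into a smooth part and a remainder, then forecast with
# only the selected parts. A moving average replaces the paper's wavelets.

def decompose(series, window=4):
    """Split a series into a smooth low-frequency part and the remainder."""
    low = []
    for i in range(len(series)):
        start = max(0, i - window + 1)
        chunk = series[start:i + 1]
        low.append(sum(chunk) / len(chunk))
    high = [x - l for x, l in zip(series, low)]
    return low, high

# Hypothetical return components: keep only the low-frequency part of one
# and the high-frequency part of the other (which parts to keep is what
# the paper's method selects).
growth = [0.01, 0.02, 0.015, 0.03]
valuation = [0.005, -0.002, 0.001, 0.004]
g_low, _ = decompose(growth)
_, v_high = decompose(valuation)
forecast = g_low[-1] + v_high[-1]
```

The selection step, not the decomposition itself, is where the predictive gains come from.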
  4. By: Nobuyuki Hanaki (GREDEG - Groupe de Recherche en Droit, Economie et Gestion - UNS - Université Nice Sophia Antipolis - UCA - Université Côte d'Azur - CNRS - Centre National de la Recherche Scientifique - UCA - Université Côte d'Azur); Eizo Akiyama (University of Tsukuba); Ryuichiro Ishikawa (Waseda University)
    Abstract: In this study, we investigate (a) whether eliciting future price forecasts influences market outcomes and (b) whether differences in the way in which subjects are incentivized to submit "accurate" price forecasts influence market outcomes as well as the forecasts in an experimental asset market. We consider four treatments: one without forecast elicitation and three with forecast elicitation. In two of the treatments with forecast elicitation, subjects are paid based on their performance in both forecasting and trading, while in the other treatment with forecast elicitation, they are paid based on only one of those factors, chosen randomly at the end of the experiment. We found no significant effect of forecast elicitation on market outcomes in the latter case. Thus, to avoid influencing the behavior of subjects and market outcomes by eliciting price forecasts, paying subjects based on either forecasting or trading performance chosen randomly at the end of the experiment is better than paying them based on both. In addition, we consider forecast-only experiments: one in which subjects are rewarded based on the number of accurate forecasts and another in which they are rewarded based on a quadratic scoring rule. We found no significant difference in forecasting performance between the two.
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-01712305&r=for
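The quadratic scoring rule mentioned above pays more the closer the forecast is to the realized price. A minimal sketch, with constants that are illustrative rather than those used in the experiment:

```python
# Quadratic scoring rule sketch: payoff decreases in squared forecast
# error and is floored at zero. max_pay and penalty are invented values.

def quadratic_score(forecast, realized, max_pay=100.0, penalty=1.0):
    """Payoff decreasing in squared forecast error, floored at zero."""
    return max(0.0, max_pay - penalty * (forecast - realized) ** 2)

print(quadratic_score(50.0, 50.0))  # perfect forecast earns max_pay
print(quadratic_score(45.0, 50.0))  # an error of 5 costs 25 with penalty=1
```

Because the penalty is convex, the rule rewards reporting one's true expectation rather than hedging the forecast.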
  5. By: Tzai-Shuen Chen
    Abstract: This paper presents an out-of-sample prediction comparison between major machine learning models and a structural econometric model. Over the past decade, machine learning has established itself as a powerful tool in many prediction applications, but this approach is still not widely adopted in empirical economic studies. To evaluate the benefits of this approach, I use the most common machine learning algorithms, CART, C4.5, LASSO, random forest, and AdaBoost, to construct prediction models for a cash transfer experiment conducted by the Progresa program in Mexico, and I compare the prediction results with those of a previous structural econometric study. Two prediction tasks are performed in this paper: the out-of-sample forecast and the long-term within-sample simulation. For the out-of-sample forecast, both the mean absolute error and the root mean square error of the school attendance rates found by all machine learning models are smaller than those found by the structural model. Random forest and AdaBoost have the highest accuracy for the individual outcomes of all subgroups. For the long-term within-sample simulation, the structural model outperforms all of the machine learning models. The poor within-sample fit of the machine learning models results from the inaccuracy of the income and pregnancy prediction models. The results show that the machine learning models perform better than the structural model when there are ample data to learn from; however, when the data are limited, the structural model offers a more sensible prediction. The findings of this paper show promise for adopting machine learning in economic policy analyses in the era of big data.
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1803.06401&r=for
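The comparison above rests on two standard loss functions, mean absolute error (MAE) and root mean square error (RMSE). A minimal implementation, with attendance rates invented for illustration (not the paper's data):

```python
# MAE and RMSE, the two loss functions used in the forecast comparison.
import math

def mae(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2
                         for a, p in zip(actual, predicted)) / len(actual))

# Illustrative school-attendance rates (fractions)
actual = [0.90, 0.85, 0.88, 0.92]
pred = [0.88, 0.86, 0.90, 0.91]
print(mae(actual, pred))   # approximately 0.015
print(rmse(actual, pred))  # slightly larger: RMSE penalizes big errors more
```

RMSE is never below MAE for the same errors, which is why the two can rank competing models differently when errors are unevenly distributed.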
  6. By: Justin Sirignano; Rama Cont
    Abstract: Using a large-scale Deep Learning approach applied to a high-frequency database containing billions of electronic market quotes and transactions for US equities, we uncover nonparametric evidence for the existence of a universal and stationary price formation mechanism relating the dynamics of supply and demand for a stock, as revealed through the order book, to subsequent variations in its market price. We assess the model by testing its out-of-sample predictions for the direction of price moves given the history of price and order flow, across a wide range of stocks and time periods. The universal price formation model exhibits a remarkably stable out-of-sample prediction accuracy across time, for a wide range of stocks from different sectors. Interestingly, these results also hold for stocks that are not part of the training sample, showing that the relations captured by the model are universal and not asset-specific. The universal model, trained on data from all stocks, outperforms, in terms of out-of-sample prediction accuracy, asset-specific linear and nonlinear models trained on the time series of any given stock, showing that the universal nature of price formation weighs in favour of pooling financial data from various stocks rather than designing asset- or sector-specific models, as is commonly done. Standard data normalizations based on volatility, price level or average spread, or partitioning the training data into sectors or categories such as large/small tick stocks, do not improve training results. On the other hand, including price and order flow history over many past observations improves forecasting performance, showing evidence of path-dependence in price dynamics.
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1803.06917&r=for
  7. By: Luca Brugnolini (Central Bank of Malta)
    Abstract: This paper assesses and forecasts the probability of deflation in the EA at different horizons using a binomial probit model. The best predictors are selected from more than one hundred variables using a two-step combinatoric approach, exploiting parallel computation in the Julia language. The selected variables coincide with those typically included in a small New Keynesian model. Model performance is assessed using three different loss functions: the Mean Absolute Error (MAE), the Root Mean Squared Error (RMSE) and the Area Under the Receiver Operating Characteristic curve (AUROC). The results are reasonably consistent across the three criteria. Finally, an index averaging the forecasts is computed to assess the probability of being in a deflation state in the next two years. The index shows that having inflation above the 2% level before March 2019 is extremely unlikely.
    JEL: C25 C63 E3 E58
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:mlt:wpaper:0118&r=for
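In a probit model like the one above, the deflation probability is the standard normal CDF of a linear index of the predictors. A minimal sketch; the predictors and coefficients below are invented for illustration, whereas the paper selects them combinatorially from over one hundred candidates:

```python
# Probit probability sketch: probability = Phi(intercept + x'beta),
# where Phi is the standard normal CDF. All numbers are hypothetical.
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def deflation_probability(x, beta, intercept):
    index = intercept + sum(b * xi for b, xi in zip(beta, x))
    return norm_cdf(index)

# Hypothetical predictors: output gap and lagged inflation
prob = deflation_probability([-0.5, 0.3], beta=[-0.8, -1.2], intercept=-1.0)
```

Averaging such fitted probabilities across the best-performing predictor subsets gives an index of the kind the paper reports.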
  8. By: Leopoldo Catania; Stefano Grassi; Francesco Ravazzolo
    Abstract: This paper studies the predictability of cryptocurrency time series. We compare several alternative univariate and multivariate models in point and density forecasting of four of the most capitalized series: Bitcoin, Litecoin, Ripple and Ethereum. We apply a set of crypto-predictors and rely on Dynamic Model Averaging to combine a large set of univariate Dynamic Linear Models and several multivariate Vector Autoregressive models with different forms of time variation. We find statistically significant improvements in point forecasting when using combinations of univariate models, and in density forecasting when relying on selection of multivariate models.
    Keywords: Cryptocurrency, Bitcoin, Forecasting, Density Forecasting, VAR, Dynamic Model Averaging
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:bny:wpaper:0063&r=for
  9. By: Luisa Bisaglia (Department of Statistics, University of Padova); Margherita Gerolimetto (Department of Economics, University Of Venice Cà Foscari)
    Abstract: In this paper we analyse some bootstrap techniques for making inference in INAR(p) models. First, via Monte Carlo experiments we compare the performance of these methods when estimating the thinning parameters in INAR(p) models. We establish the superiority of sieve bootstrap approaches over the block bootstrap in terms of lower bias and Mean Square Error (MSE). We then apply the sieve bootstrap methods to obtain coherent predictions and confidence intervals, avoiding the difficulty of deriving the distributional properties analytically.
    Keywords: INAR(p) models, estimation, forecast, bootstrap
    JEL: C22 C53
    URL: http://d.repec.org/n?u=RePEc:ven:wpaper:2018:06&r=for
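The sieve bootstrap idea is to approximate the process with an autoregression, resample its residuals, and rebuild artificial series. A real-valued AR(1) sketch of the resampling step (the paper's procedure for INAR counts additionally respects the integer support, which this sketch does not attempt):

```python
# Sieve-bootstrap sketch: fit an AR(1) approximation by least squares,
# resample residuals with replacement, and rebuild bootstrap series.
import random

def fit_ar1(series):
    """Least-squares AR(1) coefficient and residuals (no intercept)."""
    x = series[:-1]
    y = series[1:]
    phi = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)
    resid = [b - phi * a for a, b in zip(x, y)]
    return phi, resid

def sieve_bootstrap(series, n_boot=200, seed=1):
    """Return the fitted coefficient and n_boot resampled series."""
    rng = random.Random(seed)
    phi, resid = fit_ar1(series)
    replicates = []
    for _ in range(n_boot):
        boot = [series[0]]
        for _ in range(len(series) - 1):
            boot.append(phi * boot[-1] + rng.choice(resid))
        replicates.append(boot)
    return phi, replicates

phi, reps = sieve_bootstrap([1.0, 0.8, 0.9, 0.7, 0.85, 0.6], n_boot=50)
```

Quantiles of forecasts computed from the replicates then give the bootstrap confidence intervals, sidestepping analytical distribution theory.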
  10. By: Leo Krippner; Michelle Lewis (Reserve Bank of New Zealand)
    Abstract: We investigate the real-time forecasting performance of macro-finance vector auto-regression models, which incorporate macroeconomic data and yield curve component estimates as would have been available at the time of each forecast, for the United States. Our results show a clear benefit from using yield curve information when forecasting macroeconomic variables, both prior to the Global Financial Crisis and continuing into the period where the lower-bound constrained shorter-maturity interest rates. The forecasting gains, relative to traditional macroeconomic models, for inflation and the Federal Funds Rate are generally statistically significant and economically material for the horizons up to the four years that we tested. However, macro-finance models do not improve the real-time forecasts over shorter horizons for capacity utilisation, our variable representing real economic activity. This is in contrast to the related recent macro-finance literature, which establishes such results (as do we) with pseudo real-time, i.e. truncated final-vintage, data. Nevertheless, for longer horizons that are more relevant for central bankers, yield curve information does improve activity forecasts. Overall, our results suggest that the yield curve contains fundamental information about the likely evolution of the macroeconomy. We find less convincing evidence for the reverse direction, which is likely because expectations of macroeconomic variables are already reflected in the yield curve. However, for longer horizons, we find there are still some gains from using macroeconomic variables to forecast the yield curve.
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:nzb:nzbdps:2018/4&r=for
  11. By: Priyanga Dilini Talagala; Rob J Hyndman; Kate Smith-Miles; Sevvandi Kandanaarachchi; Mario A Munoz
    Abstract: This article proposes a framework that provides early detection of anomalous series within a large collection of non-stationary streaming time series data. We define an anomaly as an observation that is very unlikely given the recent distribution of a given system. The proposed framework first forecasts a boundary for the system's typical behavior using extreme value theory. Then a sliding window is used to test for anomalous series within a newly arrived collection of series. The model uses time series features as inputs and a density-based comparison to detect any significant changes in the distribution of the features. Using various synthetic and real world datasets, we demonstrate the wide applicability and usefulness of our proposed framework. We show that the proposed algorithm can work well in the presence of noisy non-stationary data within multiple classes of time series. This framework is implemented in the open source R package oddstream. R code and data are available in the supplementary materials.
    Keywords: concept drift, extreme value theory, feature-based time series analysis, kernel-based density estimation, multivariate time series, outlier detection.
    JEL: C38 C60
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2018-4&r=for
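The two-stage structure of the framework, learn a boundary for typical behaviour, then flag newly arriving windows that cross it, can be sketched as follows. A simple empirical quantile stands in for the paper's extreme-value-theory boundary, and all names and numbers are mine, not the package's (oddstream implements the real method):

```python
# Two-stage anomaly-detection sketch: (1) learn a boundary for typical
# behaviour from training data; (2) slide a window over the stream and
# flag windows whose maximum crosses the boundary.

def typical_boundary(train, quantile=0.99):
    """Empirical quantile of training data, standing in for an EVT bound."""
    s = sorted(train)
    idx = min(len(s) - 1, int(quantile * len(s)))
    return s[idx]

def flag_anomalies(stream, boundary, window=3):
    """One flag per sliding window: True if the window crosses the bound."""
    flags = []
    for i in range(window, len(stream) + 1):
        chunk = stream[i - window:i]
        flags.append(max(chunk) > boundary)
    return flags

train = [0.1, 0.2, 0.15, 0.3, 0.25, 0.18, 0.22, 0.28, 0.12, 0.26]
b = typical_boundary(train)
flags = flag_anomalies([0.2, 0.25, 0.9, 0.21], b)
```

In the actual framework the boundary is set in a feature space extracted from many series, with kernel density estimates comparing new windows to typical behaviour.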

This nep-for issue is ©2018 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.