nep-for New Economics Papers
on Forecasting
Issue of 2018‒05‒28
five papers chosen by
Rob J Hyndman
Monash University

  1. Is there room for another hedonic model? – The advantages of the GAMLSS approach in real estate research. By Marcelo Cajias
  2. Predictive regressions under asymmetric loss: factor augmentation and model selection By Demetrescu, Matei; Hacioglu Hoke, Sinem
  3. Intertopic Distances as Leading Indicators By Melody Y. Huang; Randall R. Rojas; Patrick D. Convery
  4. Density Forecasts in Panel Data Models: A Semiparametric Bayesian Perspective By Laura Liu
  5. Aggregating multiple types of complex data in stock market prediction: A model-independent framework By Huiwen Wang; Shan Lu; Jichang Zhao

  1. By: Marcelo Cajias
    Abstract: Hedonic modelling is essential for institutional investors, researchers and urban policy-makers seeking to identify the factors that affect the value and future development of rents over time and space. While statistical models in this field have advanced substantially over recent decades, new statistical approaches have emerged that expand the conventional understanding of real estate markets. This paper explores the in-sample explanatory power and out-of-sample forecasting accuracy of the Generalized Additive Model for Location, Scale and Shape (GAMLSS) relative to traditional methods in Munich’s residential market. The results show that the complexity of asking rents in Munich is captured more accurately by the GAMLSS approach, leading to a significant increase in out-of-sample forecasting accuracy. (A minimal location-scale sketch appears after the paper list below.)
    Keywords: GAM; GAMLSS; Hedonic Modelling; Out-of-sample bootstrap; Residential Housing
    JEL: R3
    Date: 2017–07–01
    URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2017_226&r=for
  2. By: Demetrescu, Matei (Institute for Statistics and Econometrics); Hacioglu Hoke, Sinem (Bank of England)
    Abstract: The paper discusses the specifics of forecasting with factor-augmented predictive regressions under general loss functions. In line with the literature, we employ principal component analysis to extract factors from the set of predictors. We additionally extract information on the volatility of the series to be predicted, since volatility is forecast-relevant under non-quadratic loss functions. To ensure asymptotic unbiasedness of forecasts under the relevant loss, we estimate the predictive regression by minimizing the in-sample average loss. Finally, to select the most promising predictors for the series to be forecast, we employ an information criterion tailored to the relevant loss. Using a large monthly data set for the US economy, we assess the proposed adjustments in a pseudo out-of-sample forecasting exercise for a range of variables. As expected, estimation under the relevant loss is effective. Using an additional volatility proxy as a predictor and conducting model selection tailored to the relevant loss function enhance forecast performance significantly. (See the loss-based estimation sketch after the paper list.)
    Keywords: Predictive regressions; many predictors; cost-of-error function; latent variables; volatility
    JEL: C53
    Date: 2018–05–11
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:0723&r=for
  3. By: Melody Y. Huang; Randall R. Rojas; Patrick D. Convery
    Abstract: We use a topic modeling algorithm and sentiment scoring methods to construct a novel metric for use as a leading indicator in recession prediction models. We hypothesize that, because information flows are not instantaneous, including such a sentiment indicator derived purely from unstructured news data will improve our ability to forecast future recessions. We then show that the proposed metric, even when combined with consumer survey data, improves model performance significantly. This metric, in combination with consumer survey data, S&P 500 returns, and the yield curve, produces forecasts that significantly outperform more complex models built on traditional economic indicators. (An illustrative topic-distance sketch follows the paper list.)
    Date: 2018–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1805.04160&r=for
  4. By: Laura Liu
    Abstract: This paper constructs individual-specific density forecasts for a panel of firms or households using a dynamic linear model with common and heterogeneous coefficients and cross-sectional heteroskedasticity. The panel considered here features a large cross-sectional dimension N but a short time series T. Due to the short T, traditional methods have difficulty disentangling the heterogeneous parameters from the shocks, which contaminates the parameter estimates. To tackle this problem, I assume an underlying distribution of the heterogeneous parameters, model this distribution nonparametrically while allowing for correlation between the heterogeneous parameters and the initial conditions as well as individual-specific regressors, and then estimate the distribution by pooling information from the whole cross-section. Theoretically, I prove that both the estimated common parameters and the estimated distribution of the heterogeneous parameters achieve posterior consistency, and that the density forecasts asymptotically converge to the oracle forecast. Methodologically, I develop a simulation-based posterior sampling algorithm that specifically addresses the nonparametric density estimation of unobserved heterogeneous parameters. Monte Carlo simulations and an application to young firm dynamics demonstrate improvements in density forecasts relative to alternative approaches. (A simplified pooling sketch appears after the paper list.)
    Date: 2018–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1805.04178&r=for
  5. By: Huiwen Wang; Shan Lu; Jichang Zhao
    Abstract: The increasing richness of financial data, in volume and especially in variety, provides unprecedented opportunities to understand the stock market more comprehensively and to predict prices more accurately than before. However, it also challenges classical statistical approaches, since such models may be constrained to a single type of data. To aggregate differently sourced information and give existing models type-free capability, we establish a framework for stock market prediction in scenarios with mixed data, including scalar data, compositional data (pie-like) and functional data (curve-like). The framework is model-independent: it serves as an interface to multiple types of data and can be combined with various prediction models, and numerical simulations demonstrate its effectiveness. For price prediction, we incorporate trading volume (scalar data), intraday return series (functional data), and investors' emotions from social media (compositional data) through the framework to forecast whether the market will open up or down the next day. The framework's strong explanatory power is further demonstrated: intraday returns affect the following day's opening prices differently in bearish and bullish markets, and it is not at the beginning of a bearish market but in the subsequent period that investors' "fear" becomes indicative. The framework makes it easy to extend existing prediction models to scenarios with multiple types of data and sheds light on a more systematic understanding of the stock market. (An illustrative interface sketch follows the paper list.)
    Date: 2018–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1805.05617&r=for
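
The five short Python sketches below are editor-style illustrations of the core ideas above, not the authors' implementations; all data are simulated and any modelling choice beyond the abstracts is an assumption.

Sketch for paper 1 (Cajias): a minimal location-scale regression in which both the mean and the log-standard-deviation of a normal response are linear in the covariates. A full GAMLSS additionally models skewness and kurtosis and uses smooth additive terms.

    # Location-scale ("GAMLSS-style") regression by maximum likelihood.
    # Toy data; a real hedonic model would use rents and dwelling traits.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n, k = 500, 3
    X = rng.normal(size=(n, k))              # stand-in hedonic covariates
    beta_true = np.array([2.0, -1.0, 0.5])   # true location effects
    gamma_true = np.array([0.3, 0.0, -0.2])  # true log-scale effects
    y = X @ beta_true + rng.normal(size=n) * np.exp(X @ gamma_true)

    def negloglik(params):
        beta, gamma = params[:k], params[k:]
        mu = X @ beta                        # location model
        log_sigma = X @ gamma                # scale model, log link
        return np.sum(log_sigma + 0.5 * ((y - mu) / np.exp(log_sigma)) ** 2)

    fit = minimize(negloglik, np.zeros(2 * k), method="BFGS")
    print("location coefficients:", fit.x[:k].round(2))
    print("scale coefficients:   ", fit.x[k:].round(2))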
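
Sketch for paper 2 (Demetrescu and Hacioglu Hoke): estimating a factor-augmented predictive regression by minimizing the in-sample average loss, here the asymmetric "lin-lin" loss as one example of a non-quadratic cost-of-error function. The number of factors and the asymmetry parameter tau are toy choices.

    # Factors by PCA, then coefficients chosen to minimize lin-lin loss.
    import numpy as np
    from scipy.optimize import minimize
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    T, r = 200, 3
    panel = rng.normal(size=(T, 50))         # large predictor panel (toy)
    factors = PCA(n_components=r).fit_transform(panel)
    y = factors @ np.array([1.0, -0.5, 0.2]) + rng.normal(size=T)

    tau = 0.7                                # under-prediction is costlier

    def linlin_loss(b):
        e = y - factors @ b                  # in-sample forecast errors
        return np.mean(np.where(e >= 0, tau * e, (tau - 1) * e))

    b_hat = minimize(linlin_loss, np.zeros(r), method="Nelder-Mead").x
    print("loss-based coefficients:", b_hat.round(2))

Under quadratic loss this reduces to least squares; under lin-lin loss the fit targets the conditional tau-quantile rather than the conditional mean, which is what makes the forecasts unbiased with respect to the asymmetric loss.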
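
Sketch for paper 3 (Huang, Rojas and Convery): one way to compute an intertopic-distance summary from news text. LDA and the Jensen-Shannon distance are stand-ins, since the abstract does not pin down the topic model or the metric, and the corpus below is a placeholder.

    # Fit a small topic model, then average pairwise distances between
    # topic-word distributions; tracked over time, such a series could
    # serve as a candidate leading indicator.
    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from scipy.spatial.distance import jensenshannon

    docs = ["jobs growth hiring wages", "factory orders decline layoffs",
            "housing starts rise construction", "credit spreads widen default",
            "consumer spending retail sales", "bank lending tightens fear"]
    X = CountVectorizer().fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
    topics = lda.components_ / lda.components_.sum(axis=1, keepdims=True)

    dists = [jensenshannon(topics[i], topics[j])
             for i in range(3) for j in range(i + 1, 3)]
    print("mean intertopic distance:", round(float(np.mean(dists)), 3))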
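
Sketch for paper 4 (Liu): pooling a short-T panel to sharpen unit-level density forecasts. A normal-normal empirical-Bayes shrinkage stands in for the paper's nonparametric prior, and the random-intercept model below is a simplification of her dynamic panel.

    # Shrink each firm's noisy short-T mean toward the cross-section,
    # then forecast with the pooled posterior.
    import numpy as np

    rng = np.random.default_rng(2)
    N, T, sigma = 1000, 5, 1.0               # many firms, few periods
    lam = rng.normal(0.5, 0.4, size=N)       # heterogeneous firm means
    y = lam[:, None] + rng.normal(scale=sigma, size=(N, T))

    ybar = y.mean(axis=1)                    # noisy unit-level estimates
    tau2 = max(ybar.var() - sigma**2 / T, 1e-8)  # prior variance estimate
    w = tau2 / (tau2 + sigma**2 / T)         # shrinkage weight
    post_mean = ybar.mean() + w * (ybar - ybar.mean())
    post_var = w * sigma**2 / T              # posterior variance of lam_i

    # One-step-ahead density forecast for firm i:
    # Normal(post_mean[i], post_var + sigma**2), tighter than the
    # unpooled Normal(ybar[i], sigma**2 / T + sigma**2).
    print("forecast sd, pooled vs unpooled:",
          round(float(np.sqrt(post_var + sigma**2)), 3),
          round(float(np.sqrt(sigma**2 / T + sigma**2)), 3))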
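
Sketch for paper 5 (Wang, Lu and Zhao): a model-independent interface that maps scalar, compositional and functional inputs into a single feature matrix usable by any classifier. The specific transforms (centered log-ratio for compositions, polynomial basis coefficients for curves) and the logistic regression are illustrative assumptions.

    # Map three data types to one feature matrix, then fit any model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    n, grid = 300, np.linspace(0, 1, 50)

    volume = rng.lognormal(size=(n, 1))            # scalar: trading volume
    emotions = rng.dirichlet(np.ones(4), size=n)   # compositional: emotion shares
    curves = (np.sin(4 * grid) * rng.normal(1, 0.3, (n, 1))
              + rng.normal(scale=0.1, size=(n, len(grid))))  # functional: intraday returns

    def clr(p):                                    # centered log-ratio transform
        logp = np.log(p)
        return logp - logp.mean(axis=1, keepdims=True)

    def curve_features(c, deg=3):                  # polynomial basis coefficients
        return np.polynomial.polynomial.polyfit(grid, c.T, deg).T

    features = np.hstack([np.log(volume), clr(emotions), curve_features(curves)])
    up = (rng.random(n) < 0.5).astype(int)         # toy next-open up/down labels
    clf = LogisticRegression(max_iter=1000).fit(features, up)
    print("features:", features.shape, "in-sample score:", round(clf.score(features, up), 2))

Because the interface only produces a flat feature matrix, the logistic regression could be swapped for any other prediction model, which is the sense in which the framework is model-independent.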

This nep-for issue is ©2018 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.