nep-for New Economics Papers
on Forecasting
Issue of 2014‒08‒02
eleven papers chosen by
Rob J Hyndman
Monash University

  1. Probabilistic load forecasting via Quantile Regression Averaging of independent expert forecasts By Tao Hong; Katarzyna Maciejowska; Jakub Nowotarski; Rafal Weron
  2. Macroeconomic and credit forecasts in a small economy during crisis: A large Bayesian VAR approach By Dimitris P. Louzis
  3. Density forecasts with MIDAS models By Knut Are Aastveit; Claudia Foroni; Francesco Ravazzolo
  4. Sentiment-Based Commercial Real Estate Forecasting with Google Search Volume Data By Dietzel, Marian Alexander; Braun, Nicole; Schäfers, Wolfgang
  5. Property Market Modelling and Forecasting: A Case for Simplicity By Jadevicius, Arvydas; Sloan, Brian; Brown, Andrew
  6. Rationality, Bias and Accuracy in Housing Start Forecasts By Papastamos, Dimitrios; Stevenson, Simon
  7. Simple Robust Tests for the Specification of High-Frequency Predictors of a Low-Frequency Series By J. Isaac Miller
  8. Forecasting Turning Points in Real Estate Yields By Tsolacos, Sotiris; Brooks, Chris
  9. Asymmetric Realized Volatility Risk By David E. Allen; Michael McAleer; Marcel Scharth
  10. Predicting bank insolvency in the Middle East and North Africa By Calice, Pietro
  11. Interpretability versus out-of-sample prediction performance in spatial hedonic models By Christensen, Bjarke; Sørensen, Tony Vittrup

  1. By: Tao Hong; Katarzyna Maciejowska; Jakub Nowotarski; Rafal Weron
    Abstract: Probabilistic load forecasting is becoming crucial in today's power systems planning and operations. We propose a novel methodology to compute interval forecasts of electricity demand, which applies a Quantile Regression Averaging (QRA) technique to a set of independent expert point forecasts. We demonstrate the effectiveness of the proposed methodology using data from the hierarchical load forecasting track of the Global Energy Forecasting Competition 2012. The results show that the new method is able to provide better prediction intervals than four benchmark models for the majority of the load zones and the aggregated level.
    Keywords: Electric load; Probabilistic forecasting; Prediction interval; Quantile regression; Forecast combination; Expert forecast
    JEL: C22 C32 C38 C53 Q47
    Date: 2014–07–15
    URL: http://d.repec.org/n?u=RePEc:wuu:wpaper:hsc1410&r=for
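The QRA idea in the abstract above can be sketched compactly: regress the realized load on the experts' point forecasts with quantile regression, once per quantile, and read the prediction interval off the fitted quantiles. The sketch below is an illustration on simulated data, not the authors' code; the expert biases and noise levels are invented, and the quantile-regression fit uses the standard linear-programming formulation.

```python
import numpy as np
from scipy.optimize import linprog

def quantile_regression(X, y, q):
    """Exact quantile regression via its linear-programming form:
    minimize q*sum(u+) + (1-q)*sum(u-)  s.t.  X @ beta + u+ - u- = y."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), np.full(n, q), np.full(n, 1 - q)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

# Simulated example: three "experts" issue biased, noisy point forecasts of
# load; QRA regresses the realized load on the experts' forecasts at two
# quantiles, and the fitted quantiles form a 90% prediction interval.
rng = np.random.default_rng(0)
n = 200
true_load = 100 + 10 * np.sin(np.linspace(0, 8 * np.pi, n))
experts = np.column_stack([true_load + b + rng.normal(0, 3, n)
                           for b in (-2.0, 0.5, 3.0)])
actual = true_load + rng.normal(0, 3, n)

X = np.column_stack([np.ones(n), experts])
lo = X @ quantile_regression(X, actual, 0.05)
hi = X @ quantile_regression(X, actual, 0.95)
coverage = np.mean((actual >= lo) & (actual <= hi))
print(f"empirical coverage of the 90% interval: {coverage:.2f}")
```

In-sample, the fitted 5% and 95% quantiles should bracket roughly 90% of the realized loads, which is the property the paper evaluates out of sample against benchmark models.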
  2. By: Dimitris P. Louzis (Bank of Greece)
    Abstract: We examine the ability of large-scale vector autoregressions (VARs) to produce accurate macroeconomic (output and inflation) and credit (loans and lending rates) forecasts in Greece during the latest sovereign debt crisis. We implement recently proposed Bayesian shrinkage techniques and evaluate the information content of forty-two (42) monthly macroeconomic and financial variables in a large Bayesian VAR context, using a five-year out-of-sample forecasting period from 2008 to 2013. The empirical results reveal that, overall, large-scale Bayesian VARs, enhanced with key financial variables and coupled with the appropriate level of shrinkage, outperform their small- and medium-scale counterparts with respect to both macroeconomic and credit variables. The forecasting superiority of large Bayesian VARs is particularly clear at long-term forecasting horizons. Finally, empirical evidence suggests that large Bayesian VARs can significantly improve the directional forecasting accuracy of small VARs with respect to loans and lending rate variables.
    Keywords: Forecasting; Bayesian VARs; Crisis; Financial variables
    Date: 2014–06
    URL: http://d.repec.org/n?u=RePEc:bog:wpaper:184&r=for
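The Bayesian shrinkage used in large VARs can be illustrated with a deliberately simplified version: a VAR(1) whose coefficients are shrunk toward a random walk with a single ridge-type tightness parameter. This is a stand-in for the paper's machinery (which handles many variables, multiple lags and a full prior covariance), and the data below are simulated.

```python
import numpy as np

def bvar1_posterior_mean(Y, lam):
    """Posterior-mean coefficients of a VAR(1) under a ridge-type prior
    centred on a random walk (a simplified stand-in for Minnesota-style
    shrinkage; lam controls the prior tightness)."""
    X, Ynext = Y[:-1], Y[1:]
    k = Y.shape[1]
    B0 = np.eye(k)                        # prior mean: random-walk dynamics
    B = np.linalg.solve(X.T @ X + lam * np.eye(k),
                        X.T @ Ynext + lam * B0)
    return B                              # y_t ~ y_{t-1} @ B

# Simulate a persistent bivariate system and forecast one step ahead.
rng = np.random.default_rng(1)
A = np.array([[0.8, 0.1], [0.0, 0.9]])    # true dynamics
Y = np.zeros((300, 2))
for t in range(1, 300):
    Y[t] = Y[t - 1] @ A + rng.normal(0, 0.1, 2)

B_loose = bvar1_posterior_mean(Y, lam=0.1)    # close to OLS
B_tight = bvar1_posterior_mean(Y, lam=1e8)    # shrunk to the random walk
forecast = Y[-1] @ B_loose
print("loosely shrunk coefficients:\n", B_loose.round(2))
```

As the tightness parameter grows, the estimate collapses onto the random-walk prior; the practical question the paper studies is which intermediate tightness forecasts best as the system grows large.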
  3. By: Knut Are Aastveit (Norges Bank (Central Bank of Norway)); Claudia Foroni (Norges Bank (Central Bank of Norway)); Francesco Ravazzolo (Norges Bank (Central Bank of Norway))
    Abstract: In this paper we derive a general parametric bootstrapping approach to compute density forecasts for various types of mixed-data sampling (MIDAS) regressions. We consider both classical and unrestricted MIDAS regressions with and without an autoregressive component. First, we compare the forecasting performance of the different MIDAS models in Monte Carlo simulation experiments. We find that the results in terms of point and density forecasts are coherent. Moreover, the results do not clearly indicate a superior performance of one of the models under scrutiny when the persistence of the low frequency variable is low. Some differences are instead more evident when the persistence is high, for which the AR-MIDAS and the AR-U-MIDAS produce better forecasts. Second, in an empirical exercise we evaluate density forecasts for quarterly US output growth, exploiting information from typical monthly series. We find that MIDAS models applied to survey data provide accurate and timely density forecasts.
    Keywords: Mixed data sampling, Density forecasts, Nowcasting
    JEL: C11 C53 E37
    Date: 2014–07–18
    URL: http://d.repec.org/n?u=RePEc:bno:worpap:2014_10&r=for
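A restricted MIDAS regression of the kind discussed above aggregates high-frequency lags with a parsimonious weight function (commonly exponential Almon) and, for density forecasts, draws from an estimated error distribution. The sketch below profiles the Almon parameters over a small grid and forms a Gaussian parametric-bootstrap density; it is a simplified illustration on simulated data, not the authors' procedure.

```python
import numpy as np

def exp_almon(theta1, theta2, K):
    """Exponential Almon lag weights over K high-frequency lags (sum to 1)."""
    k = np.arange(1, K + 1)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

def midas_fit(y, Xlags, grid1, grid2):
    """Restricted MIDAS by profiling: for each (theta1, theta2) on a grid,
    aggregate the monthly lags and run OLS; keep the lowest-SSR fit."""
    best = None
    for t1 in grid1:
        for t2 in grid2:
            w = exp_almon(t1, t2, Xlags.shape[1])
            Z = np.column_stack([np.ones(len(y)), Xlags @ w])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            resid = y - Z @ beta
            ssr = resid @ resid
            if best is None or ssr < best[0]:
                best = (ssr, w, beta, resid)
    return best

# Simulate quarterly y driven by 12 monthly lags of x, then form a
# parametric-bootstrap density forecast from Gaussian residual draws.
rng = np.random.default_rng(2)
T, K = 120, 12
Xlags = rng.normal(size=(T, K))
w_true = exp_almon(0.2, -0.05, K)
y = 1.0 + 2.0 * (Xlags @ w_true) + rng.normal(0, 0.2, T)

ssr, w_hat, beta_hat, resid = midas_fit(y, Xlags,
                                        grid1=[0.0, 0.1, 0.2, 0.3],
                                        grid2=[-0.1, -0.05, 0.0])
x_next = rng.normal(size=K)                   # next quarter's monthly lags
point = beta_hat[0] + beta_hat[1] * (x_next @ w_hat)
density = point + rng.normal(0, resid.std(), 5000)   # bootstrap draws
print(f"point forecast {point:.2f}, 90% band "
      f"[{np.quantile(density, 0.05):.2f}, {np.quantile(density, 0.95):.2f}]")
```

The grid-profiling step is a crude substitute for nonlinear least squares on the Almon parameters; the bootstrap draws turn the point forecast into a full predictive density, which is the object the paper evaluates.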
  4. By: Dietzel, Marian Alexander; Braun, Nicole; Schäfers, Wolfgang
    Abstract: Purpose – This article examines internet search query data provided by ‘Google Trends’, with respect to its ability to serve as a sentiment indicator and improve commercial real estate forecasting models for transactions and price indices. Methodology – The study uses data from the two largest data providers of US commercial real estate repeat sales indices, namely CoStar and Real Capital Analytics. We design three groups of models: baseline models including fundamental macro data only, models including Google data only, and models combining both sets of data. One-month-ahead forecasts based on VAR models are conducted to compare the forecast accuracy of the models. Findings – The empirical results show that all models augmented with Google data, combining both macro and search data, significantly outperform baseline models that exclude internet search data. Models based on Google data alone outperform the baseline models in 82% of cases. The models achieve a reduction over the baseline models in mean squared forecasting error (MSE) for transactions and prices of up to 35% and 54% respectively. Practical Implications – The results suggest that Google data can serve as an early market indicator. The findings of this study suggest that the inclusion of Google search data in forecasting models can improve forecast accuracy significantly. This implies that commercial real estate forecasters should consider incorporating this free and timely data set into their market forecasts or when performing plausibility checks for future investment decisions. Originality – This is the first paper applying Google search query data to the commercial real estate sector.
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2014_17&r=for
  5. By: Jadevicius, Arvydas; Sloan, Brian; Brown, Andrew
    Abstract: The paper investigates whether complex property market forecasting techniques are better at forecasting than simple specifications. As the research and initial modelling results suggest, simple models outperform the more complex structures. It therefore calls on analysts to make forecasts more user-friendly, and on researchers to pay greater attention to the development and improvement of simpler forecasting techniques, or to the simplification of more complex structures. Further planned research will present an alternative simple modelling approach, successfully employed by economists, that can help achieve greater predictive accuracy.
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2013_10&r=for
  6. By: Papastamos, Dimitrios; Stevenson, Simon
    Abstract: The paper compares and contrasts the bias, accuracy, and uncertainty in US housing start forecasts. In comparison to the large forecast-accuracy literature on macroeconomic series, relatively few papers have considered the accuracy of property forecasts, and even then the majority have looked at commercial real estate. Papers such as McAllister et al. (2007), Bond & Mitchell (2011) and Matysiak et al. (2012) all consider aspects of the IPF Consensus Forecasts for the UK commercial market. Two recent papers (Pierdzioch et al., 2012, 2013) do consider forecasts of housing starts. Our analysis complements Pierdzioch et al. (2012, 2013), who concentrate on whether forecasters tend to herd. This paper examines one-year-ahead forecasts, provided by Consensus Economics, covering a total sample period of 1989 to 2012. The focus of our paper is initially on the overall accuracy of the forecasts of US housing starts. Using conventional measures of forecast accuracy, we consider the overall performance of the professional forecasts over the course of an extended cycle. Using a panel framework, we then consider the rationality (i.e. bias and efficiency) of the forecasts based upon the Holden & Peel (1990) framework. Finally, building upon papers such as Fullerton et al. (2000, 2001) and Stevenson & Young (2007), we compare the accuracy of the published forecasts with econometrically estimated forecasts.
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2014_80&r=for
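Rationality (bias and efficiency) tests of the kind mentioned above are typically built on a Mincer-Zarnowitz style regression of outcomes on forecasts, with unbiasedness implying an intercept of zero and a slope of one. A minimal sketch on simulated housing-start data (the numbers are invented, not the Consensus Economics series):

```python
import numpy as np

def rationality_regression(actual, forecast):
    """Mincer-Zarnowitz style regression actual = a + b*forecast + e;
    unbiasedness/rationality implies a = 0 and b = 1 jointly."""
    X = np.column_stack([np.ones(len(forecast)), forecast])
    coef, *_ = np.linalg.lstsq(X, actual, rcond=None)
    resid = actual - X @ coef
    sigma2 = resid @ resid / (len(actual) - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)   # coefficient covariance matrix
    return coef, cov

# Unbiased forecasts: the outcome equals the forecast plus pure "news",
# so the regression should recover a near 0 and b near 1.
rng = np.random.default_rng(3)
forecast = 1500 + 100 * rng.standard_normal(100)   # hypothetical starts forecast
actual = forecast + 30 * rng.standard_normal(100)  # forecast error is unpredictable
coef, cov = rationality_regression(actual, forecast)
print(f"a = {coef[0]:.1f}, b = {coef[1]:.3f}")
```

A Wald test of (a, b) = (0, 1) using the estimated covariance then delivers the formal rationality verdict; the paper runs the panel analogue of this regression across forecasters.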
  7. By: J. Isaac Miller (Department of Economics, University of Missouri-Columbia)
    Abstract: I propose two simple variable addition test statistics for three tests of the specification of high-frequency predictors in a model to forecast a series observed at a lower frequency. The first is similar to existing test statistics and I show that it is robust to biased forecasts, integrated and cointegrated predictors, and deterministic trends, while it is feasible and consistent even if estimation is not feasible under the alternative. It is not robust to biased forecasts with integrated predictors under the null of a fully aggregated predictor, and size distortion may be severe in this case. The second test statistic proposed is an easily implemented modification of the first that sacrifices some power in small samples but is also robust to this case.
    Keywords: temporal aggregation, mixed-frequency model, MIDAS, variable addition test, forecasting model comparison
    JEL: C12 C22
    Date: 2014–07–14
    URL: http://d.repec.org/n?u=RePEc:umc:wpaper:1412&r=for
  8. By: Tsolacos, Sotiris; Brooks, Chris
    Abstract: Determining the behaviour of yields remains a significant area of research in the real estate field. The last cycle reminded investors of the impact on values of sudden and largely unpredictable yield changes. Initially, capital values took a major hit from yield rises, but subsequent quick reversals in this path generated investment opportunities in several markets. Early detection of yield movements, and the existence of advance signals about likely forthcoming adjustments in yields, is of significant value to investors and lenders. The main aim of this study is to examine the predictive content of leading indicator series for turning points in real estate yields, defined here as the times when yields compress (begin a downward trend) and rise (start following an upward path). More specifically, the objective is to take a forward-looking stance and generate probability signals of imminent movements in yields that represent actionable information for investors. Most previous analysis of yield movements has focused on regression analysis and traditional time-series models such as ARIMAs and vector autoregressions. Such models provide the basis for point forecasts for yields, which to a degree can pick up turning points. The present study employs a dichotomous-variable methodology. A probit model, the natural model for the prediction of turning points, is constructed to interpret signals from leading indicators for possible yield swings. The leading indicators comprise economic leading indicators, financial and other spreads, and expectations-sentiment data. This approach differs from previous work on yield forecasting in its focus on calculating turning-point probabilities; the probit-based outcomes complement the analysis and predictions from other modelling and forecasting methodologies. Prime office yield data for Munich, London West End, Paris CBD and Madrid are used. The selected office centres aim to test the probit approach with leading indicators in geographies which have had different experiences during the eurozone sovereign debt crisis. The evaluation of the resulting models takes place with the commonly used criteria applied to binary models. However, the probability forecasts are assessed explicitly in the context of the realised yield swings in the recent cycle. Finally, the study provides forecasts for turning points outside the sample period.
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2013_219&r=for
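The probit approach described above is straightforward to sketch: code turning points as a binary variable, fit P(turn) = Φ(x′β) by maximum likelihood, and read the fitted probabilities as signals. The example below uses a single simulated leading indicator (the "spread" is hypothetical, not the study's data).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def probit_fit(X, y):
    """Probit MLE: P(y=1 | x) = Phi(x @ beta), fitted by maximising the
    Bernoulli log-likelihood."""
    def negll(beta):
        p = np.clip(norm.cdf(X @ beta), 1e-10, 1 - 1e-10)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return minimize(negll, np.zeros(X.shape[1]), method="BFGS").x

# Simulated leading indicator: a widening spread raises the probability
# that yields hit a turning point next period.
rng = np.random.default_rng(4)
n = 500
spread = rng.normal(0, 1, n)                  # hypothetical leading indicator
latent = -1.0 + 1.5 * spread + rng.normal(0, 1, n)
turn = (latent > 0).astype(float)             # 1 = turning point observed

X = np.column_stack([np.ones(n), spread])
beta = probit_fit(X, turn)
prob = norm.cdf(X @ beta)                     # signal: probability of a turn
print(f"estimated effect of the spread: {beta[1]:.2f}")
```

A threshold on these fitted probabilities (say, signal a turn when the probability exceeds 0.5) gives the kind of actionable early-warning rule the abstract describes.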
  9. By: David E. Allen; Michael McAleer (University of Canterbury); Marcel Scharth
    Abstract: In this paper we document that realized variation measures constructed from high-frequency returns reveal a large degree of volatility risk in stock and index returns, where we characterize volatility risk by the extent to which forecasting errors in realized volatility are substantive. Even though returns standardized by ex post quadratic variation measures are nearly Gaussian, this unpredictability brings considerably more uncertainty to the empirically relevant ex ante distribution of returns. Explicitly modeling this volatility risk is fundamental. We propose a dually asymmetric realized volatility model, which incorporates the fact that realized volatility series are systematically more volatile in high-volatility periods. Returns in this framework display time-varying volatility, skewness and kurtosis. We provide a detailed account of the empirical advantages of the model using data on the S&P 500 index and eight other indexes and stocks.
    Keywords: Realized volatility, volatility of volatility, volatility risk, value-at-risk, forecasting, conditional heteroskedasticity
    JEL: C58 G12
    Date: 2014–07–17
    URL: http://d.repec.org/n?u=RePEc:cbt:econwp:14/20&r=for
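The building block behind the abstract above — realized variance as the sum of squared intraday returns, and the near-Gaussianity of returns standardized by realized volatility — can be checked in a few lines on simulated data (the volatility process here is invented, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(5)
n_days, m = 2000, 78                        # 78 five-minute bars per day
sigma = np.exp(rng.normal(0, 0.5, n_days))  # latent daily volatility
intraday = rng.normal(0, 1, (n_days, m)) * (sigma[:, None] / np.sqrt(m))

rv = np.sum(intraday ** 2, axis=1)          # realized variance, one per day
daily = intraday.sum(axis=1)                # daily close-to-close return
std_ret = daily / np.sqrt(rv)               # standardised by realized vol

def kurtosis(x):
    """Sample kurtosis (Gaussian = 3)."""
    z = x - x.mean()
    return np.mean(z ** 4) / np.mean(z ** 2) ** 2

print(f"raw kurtosis {kurtosis(daily):.1f}, "
      f"standardised kurtosis {kurtosis(std_ret):.1f}")
```

Raw daily returns are fat-tailed because volatility varies across days, while standardization by realized volatility pulls the kurtosis back toward the Gaussian value of 3 — the paper's point is that the ex ante distribution stays fat-tailed because realized volatility itself is hard to forecast.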
  10. By: Calice, Pietro
    Abstract: This paper uses a panel of annual observations for 198 banks in 19 Middle East and North Africa countries over 2001-12 to develop an early warning system for forecasting bank insolvency based on a multivariate logistic regression framework. The results show that the traditional CAMEL indicators are significant predictors of bank insolvency in the region. The predictive power of the model, both in-sample and out-of-sample, is reasonably good, as measured by the receiver operating characteristic curve. The findings of the paper suggest that banking supervision in the Middle East and North Africa could be strengthened by introducing a fundamentals-based, off-site monitoring system to assess the soundness of financial institutions.
    Keywords: Banks & Banking Reform, Bankruptcy and Resolution of Financial Distress, Access to Finance, Financial Crisis Management & Restructuring, Debt Markets
    Date: 2014–07–01
    URL: http://d.repec.org/n?u=RePEc:wbk:wbrwps:6969&r=for
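The paper's framework — a logistic regression on bank fundamentals, evaluated with the ROC curve — can be sketched as follows. The "CAMEL-type" ratios below are simulated stand-ins, not the World Bank data, and the AUC is computed with the standard rank-sum formula.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import rankdata

def logit_fit(X, y):
    """Logistic regression by maximum likelihood, with a numerically
    stable form of log(1 + exp(z))."""
    def negll(beta):
        z = X @ beta
        return np.sum(np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0) - y * z)
    return minimize(negll, np.zeros(X.shape[1]), method="BFGS").x

def auc(score, y):
    """Area under the ROC curve via the rank-sum formula."""
    r = rankdata(score)
    n1 = int(y.sum()); n0 = len(y) - n1
    return (r[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

# Simulated panel: weaker capital and asset-quality ratios raise the
# probability of insolvency within the horizon.
rng = np.random.default_rng(6)
n = 1000
capital = rng.normal(0, 1, n)               # hypothetical CAMEL-type ratios
asset_q = rng.normal(0, 1, n)
latent = -2.0 - 1.2 * capital - 0.8 * asset_q + rng.logistic(0, 1, n)
insolvent = (latent > 0).astype(float)

X = np.column_stack([np.ones(n), capital, asset_q])
beta = logit_fit(X, insolvent)
score = X @ beta
print(f"in-sample AUC: {auc(score, insolvent):.2f}")
```

An AUC of 0.5 means no discrimination and 1.0 perfect ranking of insolvent over solvent banks; the paper reports the same statistic both in-sample and out-of-sample.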
  11. By: Christensen, Bjarke; Sørensen, Tony Vittrup
    Abstract: The presence of spatial correlation in the error terms is a well-known concern in the estimation of hedonic real estate valuation models. Several methods have been devised to address this issue in order to improve precision of parameter estimates and model predictions. When the hedonic model is used for valuation of externalities or amenities, such as noise or green spaces, focus is on reducing omitted variable bias on parameter estimates. In this context, interpretability of the spatial component is paramount. In the context of appraisal or automatic valuation, focus is primarily on out-of-sample prediction performance. This dichotomy of objectives implies a potential trade-off between interpretability of parameter estimates and out-of-sample prediction accuracy. Here, we compare four methods in terms of their suitability for amenity valuation and real estate valuation respectively. These include an aspatial OLS model, the fixed spatial effects OLS model, the spatial error model and a spatial generalized additive model utilizing a soap film smoother over geographic coordinates. Each model is estimated under differing spatial specifications, utilizing a rich, Danish dataset, and the trade-off between predictive accuracy and stability of parameter estimates for a spatial variable of interest is studied.
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2014_96&r=for
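Of the four models compared above, the fixed spatial effects OLS is the easiest to sketch: one dummy per zone absorbs the local price level, and its out-of-sample RMSE can be contrasted with the aspatial model's. The zones, dwelling sizes and effect magnitudes below are invented for illustration, not the Danish dataset.

```python
import numpy as np

rng = np.random.default_rng(7)
n, n_zones = 2000, 20
zone = rng.integers(0, n_zones, n)            # neighbourhood of each sale
size = rng.normal(120, 30, n)                 # floor area, hypothetical input
zone_effect = rng.normal(0, 0.3, n_zones)     # spatially clustered price level
log_price = 10 + 0.006 * size + zone_effect[zone] + rng.normal(0, 0.1, n)

# Aspatial OLS: log price on a constant and size only.
Xa = np.column_stack([np.ones(n), size])
# Fixed spatial effects: one dummy per zone replaces the constant.
Xf = np.column_stack([np.eye(n_zones)[zone], size])

train = np.arange(n) < 1500                   # simple holdout split

def oos_rmse(X):
    """Fit on the training rows, report RMSE on the holdout rows."""
    beta, *_ = np.linalg.lstsq(X[train], log_price[train], rcond=None)
    err = log_price[~train] - X[~train] @ beta
    return np.sqrt(np.mean(err ** 2))

print(f"aspatial RMSE {oos_rmse(Xa):.3f} vs fixed-effects RMSE {oos_rmse(Xf):.3f}")
```

When location matters, the fixed-effects model predicts markedly better out of sample; the trade-off the paper studies is that richer spatial components (spatial error terms, smoothers) can improve prediction further while making the amenity coefficients harder to interpret.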

This nep-for issue is ©2014 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.