nep-for New Economics Papers
on Forecasting
Issue of 2010‒06‒18
eleven papers chosen by
Rob J Hyndman
Monash University

  1. Forecast Combinations By Carlos Capistrán; Allan Timmermann; Marco Aiolfi
  2. Nelson-Siegel, affine and quadratic yield curve specifications: which one is better at forecasting? By Ken Nyholm; Rositsa Vidova-Koleva
  3. Forecasting Issues: Ideas of Decomposition and Combination By Marina Theodosiou
  4. An out-of-sample test for nonlinearity in financial time series: An empirical application By Theodore Panagiotidis
  5. Estimating International Transmission of Shocks Using GDP Forecasts: India and Its Trading Partners By Kajal Lahiri; Gultekin Isiklar
  6. Forecasting The Pricing Kernel of IBNR Claims Development In Property-Casualty Insurance By Cadogan, Godfrey
  7. Analyzing Three-Dimensional Panel Data of Forecasts By Kajal Lahiri; Antony Davies; Xuguang Sheng
  8. Variance Risk Premiums and Predictive Power of Alternative Forward Variances in the Corn Market By Zhiguang Wang; Scott W. Fausti; Bashir A. Qasmi
  9. Nowcasting By Martha Banbura; Domenico Giannone; Lucrezia Reichlin
  10. Learning Machines Supporting Bankruptcy Prediction By Wolfgang Karl Härdle; Rouslan Moro; Linda Hoffmann
  11. Prediction accuracy and sloppiness of log-periodic functions By David Brée; Damien Challet; Pier Paolo Peirano

  1. By: Carlos Capistrán; Allan Timmermann; Marco Aiolfi
    Abstract: We consider combinations of subjective survey forecasts and model-based forecasts from linear and non-linear univariate specifications as well as multivariate factor-augmented models. Empirical results suggest that a simple equal-weighted average of survey forecasts outperforms the best model-based forecasts for a majority of macroeconomic variables and forecast horizons. Additional improvements can in some cases be gained by using a simple equal-weighted average of survey and model-based forecasts. We also provide an analysis of the importance of model instability for explaining gains from forecast combination. Analytical and simulation results uncover break scenarios where forecast combinations outperform the best individual forecasting model. (An illustrative code sketch follows this entry.)
    Keywords: Factor Based Forecasts, Non-linear Forecasts, Structural Breaks, Survey Forecasts, Univariate Forecasts.
    JEL: C53 E
    Date: 2010–06
    URL: http://d.repec.org/n?u=RePEc:bdm:wpaper:2010-04&r=for
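    A minimal sketch of the equal-weighted combination described above, assuming the individual survey and model forecasts are held in a NumPy array with one column per forecaster; this illustrates only the combination rule, not the authors' code.

      import numpy as np

      def equal_weight_combination(forecasts):
          """Average the individual forecasts at each forecast origin.

          forecasts : 2-D array, rows = forecast origins, columns = individual
                      survey or model forecasts of the same target.
          """
          return np.nanmean(forecasts, axis=1)  # ignore missing respondents

      def rmse(forecast, actual):
          """Root mean squared forecast error."""
          return np.sqrt(np.mean((forecast - actual) ** 2))

      # Hypothetical example: three forecasters, five forecast origins.
      individual = np.array([[2.1, 1.8, 2.5],
                             [1.9, 2.2, 2.0],
                             [2.4, 2.0, 2.3],
                             [1.7, 1.9, 2.1],
                             [2.2, 2.1, 1.8]])
      actual = np.array([2.0, 2.1, 2.2, 1.9, 2.0])

      combined = equal_weight_combination(individual)
      print(rmse(combined, actual))                                # combination error
      print([rmse(individual[:, j], actual) for j in range(3)])    # individual errors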
  2. By: Ken Nyholm (European Central Bank, Risk Management Division, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.); Rositsa Vidova-Koleva
    Abstract: In this paper we compare the in-sample fit and out-of-sample forecasting performance of no-arbitrage quadratic and essentially affine term structure models, as well as the dynamic Nelson-Siegel model. In total eleven model variants are evaluated, comprising five quadratic, four affine and two Nelson-Siegel models. Recursive re-estimation and out-of-sample one-, six- and twelve-month-ahead forecasts are generated and evaluated using monthly US data for yields observed at maturities of 1, 6, 12, 24, 60 and 120 months. Our results indicate that quadratic models provide the best in-sample fit, while the best out-of-sample performance is generated by three-factor affine models and the dynamic Nelson-Siegel model variants. However, statistical tests fail to identify a single best forecasting model class. (An illustrative code sketch follows this entry.) JEL Classification: C14, C15, G12.
    Keywords: Nelson-Siegel model, affine term structure models, quadratic yield curve models, forecast performance.
    Date: 2010–06
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20101205&r=for
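    The dynamic Nelson-Siegel model referred to above writes each yield as a combination of level, slope and curvature factors with maturity-dependent loadings. A minimal sketch of that yield equation; the decay parameter and factor values below are illustrative assumptions, not estimates from the paper.

      import numpy as np

      def nelson_siegel_yield(tau, beta0, beta1, beta2, lam=0.0609):
          """Nelson-Siegel yield at maturity tau (in months).

          beta0: level, beta1: slope, beta2: curvature.  lam is the decay
          parameter; 0.0609 is a common monthly choice, used here only
          for illustration.
          """
          tau = np.asarray(tau, dtype=float)
          slope_load = (1 - np.exp(-lam * tau)) / (lam * tau)
          curv_load = slope_load - np.exp(-lam * tau)
          return beta0 + beta1 * slope_load + beta2 * curv_load

      # Hypothetical factor values at the maturities used in the paper.
      maturities = np.array([1, 6, 12, 24, 60, 120])
      print(nelson_siegel_yield(maturities, beta0=5.0, beta1=-2.0, beta2=1.0))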
  3. By: Marina Theodosiou (Central Bank of Cyprus)
    Abstract: Combination techniques and decomposition procedures have been applied to time series forecasting to enhance prediction accuracy and to facilitate the analysis of data, respectively. However, the restrictive complexity of some combination techniques and the difficulties associated with applying the decomposition results to the extrapolation of data, mainly due to the large variability involved in economic and financial time series, have limited their application and compromised their development. This paper is a re-examination of the benefits and limitations of decomposition and combination techniques in the area of forecasting, and a contribution to the field with a new forecasting methodology. The new methodology is based on the disaggregation of time series components through the STL decomposition procedure, the extrapolation of linear combinations of the disaggregated sub-series, and the reaggregation of the extrapolations to obtain an estimate of the global series. When the methodology is applied to data from the NN3 and M1 Competition series, the results suggest that it can outperform competing statistical techniques. The power of the method lies in its ability to perform consistently well, irrespective of the characteristics, underlying structure and level of noise of the data. (An illustrative code sketch follows this entry.)
    Keywords: ARIMA models, combining forecasts, decomposition, error measures, evaluating forecasts, forecasting competitions, time series
    JEL: C53
    Date: 2010–06
    URL: http://d.repec.org/n?u=RePEc:cyb:wpaper:2010-4&r=for
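    A minimal sketch of the decompose-extrapolate-reaggregate idea described above, using the STL implementation in statsmodels; the extrapolation rules for each sub-series are simple placeholders (linear trend, repeated seasonal cycle, zero remainder), not the paper's combination scheme.

      import numpy as np
      from statsmodels.tsa.seasonal import STL

      def stl_forecast(y, period=12, horizon=12):
          """Decompose y with STL, extrapolate each component, re-aggregate."""
          fit = STL(y, period=period).fit()
          t = np.arange(len(y))

          # Trend: extend a straight line fitted to the estimated trend.
          slope, intercept = np.polyfit(t, fit.trend, 1)
          trend_fc = intercept + slope * np.arange(len(y), len(y) + horizon)

          # Seasonal: repeat the last observed seasonal cycle.
          seasonal_fc = np.resize(fit.seasonal[-period:], horizon)

          # Remainder: treated as zero-mean noise and ignored.
          return trend_fc + seasonal_fc

      # Hypothetical monthly series: trend + seasonality + noise.
      rng = np.random.default_rng(0)
      y = (10 + 0.1 * np.arange(120)
           + 2 * np.sin(2 * np.pi * np.arange(120) / 12)
           + rng.normal(scale=0.5, size=120))
      print(stl_forecast(y, period=12, horizon=12))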
  4. By: Theodore Panagiotidis (Department of Economics, University of Macedonia)
    Abstract: This paper employs a local information, nearest neighbour forecasting methodology to test for evidence of nonlinearity in financial time series. Evidence from well-known data generating processes is provided and compared with returns from the Athens Stock Exchange, given the in-sample evidence of nonlinear dynamics that has appeared in the literature. Nearest neighbour forecasts fail to produce more accurate forecasts than a simple AR model. This does not substantiate the presence of in-sample nonlinearity in the series. (An illustrative code sketch follows this entry.)
    Keywords: nearest neighbour, nonlinearity
    JEL: C22 C53 G10
    Date: 2010–06
    URL: http://d.repec.org/n?u=RePEc:mcd:mcddps:2010_08&r=for
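    A minimal sketch of local, nearest-neighbour forecasting of the kind described above: embed the series in delay vectors, find the k past patterns closest to the most recent one, and average their one-step continuations. The embedding dimension and k below are arbitrary choices for illustration.

      import numpy as np

      def nearest_neighbour_forecast(y, embed_dim=3, k=5):
          """One-step-ahead forecast from the k nearest delay-vector neighbours."""
          y = np.asarray(y, dtype=float)
          # Library of past delay vectors and their one-step continuations.
          vectors = np.array([y[i:i + embed_dim]
                              for i in range(len(y) - embed_dim)])
          targets = y[embed_dim:]
          current = y[-embed_dim:]
          dist = np.linalg.norm(vectors - current, axis=1)
          nearest = np.argsort(dist)[:k]
          return targets[nearest].mean()

      # Hypothetical example on a simulated AR(1) series.
      rng = np.random.default_rng(1)
      y = [0.0]
      for _ in range(500):
          y.append(0.6 * y[-1] + rng.normal())
      print(nearest_neighbour_forecast(y))
      print(0.6 * y[-1])   # the corresponding AR(1) point forecast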
  5. By: Kajal Lahiri; Gultekin Isiklar
    Abstract: Using a Factor Structural Vector Autoregressive (FSVAR) model and monthly GDP growth forecasts during 1995-2003, we find that the Indian economy responds largely to domestic and Asian common shocks, and much less to shocks from the West. However, when we exclude the Asian crisis period from our sample, the Western factor comes out as strong as the Asian factor, each contributing 16% to Indian real GDP growth, suggesting that the dynamics of the transmission mechanism are time-varying. Our methodology based on forecast data can help policy makers, especially in developing countries with frequent economic crises and data limitations, adjust their policy targets in real time. (An illustrative code sketch follows this entry.)
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:nya:albaec:10-06&r=for
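    The contribution shares quoted above come from decomposing the variation in Indian growth forecasts into contributions from different common shocks. As a rough, generic illustration (a plain VAR with Cholesky identification, not the authors' factor-structural model), such a decomposition can be computed with statsmodels; the data below are simulated.

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.api import VAR

      # Simulated monthly growth-forecast revisions for three regions; in the
      # paper the data are GDP growth forecasts for India and its partners.
      rng = np.random.default_rng(2)
      data = pd.DataFrame(rng.normal(size=(120, 3)),
                          columns=["india", "asia", "west"])

      res = VAR(data).fit(maxlags=2)
      # Share of each variable's h-step forecast error variance attributable
      # to shocks in each series (Cholesky ordering as listed above).
      fevd = res.fevd(12)
      print(fevd.decomp[0, -1])   # decomposition for "india" at horizon 12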
  6. By: Cadogan, Godfrey
    Abstract: A new method of forecasting the pricing kernel, i.e., the stochastic claim inflation or link ratio function, of incurred but not reported (IBNR) claims in property-casualty insurance from residuals in a dynamic claims forecast model is presented. We employ a pseudo Kalman filter approach by using claims risk exposure estimates to reconstruct innovations in stochastic claims development. We find that the pricing kernel forecast is a product measure of the innovations. We show how these results affect performance measurement, including but not limited to risk-adjusted return on capital, through insurance accounting relationships for adjusted underwriting results and for loss ratio or pure premium calculations. Additionally, we show how, in the context of Wold decomposition, diagnostics from our model can be used to compute the signal-to-noise ratio for, and cross-check, unobservable pricing kernels used to forecast claims. Furthermore, we prove that a single risk exposure factor connects seemingly unrelated specifications for the loss link ratio and claims volatility.
    Keywords: IBNR claims ladder; claims reserve forecast; stochastic claim inflation; claims risk exposure; link ratio function; property-casualty insurance; insurance accounting
    JEL: C53 G22 M49
    Date: 2010–06–10
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:23235&r=for
  7. By: Kajal Lahiri; Antony Davies; Xuguang Sheng
    Abstract: With the proliferation of quality multi-dimensional surveys, it becomes increasingly important for researchers to employ an econometric framework in which these data can be properly analyzed and put to their maximum use. In this chapter we have summarized such a framework, developed in Davies and Lahiri (1995, 1999), and illustrated some of the uses of these multi-dimensional panel data. In particular, we have characterized the adaptive expectations mechanism in the context of the broader rational and implicit expectations hypotheses, and suggested ways of testing one hypothesis against the others. We find that, under the adaptive expectations model, a forecaster who fully adapts to new information is equivalent to a forecaster whose forecast bias increases linearly with the forecast horizon. A multi-dimensional forecast panel also provides the means to distinguish between anticipated and unanticipated changes in the forecast target, as well as the volatilities associated with the anticipated and unanticipated changes. We show that a proper identification of anticipated changes and their perceived volatilities is critical to the correct understanding and estimation of forecast uncertainty. In the absence of such rich forecast data, researchers have typically used the variance of forecast errors as a proxy for shocks. It is the perceived volatility of the anticipated change, and not the (subsequently observed) volatility of the target variable or the unanticipated change, that should condition forecast uncertainty. This is because forecast uncertainty is formed when a forecast is made, and hence anything that was unknown to the forecaster at that time should not be a factor in determining forecast uncertainty. This finding has important implications for how to estimate forecast uncertainty in real time and how to construct a measure of average historical uncertainty, cf. Lahiri and Sheng (2010a). Finally, we show how the Rational Expectations hypothesis should be tested by constructing an appropriate variance-covariance matrix of the forecast errors when a specific type of multi-dimensional panel data is available.
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:nya:albaec:10-07&r=for
  8. By: Zhiguang Wang (South Dakota State University Department of Economics); Scott W. Fausti (South Dakota State University Department of Economics); Bashir A. Qasmi (South Dakota State University Department of Economics)
    Abstract: We propose a fear index for corn using the variance swap rate synthesized from out-of-the-money call and put options as a measure of implied variance. Previous studies estimate implied variance based on the Black (1976) model or forecast variance using GARCH models. Our implied variance approach, based on the variance swap rate, is model independent. We compute daily 60-day variance risk premiums based on the difference between the realized variance and the implied variance for the period from 1987 to 2009. We find negative and time-varying variance risk premiums in the corn market. Our results contrast with Egelkraut, Garcia, and Sherrick (2007), but are in line with the findings of Simon (2002). We conclude that our synthesized implied variance contains superior information about future realized variance relative to the implied variance estimates based on the Black (1976) model and the variance forecasted using the GARCH(1,1) model. (An illustrative code sketch follows this entry.)
    Keywords: Variance Risk Premium, Variance Swap, Model-free Variance, Implied Variance, Realized Variance, Corn VIX
    JEL: Q13 Q14 G13 G14
    Date: 2010–05
    URL: http://d.repec.org/n?u=RePEc:sda:staffp:100001&r=for
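    A minimal sketch of the model-free implied variance described above, using the standard VIX-style discretization of the variance swap rate over a grid of out-of-the-money option prices; the strikes, prices and rates below are made up purely for illustration.

      import numpy as np

      def variance_swap_rate(strikes, otm_prices, forward, rate, tau):
          """VIX-style model-free implied variance from OTM option prices.

          strikes    : sorted strikes of the OTM options used
          otm_prices : OTM put price (K < forward) or call price (K > forward)
          forward    : forward price of the underlying
          rate, tau  : risk-free rate and time to expiry in years
          """
          strikes = np.asarray(strikes, dtype=float)
          otm_prices = np.asarray(otm_prices, dtype=float)
          dk = np.gradient(strikes)                  # strike spacing
          k0 = strikes[strikes <= forward].max()     # first strike below forward
          contrib = (dk / strikes ** 2) * np.exp(rate * tau) * otm_prices
          return (2.0 / tau) * contrib.sum() - (1.0 / tau) * (forward / k0 - 1) ** 2

      # Made-up corn option quotes (cents per bushel), roughly 60 days to expiry.
      strikes = np.array([320, 340, 360, 380, 400, 420, 440], dtype=float)
      prices = np.array([1.5, 3.0, 6.5, 12.0, 7.0, 3.5, 1.8])
      print(variance_swap_rate(strikes, prices, forward=385.0,
                               rate=0.02, tau=60 / 365))   # annualized variance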
  9. By: Martha Banbura; Domenico Giannone; Lucrezia Reichlin
    Abstract: Economists have imperfect knowledge of the present state of the economy and even of the recent past. Many key statistics are released with a long delay and are subsequently revised. As a consequence, unlike weather forecasters, who know what the weather is today and only have to predict the weather tomorrow, economists have to forecast the present and even the recent past. The problem of predicting the present, the very near future and the very recent past is labelled nowcasting and is the subject of this paper.
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/57648&r=for
  10. By: Wolfgang Karl Härdle; Rouslan Moro; Linda Hoffmann
    Abstract: In many economic applications it is desirable to make predictions about the future financial status of a company, mainly whether it will default or not. A support vector machine (SVM) is a learning method that uses historical data to establish a classification rule, called a score. Companies with scores above zero belong to one group and the rest to another group. Probability of default (PD) values can then be estimated from the scores provided by an SVM. The transformation used in this paper combines rank weighting with smoothing via the pool-adjacent-violators (PAV) algorithm, which makes the conversion monotone. This discussion paper is based on the Creditreform database from 1997 to 2002. The indicator variables were converted to financial ratios; it transpired that eight of the 25 were useful for training the SVM. Those ratios relate to activity, profitability, liquidity and leverage. We conclude that SVMs are capable of extracting the necessary information from financial balance sheets and of predicting the future solvency or insolvency of a company. Banks in particular will benefit from these results, which allow them to be more aware of their risk when lending money. (An illustrative code sketch follows this entry.)
    Keywords: Support Vector Machine, Bankruptcy, Default Probabilities Prediction, Profitability
    JEL: C14 G33 C45
    Date: 2010–06
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2010-032&r=for
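    A minimal sketch of the score-to-PD step described above: train an SVM on financial ratios and map its decision scores to monotone default probabilities with isotonic regression (the pool-adjacent-violators algorithm). The data are simulated, not the Creditreform sample, and the rank-weighting step is omitted.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.isotonic import IsotonicRegression

      # Simulated financial ratios (columns) and default indicator (1 = default).
      rng = np.random.default_rng(3)
      X = rng.normal(size=(500, 8))          # eight ratios, as in the paper
      y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=500) > 1.0).astype(int)

      # SVM score: companies with positive scores fall on the default side.
      svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
      scores = svm.decision_function(X)

      # PAV / isotonic regression maps scores to monotone default probabilities.
      pav = IsotonicRegression(out_of_bounds="clip").fit(scores, y)
      pd_values = pav.predict(scores)
      print(pd_values[:10])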
  11. By: David Brée; Damien Challet; Pier Paolo Peirano
    Abstract: We show that log-periodic power-law (LPPL) functions are intrinsically very hard to fit to time series. This comes from their sloppiness: the squared residuals depend very much on some combinations of parameters and very little on others. The time of the singularity, which is supposed to give an estimate of the day of the crash, belongs to the latter category. We discuss in detail why and how the fitting procedure must take into account the sloppy nature of this kind of model. We then test the reliability of LPPLs on synthetic AR(1) data replicating the Hang Seng 1987 crash and show that even this case is borderline regarding predictability of the divergence time. We finally argue that current methods used to estimate a probabilistic time window for the divergence time are likely to be over-optimistic. (An illustrative code sketch follows this entry.)
    Date: 2010–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1006.2010&r=for
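    For reference, the log-periodic power-law function discussed above, in its usual parametrisation; the critical time tc is the parameter the abstract identifies as weakly constrained by the fit. Parameter values below are illustrative only, not fitted to any real series.

      import numpy as np

      def lppl(t, tc, A, B, C, m, omega, phi):
          """Log-periodic power law for the expected log-price before the
          critical time tc (requires t < tc)."""
          dt = tc - t
          return A + B * dt ** m + C * dt ** m * np.cos(omega * np.log(dt) - phi)

      # Illustrative parameter values.
      t = np.linspace(0.0, 990.0, 500)
      print(lppl(t, tc=1000.0, A=7.0, B=-0.5, C=0.05, m=0.5, omega=8.0, phi=1.0)[:5])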

This nep-for issue is ©2010 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.