nep-for New Economics Papers
on Forecasting
Issue of 2014‒06‒07
nine papers chosen by
Rob J Hyndman
Monash University

  1. Forecasting the Price of Gold By Hossein Hassani; Emmanuel Sirimal Silva; Rangan Gupta
  2. Forecasting German Key Macroeconomic Variables Using Large Dataset Methods By Inske Pirschel; Maik Wolters
  3. The Predictive Performance of Fundamental Inflation Concepts: An Application to the Euro Area and the United States By Stephen McKnight; Alexander Mihailov; Kerry Patterson; Fabio Rumler
  4. Factor High-Frequency Based Volatility (HEAVY) Models By Kevin Sheppard
  5. Default Prediction for Small-Medium Enterprises in France: A comparative approach By Sami BEN JABEUR; Youssef FAHMI
  6. Financial Distress Prediction Using PLS Discriminant Analysis and Support Vector Machines By Sami BEN JABEUR; Youssef FAHMI
  7. Integration of a Predictive, Continuous Time Neural Network into Securities Market Trading Operations By Christopher S Kirk
  8. Categorization and Coordination By Vessela Daskalova; Nicolaas J. Vriend
  9. Confirmation: What's in the evidence? By Kataria, Mitesh

  1. By: Hossein Hassani (The Statistical Research Centre, Bournemouth University, UK); Emmanuel Sirimal Silva (The Statistical Research Centre, Bournemouth University, UK); Rangan Gupta (Department of Economics, University of Pretoria, Pretoria, South Africa)
    Abstract: This paper evaluates the appropriateness of a variety of existing forecasting techniques (17 methods) at providing accurate and statistically significant forecasts of the gold price. We report the results from the 9 most competitive techniques. Special consideration is given to the ability of these techniques to provide forecasts that outperform the random walk, as we noticed that certain multivariate models (which included the prices of silver, platinum, palladium and rhodium, besides gold) were also unable to outperform the random walk in this case. Interestingly, the results show that none of the forecasting techniques is able to outperform the random walk at horizons of 1 and 9 steps ahead, and on average the exponential smoothing model provides the best forecasts in terms of the lowest root mean squared error over the 24-month forecasting horizon. Moreover, we find that the univariate models used in this paper are able to outperform the Bayesian autoregressive and Bayesian vector autoregressive models, with exponential smoothing (ETS) reporting statistically significant results in comparison to these models, and to the classical autoregressive and vector autoregressive models in most cases.
    Keywords: ARIMA, ETS, TBATS, ARFIMA, AR, VAR, BAR, BVAR, Random Walk, Gold, Forecast, Multivariate, Univariate
    Date: 2014–06
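    As an illustration of the benchmark exercise in this abstract, the sketch below pits a simple exponential smoothing forecast against the random walk (no-change) forecast by root mean squared error. The simulated series, train/test split and smoothing parameter are illustrative assumptions, not the paper's data or settings.

```python
import numpy as np

rng = np.random.default_rng(0)
prices = 1200 + np.cumsum(rng.normal(0, 10, 120))  # simulated monthly gold-like series
train, test = prices[:96], prices[96:]

# Random walk benchmark: the last observed value is the forecast at every horizon.
rw_forecast = np.full(len(test), train[-1])

# Simple exponential smoothing: level = alpha*y + (1 - alpha)*level;
# the multi-step forecast is flat at the final smoothed level.
def ses_level(y, alpha=0.3):
    level = y[0]
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level

ses_forecast = np.full(len(test), ses_level(train))

def rmse(actual, predicted):
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

print(f"RW RMSE:  {rmse(test, rw_forecast):.2f}")
print(f"SES RMSE: {rmse(test, ses_forecast):.2f}")
```

    On a true random walk the no-change forecast is hard to beat, which is exactly the hurdle the paper's 17 methods face.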
  2. By: Inske Pirschel; Maik Wolters
    Abstract: We study the forecasting performance of three alternative large-scale approaches using a dataset for Germany that consists of 123 variables at quarterly frequency. These three approaches handle the dimensionality problem posed by such a large dataset by aggregating information, but at different levels: we consider different factor models, a large Bayesian vector autoregression and model averaging techniques, in which aggregation takes place before, during and after the estimation of the different models, respectively. We find that overall the large Bayesian VAR and the Bayesian factor-augmented VAR provide the most precise forecasts for a set of eleven core macroeconomic variables, including GDP growth and CPI inflation, and that the performance of these two models is relatively robust to model misspecification. However, our results also indicate that in many cases the gains in forecasting accuracy relative to a simple univariate autoregression are only moderate, and none of the models would have been able to predict the Great Recession.
    Keywords: Large Bayesian VAR, Model averaging, Factor models, Great Recession
    JEL: C53 E31 E32 E37 E47
    Date: 2014–05
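    The "aggregate before estimation" idea behind the factor approach can be sketched as follows: extract a few principal-component factors from a large panel, then forecast a target variable with a factor-augmented autoregression. The panel here is simulated (with the paper's 123-variable dimension), and the three-factor choice is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, K = 120, 123, 3                       # quarters, variables, factors
loadings = rng.normal(size=(K, N))
panel = rng.normal(size=(T, K)) @ loadings + rng.normal(0, 0.5, size=(T, N))
target = panel[:, 0]                        # stand-in for, say, GDP growth

# Extract principal-component factors via SVD of the standardized panel.
Z = (panel - panel.mean(axis=0)) / panel.std(axis=0)
U, s, _ = np.linalg.svd(Z, full_matrices=False)
factors = U[:, :K] * s[:K]

# Factor-augmented one-step forecast: regress y_{t+1} on y_t and factors_t by OLS.
X = np.column_stack([np.ones(T - 1), target[:-1], factors[:-1]])
beta, *_ = np.linalg.lstsq(X, target[1:], rcond=None)
forecast = np.concatenate([[1.0], target[-1:], factors[-1]]) @ beta
```

    A large Bayesian VAR instead keeps all 123 variables and shrinks the coefficients; model averaging aggregates only after estimating many small models.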
  3. By: Stephen McKnight (Centro de Estudios Económicos, El Colegio de México); Alexander Mihailov (Department of Economics, University of Reading); Kerry Patterson (Department of Economics, University of Reading); Fabio Rumler (Oesterreichische Nationalbank, Economic Analysis Division)
    Abstract: Does theory aid inflation forecasting? To date, the evidence suggests that there is no systematic advantage of theory-based models of inflation dynamics over their astructural counterparts. This study reconsiders the issue by developing a “semi-structural” forecasting procedure comprising two key ingredients. First, a prediction for the cyclical component of inflation is obtained employing the concept of “fundamental inflation”. The latter is computed from a canonical two-country monetary model, either via estimation of the reduced-form parameters of the New Keynesian Phillips Curve by the generalized method of moments, or via calibration of its structural parameters. The computation of fundamental inflation requires multistep forecasts for the model-implied cyclical inflation drivers, which we generate via respective auxiliary vector autoregressions. Second, a driftless random walk prediction is employed for the trend component of inflation, on which theory has little to say. Using quarterly data for both the United States and the Euro Area for the period 1970-2010, and rolling-window re-estimation to accommodate gradual structural change, we find that such semi-structural inflation forecasts outperform conventional univariate forecasts at all examined horizons. Our results thus suggest that theory can indeed play an important role in forecasting inflation when appropriately combined with relevant data-driven features.
    Keywords: fundamental inflation, New Keynesian Phillips Curve, inflation dynamics, predictive accuracy, money in the open economy, semi-structural forecasting
    JEL: C52 C53 E31 E37 F41 F47
    Date: 2014–05–29
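    The trend/cycle split in this procedure can be sketched in miniature: forecast the trend with a driftless random walk (the last estimated trend value) and the cycle with a persistent autoregression. Everything below is a stand-in — the moving-average trend estimate and AR(1) cycle replace the paper's model-implied "fundamental inflation", and the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 160
trend = np.cumsum(rng.normal(0, 0.05, T))        # slow-moving trend (random walk)
cycle = np.zeros(T)
for t in range(1, T):                            # persistent AR(1) cycle
    cycle[t] = 0.8 * cycle[t - 1] + rng.normal(0, 0.2)
inflation = trend + cycle

# Stand-in decomposition: a trailing moving average approximates the trend.
window = 8
est_trend = np.convolve(inflation, np.ones(window) / window, mode="valid")
est_cycle = inflation[window - 1:] - est_trend

# Forecast: driftless random walk for the trend, estimated AR(1) for the cycle.
phi = np.polyfit(est_cycle[:-1], est_cycle[1:], 1)[0]
h = np.arange(1, 5)                              # horizons 1..4 quarters
forecast = est_trend[-1] + (phi ** h) * est_cycle[-1]
```

    The cycle forecast decays geometrically toward the trend, so at long horizons the semi-structural forecast converges to the random walk trend prediction.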
  4. By: Kevin Sheppard
    Abstract: We propose a new class of multivariate volatility models utilizing realized measures of asset volatility and covolatility extracted from high-frequency data. Dimension reduction for estimation of large covariance matrices is achieved by imposing a factor structure with time-varying conditional factor loadings. Statistical properties of the model, including conditions that ensure covariance stationarity of returns, are established. The model is applied to the conditional covariances of large U.S. financial institutions during the financial crisis, where empirical results show that the new model has both superior in- and out-of-sample properties. We show that the superior performance applies to a wide range of quantities of interest, including volatilities, covolatilities, betas and scenario-based risk measures, with the model's performance particularly strong at short forecast horizons.
    Keywords: Conditional Beta, Conditional Covariance, Forecasting, HEAVY, Marginal Expected Shortfall, Realized Covariance, Realized Kernel, Systematic Risk
    JEL: C32 C53 C58 G17 G21
    Date: 2014–05–30
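    The realized measures that feed a HEAVY-type model are straightforward to compute; the sketch below builds a daily realized covariance matrix from intraday returns and the one-factor realized betas that motivate the dimension reduction. The 5-minute grid, asset count and market proxy are illustrative assumptions on simulated returns.

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 78, 4                                  # 5-minute returns per day, assets
returns = rng.normal(0.0, 0.001, size=(M, N))

# Daily realized covariance: sum of outer products of intraday return vectors.
realized_cov = returns.T @ returns            # N x N realized measure
realized_vol = np.sqrt(np.diag(realized_cov)) # realized volatilities

# A one-factor structure reduces dimension: realized betas on a market proxy.
market = returns.mean(axis=1)
realized_betas = (returns.T @ market) / (market @ market)
```

    In the paper these daily realized measures drive the dynamics of the conditional covariance matrix, rather than squared daily returns as in a standard multivariate GARCH.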
  5. By: Sami BEN JABEUR; Youssef FAHMI
    Abstract: The aim of this paper is to compare three statistical methods for predicting corporate financial distress: discriminant analysis, the logit model and random forests. The approaches are applied to a sample of 800 companies over the period 2006 to 2008, using 33 financial ratios. The results show the superiority of the random forest approach.
    Date: 2014–06–02
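    The three-way comparison can be sketched with scikit-learn on synthetic data that mirrors the paper's dimensions (800 firms, 33 ratios, distressed firms as the minority class); the class balance, split and model settings are assumptions, not the paper's.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 800-firm, 33-ratio sample.
X, y = make_classification(n_samples=800, n_features=33, n_informative=10,
                           weights=[0.85], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "Discriminant analysis": LinearDiscriminantAnalysis(),
    "Logit": LogisticRegression(max_iter=1000),
    "Random forest": RandomForestClassifier(random_state=0),
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
print(scores)
```

    Out-of-sample accuracy on a held-out quarter of the sample is one simple way to operationalize the comparison the abstract describes.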
  6. By: Sami BEN JABEUR; Youssef FAHMI
    Abstract: The aim of this paper is to compare two statistical methods for predicting corporate financial distress: PLS (partial least squares) discriminant analysis and the support vector machine (SVM). PLS discriminant analysis (PLS-DA) is a regression method relating a qualitative dependent variable to a set of quantitative or qualitative explanatory variables. The SVM may be viewed as a non-parametric technique; it is based on a so-called kernel function, which allows an optimal separation of the data. In this work we use data on French firms for which a set of financial ratios is calculated.
    Keywords: financial distress prediction, PLS discriminant analysis, Support Vector Machine
    Date: 2014–06–02
  7. By: Christopher S Kirk
    Abstract: This paper describes recent development and test implementation of a continuous time recurrent neural network that has been configured to predict rates of change in securities. It presents outcomes in the context of popular technical analysis indicators and highlights the potential impact of continuous predictive capability on securities market trading operations.
    Date: 2014–06
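    The core dynamics of a continuous-time recurrent neural network are tau * dy/dt = -y + tanh(W y + W_in x), integrated in small time steps. The Euler-step sketch below shows those dynamics only; the network size, time constant and (untrained) weights are assumptions, and no claim is made about the paper's architecture or training.

```python
import numpy as np

rng = np.random.default_rng(3)
n_units, tau, dt = 8, 1.0, 0.1
W = rng.normal(0, 0.5, (n_units, n_units))   # recurrent weights (untrained)
W_in = rng.normal(0, 0.5, n_units)           # input weights for a scalar input
readout = rng.normal(0, 0.5, n_units)        # linear readout (untrained)

def step(y, x):
    """One Euler step of tau * dy/dt = -y + tanh(W y + W_in x)."""
    return y + dt * (-y + np.tanh(W @ y + W_in * x)) / tau

y = np.zeros(n_units)
for x in rng.normal(size=50):                # feed a stream of scalar returns
    y = step(y, x)
predicted_rate = readout @ y                 # predicted rate of change
```

    Because the state evolves continuously between observations, such a network can emit a prediction at any instant, which is the "continuous predictive capability" the abstract highlights.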
  8. By: Vessela Daskalova (University of Cambridge); Nicolaas J. Vriend (Queen Mary University of London)
    Abstract: The use of coarse categories is prevalent in various situations and has been linked to biased economic outcomes, ranging from discrimination against minorities to empirical anomalies in financial markets. In this paper we study economic rationales for categorizing coarsely. We think of the way one categorizes one's past experiences as a model of the world that is used to make predictions about unobservable attributes in new situations. We first show that coarse categorization may be optimal for making predictions in stochastic environments in which an individual has a limited number of past experiences. Building on this result, and this is a key new insight from our paper, we show formally that cases in which people have a motive to coordinate their predictions with others may provide an economic rationale for categorizing coarsely. Our analysis explains the intuition behind this rationale.
    Keywords: Categorization, Prediction, Decision-making, Coordination, Learning
    JEL: D83 C72
    Date: 2014–05
  9. By: Kataria, Mitesh (Department of Economics, School of Business, Economics and Law, Göteborg University)
    Abstract: The difference between accommodated evidence (i.e., when evidence is known first and a hypothesis is proposed to explain and fit the observations) and predicted evidence (i.e., when evidence verifies the prediction of a hypothesis formulated before observing the evidence) is investigated. According to Bayesian confirmation theory, accommodated and predicted evidence constitute equally strong confirmation. Using a survey experiment on a sample of students, however, it is shown that predicted evidence is perceived to constitute stronger confirmation than accommodated evidence. Turning to the question of why, we find that trusting a model to predict correctly is intrinsically related to trust in the proposer's (i.e., the scientist's) level of knowledge, and subjects are more persuaded of the proposer's ability to utilize this knowledge to predict in the future if the proposer has in the past been shown to be successful in predicting rather than accommodating evidence. The existence of such an indirect relationship between hypothesis and evidence can be considered to impose undesirable subjectivity and arbitrariness on questions of evidential support. Evidential support is ideally a direct and impersonal relationship between hypothesis and evidence, not the indirect and personal relationship it is shown to be in this paper.
    Keywords: Evidence; Prediction; Postdiction; Retrodiction; Post-hoc analysis; Methodology
    JEL: C11 C12 C80
    Date: 2014–05
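    The Bayesian point in this abstract is easy to verify numerically: Bayes' rule contains no term for when the hypothesis was formulated, so accommodated and predicted evidence yield the same posterior. The prior and likelihoods below are arbitrary illustrative numbers.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) by Bayes' rule."""
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    return prior * p_e_given_h / p_e

# Identical inputs regardless of whether E was predicted or accommodated,
# hence identical confirmation.
accommodated = posterior(0.5, 0.9, 0.2)
predicted = posterior(0.5, 0.9, 0.2)
print(accommodated == predicted, round(predicted, 3))  # → True 0.818
```

    The experiment's finding is that human judges nonetheless treat the two cases differently, contrary to this timing-invariance.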

This nep-for issue is ©2014 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.