nep-for New Economics Papers
on Forecasting
Issue of 2013‒03‒16
eighteen papers chosen by
Rob J Hyndman
Monash University

  1. Martingale unobserved component models By Neil Shephard
  2. Forecasting South African Macroeconomic Data with a Nonlinear DSGE Model By Mehmet Balcilar; Rangan Gupta; Kevin Kotze
  3. Macroeconomic Forecasting Using Low-Frequency Filters By João Valle e Azevedo; Ana Pereira
  4. Unpredictability in Economic Analysis, Econometric Modeling and Forecasting By David F. Hendry; Grayham E. Mizon
  5. Will technological progress be sufficient to effectively lead the air transport to a sustainable development in the mid-term (2025)? By Chèze, Benoit; Chevallier, Julien; Gastineau, Pascal
  6. Robust Estimation and Forecasting of the Capital Asset Pricing Model By Guorui Bian; Michael McAleer; Wing-Keung Wong
  7. Measuring Uncertainty about Long-Run Prediction By Ulrich Mueller; Mark W. Watson
  8. Policy-oriented macroeconomic forecasting with hybrid DSGE and time-varying parameter VAR models By Stelios Bekiros; Alessia Paccagnini
  9. Transportation Data as a Tool for Nowcasting Economic Activity – The German Road Pricing System as an Example By Roland Döhrn
  10. A Hybrid Approach for Forecasting of Oil Prices Volatility By Komijani, Akbar; Naderi, Esmaeil; Gandali Alikhani, Nadiya
  11. Do High-Frequency Data Improve High-Dimensional Portfolio Allocations? By Nikolaus Hautsch; Lada M. Kyj; Peter Malec
  12. Mortality beliefs distorted: Magnifying the risk of dying young By Peter Jarnebrant; Kristian Ove R. Myrseth
  13. What Do Experts Know About Forecasting Journal Quality? A Comparison with ISI Research Impact in Finance By Chia-Lin Chang; Michael McAleer
  14. Biofuels and Food Prices: Searching for the Causal Link By Andrea Bastianin; Marzio Galeotti; Matteo Manera
  15. Should Central Banks publish interest rate forecasts? - A Survey By Phan, Tuan
  16. From Amazon to Apple: Modeling Online Retail Sales, Purchase Incidence and Visit Behavior By Anastasios Panagiotelis; Michael S. Smith; Peter J Danaher
  17. Non-Linear Taylor Rule through Threshold Estimation By Bhaduri, Saumitra; Sethudurai, Raja
  18. Long memory and structural breaks in modeling the return and volatility dynamics of precious metals By Mohamed El Hedi Arouri; Shawkat Hammoudeh; Amine Lahiani; Duc Khuong Nguyen

  1. By: Neil Shephard (Nuffield College and Dept of Economics, University of Oxford)
    Abstract: I discuss models which allow the local level model, which rationalised exponentially weighted moving averages, to have a time-varying signal/noise ratio. I call this a martingale component model. This makes the rate of discounting of data local. I show how to handle such models effectively using an auxiliary particle filter which deploys M Kalman filters run in parallel competing against one another. Here one thinks of M as being 1,000 or more. The model is applied to inflation forecasting. The model generalises to unobserved component models where Gaussian shocks are replaced by martingale difference sequences.
    Keywords: auxiliary particle filter; EM algorithm; EWMA; forecasting; Kalman filter; likelihood; martingale unobserved component model; particle filter; stochastic volatility.
    JEL: C01 C14 C58 D53 D81
    Date: 2013–02–10
    URL: http://d.repec.org/n?u=RePEc:nuf:econwp:1301&r=for
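The local level model that Shephard's martingale variant generalises has a well-known EWMA interpretation: with a fixed signal/noise ratio q, the Kalman gain converges to a constant that plays the role of the EWMA discount weight; the paper's contribution is to let q vary over time. A minimal numpy sketch of the fixed-q case (an illustration, not the paper's auxiliary particle filter):

```python
import numpy as np

def local_level_filter(y, q):
    """Kalman filter for the local level model
    y_t = mu_t + eps_t,  mu_t = mu_{t-1} + eta_t,
    with fixed signal/noise ratio q = var(eta)/var(eps)."""
    m, p = y[0], 1e7                  # diffuse initialisation
    means, gains = [m], [1.0]
    for yt in y[1:]:
        p = p + q                     # predict
        k = p / (p + 1.0)             # Kalman gain
        m = m + k * (yt - m)          # update: one EWMA step with weight k
        p = p * (1.0 - k)
        means.append(m)
        gains.append(k)
    return np.array(means), np.array(gains)

def steady_state_gain(q):
    """Limit of the Kalman gain: the implied EWMA discount weight
    (root of the steady-state Riccati equation)."""
    p = (q + np.sqrt(q * q + 4.0 * q)) / 2.0
    return p / (p + 1.0)
```

The martingale component model replaces the fixed q with a time-varying one, which the paper handles with M Kalman filters run in parallel inside an auxiliary particle filter.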
  2. By: Mehmet Balcilar (Department of Economics, Eastern Mediterranean University, Famagusta, North Cyprus,via Mersin 10, Turkey); Rangan Gupta (Department of Economics, University of Pretoria); Kevin Kotze (The School of Economics, Faculty of Commerce, University of Cape Town)
    Abstract: This paper considers the forecasting performance of a nonlinear dynamic stochastic general equilibrium (DSGE) model. The results are compared to a wide selection of competing models, which include a linear DSGE model and a variety of vector autoregressive (VAR) models. The parameters in the VAR models are estimated with classical and Bayesian techniques; where some of the Bayesian models are augmented with stochastic-variable-selection, time-varying parameters, endogenous structural breaks and various forms of prior-shrinkage (which includes the Minnesota prior as well). The structure of the DSGE models follows that of New-Keynesian varieties, which allow for several nominal and real rigidities. The nonlinear DSGE model makes use of the second-order solution method of Schmitt-Grohe and Uribe (2004) and a particle filter to generate values for the unobserved variables. Most of the parameters in the models are estimated using maximum likelihood techniques. The models are applied to South African macroeconomic data, with an initial in-sample period of 1960Q1 to 1999Q4. The models are then estimated recursively, by extending the in-sample period by a quarter, to generate successive forecasts over the out-of-sample period, 2000Q1 to 2011Q4. We find that the forecasting performance of the nonlinear DSGE model is almost always significantly superior to that of its linear counterpart, particularly over longer forecasting horizons. The nonlinear DSGE model also outperforms the selection of VAR models in most cases.
    Keywords: Macroeconomic Forecasting, Linear and Nonlinear New-Keynesian DSGE, Vector Autoregressions, Bayesian Methods
    JEL: E0 C5 C11 C61 C63
    Date: 2013–03
    URL: http://d.repec.org/n?u=RePEc:pre:wpaper:201313&r=for
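The particle filter used to evaluate the likelihood of the nonlinear DSGE model can be illustrated on a much smaller problem. The sketch below runs a bootstrap particle filter on a toy stochastic-volatility state space; the model and all parameters are illustrative stand-ins, not the paper's second-order DSGE solution:

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_pf(y, n=2000, phi=0.95, sigma=0.3):
    """Bootstrap particle filter for the toy stochastic-volatility model
    x_t = phi*x_{t-1} + sigma*eta_t,  y_t = exp(x_t/2)*eps_t.
    Returns filtered means of x_t and a log-likelihood estimate."""
    x = rng.normal(size=n) * sigma / np.sqrt(1.0 - phi**2)  # stationary start
    means, loglik = [], 0.0
    for yt in y:
        x = phi * x + sigma * rng.normal(size=n)            # propagate particles
        w = np.exp(-0.5 * yt**2 * np.exp(-x) - 0.5 * x)     # measurement density (up to a constant)
        loglik += np.log(w.mean() + 1e-300)
        w /= w.sum()
        means.append(w @ x)                                 # filtered mean
        x = x[rng.choice(n, n, p=w)]                        # multinomial resampling
    return np.array(means), loglik
```

In the paper the same resample-propagate-weight loop evaluates the likelihood of the nonlinear DSGE model, whose states enter the measurement density in place of the toy volatility process.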
  3. By: João Valle e Azevedo; Ana Pereira
    Abstract: We explore the use of univariate low-frequency filters in macroeconomic forecasting. This amounts to targeting only specific fluctuations of the time series of interest. We show through simulations that such an approach is warranted and, using US data, we confirm empirically that consistent gains in forecast accuracy can be obtained in comparison with a variety of other methods. There is an inherent arbitrariness in the choice of the cut-off defining low and high frequencies, but we show that some patterns characterize the implied optimal (for forecasting) degree of smoothing of the key macroeconomic indicators we analyze. For most variables the optimal choice amounts to disregarding fluctuations well below the standard business cycle cut-off of 32 quarters, with the cut-off generally increasing with the forecast horizon; for inflation and variables related to housing this cut-off lies around 32 quarters for all horizons, which is below the optimal level for federal spending.
    JEL: C14 C32 C51 C53
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:ptu:wpaper:w201301&r=for
  4. By: David F. Hendry (Department of Economics and Institute of Economic Modelling, Oxford Martin School, University of Oxford); Grayham E. Mizon (University of Southampton and Institute of Economic Modelling, Oxford Martin School, University of Oxford)
    Abstract: Unpredictability arises from intrinsic stochastic variation, unexpected instances of outliers, and unanticipated extrinsic shifts of distributions. We analyze their properties, relationships, and different effects on the three arenas in the title, which suggests considering three associated information sets. The implications of unanticipated shifts for forecasting, economic analyses of efficient markets, conditional expectations, and inter-temporal derivations are described. The potential success of general-to-specific model selection in tackling location shifts by impulse-indicator saturation is contrasted with the major difficulties confronting forecasting.
    Keywords: Unpredictability; ‘Black Swans’; Distributional shifts; Forecast failure; Model selection; Conditional expectations.
    JEL: C51 C22
    Date: 2013–02–25
    URL: http://d.repec.org/n?u=RePEc:nuf:econwp:1304&r=for
  5. By: Chèze, Benoit; Chevallier, Julien; Gastineau, Pascal
    Abstract: The aim of this article is to investigate whether anticipated technological progress can be expected to be strong enough to offset carbon dioxide (CO2) emissions resulting from the rapid growth of air transport. Aviation CO2 emissions projections are provided at the worldwide level and for eight geographical zones until 2025. Total air traffic flows are first forecast using a dynamic panel-data econometric model and then converted into corresponding quantities of air traffic CO2 emissions, through jet fuel demand forecasts, using specific hypotheses and energy factors. None of our nine scenarios appears compatible with the objective of 450 ppm CO2-eq. (a.k.a. “scenario of type I”) recommended by the Intergovernmental Panel on Climate Change (IPCC). Nor is any compatible with the IPCC scenario of type III, which aims at limiting global warming to 3.2◦C. Thus, aviation CO2 emissions are unlikely to diminish over the next decade unless there is a radical shift in technology and/or travel demand is restricted.
    Keywords: Air transport; CO2 emissions; Forecasting; Climate change;
    JEL: C53 L93 Q47 Q54
    Date: 2013–01
    URL: http://d.repec.org/n?u=RePEc:ner:dauphi:urn:hdl:123456789/9262&r=for
  6. By: Guorui Bian (East China Normal University); Michael McAleer (Erasmus University Rotterdam, Kyoto University, Complutense University of Madrid); Wing-Keung Wong (Hong Kong Baptist University)
    Abstract: In this paper, we develop a modified maximum likelihood (MML) estimator for the multiple linear regression model with underlying Student t distribution. We obtain the closed form of the estimators, derive the asymptotic properties, and demonstrate that the MML estimator is more appropriate for estimating the parameters of the Capital Asset Pricing Model by comparing its performance with least squares estimators (LSE) on the monthly returns of US portfolios. The empirical results reveal that the MML estimators are more efficient than LSE in terms of the relative efficiency of one-step-ahead forecast mean square error in small samples.
    Keywords: Maximum likelihood estimators; Modified maximum likelihood estimators; Student t family; Capital asset pricing model; Robustness
    JEL: C1 C2 G1
    Date: 2013–03–04
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20130036&r=for
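A common route to Student-t robust estimation, shown here as a stand-in for the paper's closed-form MML estimator (which it is not), is EM / iteratively reweighted least squares: observations with large residuals get down-weighted under the t likelihood. The CAPM-style data below are simulated purely for illustration:

```python
import numpy as np

def t_irls(X, y, nu=5.0, n_iter=50):
    """Student-t regression via EM / iteratively reweighted least squares
    (a standard robust alternative, not the paper's MML estimator)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]    # start from OLS
    sigma2 = np.mean((y - X @ beta) ** 2)
    for _ in range(n_iter):
        r = y - X @ beta
        w = (nu + 1.0) / (nu + r**2 / sigma2)      # E-step: latent scale weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        sigma2 = np.sum(w * (y - X @ beta) ** 2) / len(y)
    return beta

# illustrative CAPM-style data: alpha + beta * market, heavy-tailed errors
rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = 0.1 + 1.2 * x + rng.standard_t(3, size=n)
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_t = t_irls(X, y)
```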
  7. By: Ulrich Mueller; Mark W. Watson
    Abstract: Long-run forecasts of economic variables play an important role in policy, planning, and portfolio decisions. We consider long-horizon forecasts of average growth of a scalar variable, assuming that first differences are second-order stationary. The main contribution is the construction of predictive sets with asymptotic coverage over a wide range of data generating processes, allowing for stochastically trending mean growth, slow mean reversion and other types of long-run dependencies. We illustrate the method by computing predictive sets for 10 to 75 year average growth rates of U.S. real per-capita GDP, consumption, productivity, price level, stock prices and population.
    JEL: C22 C53 E17
    Date: 2013–03
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:18870&r=for
  8. By: Stelios Bekiros; Alessia Paccagnini
    Abstract: Micro-founded dynamic stochastic general equilibrium (DSGE) models appear to be particularly suited for evaluating the consequences of alternative macroeconomic policies. Recently, increasing efforts have been undertaken by policymakers to use these models for forecasting, although this proved to be problematic due to estimation and identification issues. Hybrid DSGE models have become popular for dealing with some of the model misspecifications and the trade-off between theoretical coherence and empirical fit, thus allowing them to compete in terms of predictability with VAR models. However, DSGE and VAR models are still linear and they do not consider time-variation in parameters that could account for inherent nonlinearities and capture the adaptive underlying structure of the economy in a robust manner. This study conducts a comparative evaluation of the out-of-sample predictive performance of many different specifications of DSGE models and various classes of VAR models, using datasets for the real GDP, the harmonized CPI and the nominal short-term interest rate series in the Euro area. Simple and hybrid DSGE models were implemented including DSGE-VAR and Factor Augmented DSGE, and tested against standard, Bayesian and Factor Augmented VARs. Moreover, a new state-space time-varying VAR model is presented. The total period spanned from 1970:1 to 2010:4 with an out-of-sample testing period of 2006:1-2010:4, which covers the global financial crisis and the EU debt crisis. The results of this study can be useful in conducting monetary policy analysis and macro-forecasting in the Euro area.
    Keywords: Model validation, Forecasting, Factor Augmented DSGE, Time-varying parameter VAR, DSGE-VAR, Bayesian analysis
    JEL: C11 C15 C32
    Date: 2013–02
    URL: http://d.repec.org/n?u=RePEc:mib:wpaper:236&r=for
  9. By: Roland Döhrn
    Abstract: There is broad agreement that transportation activity is closely linked to the business cycle. Nevertheless, data from the transportation sector have not been part of the tool kit of business cycle analysts due to long publication lags. With the dissemination of electronic road pricing systems, up-to-date figures on transportation activity are becoming available for an increasing number of countries. This paper analyses the performance of the German toll statistics for nowcasting industry production. It confirms that between January 2007, when the toll data were first published, and July 2012 the seasonally adjusted toll data show a closer correlation with industry production than business surveys such as the ifo business climate or the PMI. By contrast, the out-of-sample forecasting power is disappointing: though the toll-based models show somewhat smaller forecast errors than the alternative models tested, their advantage is, as a rule, not statistically significant. Given the small publication lead over industry production and the publication lag behind business sentiment indicators, one should not be over-enthusiastic about the opportunities of the toll data as a nowcasting tool, though they are surely an addition to the business cycle analysts' tool box.
    Keywords: Transportation data; nowcasting; forecasting performance
    JEL: E32 E37
    Date: 2013–01
    URL: http://d.repec.org/n?u=RePEc:rwi:repape:0395&r=for
  10. By: Komijani, Akbar; Naderi, Esmaeil; Gandali Alikhani, Nadiya
    Abstract: This study aims to introduce an ideal model for forecasting crude oil price volatility. For this purpose, the ‘predictability’ hypothesis was tested using the variance ratio test, the BDS test and chaos analysis. Structural analyses were also carried out to identify possible nonlinear patterns in this series. On this basis, Lyapunov exponents confirmed that the return series of the crude oil price is chaotic. Moreover, according to the findings, the rate of return series has the long memory property, rejecting the efficient market hypothesis and affirming the fractal markets hypothesis. The results of the GPH test verified that both the rate of return and volatility series of the crude oil price have the long memory property. Besides, according to both MSE and RMSE criteria, wavelet-decomposed data improve the performance of the model significantly. Therefore, a hybrid model based on the long memory property and using wavelet-decomposed data was introduced as the preferred model.
    Keywords: Forecasting, Oil Price, Chaos, Wavelet Decomposition, Long Memory
    JEL: C53 C58 G17
    Date: 2013–01–04
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:44654&r=for
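The GPH test the authors apply for long memory can be sketched compactly: regress the log periodogram at the lowest Fourier frequencies on -log(4 sin^2(lambda/2)); the slope estimates the long-memory parameter d. A numpy sketch (the bandwidth m = sqrt(n) is a common default, not necessarily the paper's choice):

```python
import numpy as np

def gph(y, m=None):
    """Geweke-Porter-Hudak log-periodogram estimate of the long-memory
    parameter d (d = 0: short memory; 0 < d < 0.5: stationary long memory)."""
    y = np.asarray(y, float)
    n = len(y)
    m = m or int(np.sqrt(n))                        # bandwidth: a common default
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n     # lowest Fourier frequencies
    f = np.fft.fft(y - y.mean())
    I = (np.abs(f[1:m + 1]) ** 2) / (2.0 * np.pi * n)   # periodogram ordinates
    x = -np.log(4.0 * np.sin(lam / 2.0) ** 2)
    X = np.column_stack([np.ones(m), x])
    return np.linalg.lstsq(X, np.log(I), rcond=None)[0][1]   # slope = d_hat
```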
  11. By: Nikolaus Hautsch; Lada M. Kyj; Peter Malec
    Abstract: This paper addresses the open debate about the usefulness of high-frequency (HF) data in large-scale portfolio allocation. We consider the problem of constructing global minimum variance portfolios based on the constituents of the S&P 500 over a four-year period covering the 2008 financial crisis. HF-based covariance matrix predictions are obtained by applying a blocked realized kernel estimator, different smoothing windows, various regularization methods and two forecasting models. We show that HF-based predictions yield a significantly lower portfolio volatility than methods employing daily returns. Particularly during the volatile crisis period, these performance gains hold over longer horizons than previous studies have shown and translate into substantial utility gains from the perspective of an investor with pronounced risk aversion.
    Keywords: portfolio optimization; spectral decomposition; regularization; blocked realized kernel; covariance prediction
    JEL: G11 G17 C58 C14 C38
    Date: 2013–03
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2013-014&r=for
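The portfolio problem in the paper has a closed form: given a covariance forecast S, the global minimum variance weights are w = S^{-1} 1 / (1' S^{-1} 1). The sketch below adds a simple ridge-style shrinkage toward the identity as a crude stand-in for the regularisation methods the authors compare (their blocked realized kernel and related estimators are far more elaborate):

```python
import numpy as np

def gmv_weights(cov, shrink=0.0):
    """Global minimum variance weights w = S^{-1} 1 / (1' S^{-1} 1),
    with optional shrinkage of S toward a scaled identity (a simple
    regularisation stand-in, not the paper's methods)."""
    p = cov.shape[0]
    S = (1.0 - shrink) * cov + shrink * (np.trace(cov) / p) * np.eye(p)
    w = np.linalg.solve(S, np.ones(p))
    return w / w.sum()
```

By construction the weights sum to one and, absent shrinkage, attain the lowest variance of any fully invested portfolio under the given covariance forecast.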
  12. By: Peter Jarnebrant (ESMT European School of Management and Technology); Kristian Ove R. Myrseth (ESMT European School of Management and Technology)
    Abstract: We explore mortality beliefs by eliciting individual-level belief distributions for participants’ remaining lifespan. Across two independent samples, from Germany and the USA, we find that individuals—while accurately forecasting their life expectancy—substantially overestimate the likelihood of dying young (<50 years) and overestimate the likelihood of reaching very old age (>100 years). In other words, the modes of the belief distributions are relatively accurate, but the tails of the belief distributions are significantly ‘fatter’ than the corresponding tails of distributions obtained from demographic data. Our results are robust to variations in belief elicitation techniques, and to assumptions underlying normative longevity forecasts. The results have implications for a range of questions of economic behavior—including intertemporal choice, consumption smoothing, saving, and risk management.
    Keywords: mortality, beliefs, risk perception, judgment
    Date: 2013–02–28
    URL: http://d.repec.org/n?u=RePEc:esm:wpaper:esmt-13-03&r=for
  13. By: Chia-Lin Chang (Department of Applied Economics Department of Finance National Chung Hsing University, Taiwan); Michael McAleer (Econometric Institute Erasmus School of Economics Erasmus University Rotterdam and Tinbergen Institute The Netherlands and Department of Quantitative Economics Complutense University of Madrid and Institute of Economic Research Kyoto University)
    Abstract: Experts possess knowledge and information that are not publicly available. The paper is concerned with forecasting academic journal quality and research impact using a survey of international experts from a national project on ranking academic finance journals in Taiwan. A comparison is made with publicly available bibliometric data, namely the Thomson Reuters ISI Web of Science citations database (hereafter ISI) for the Business - Finance (hereafter Finance) category. The paper analyses the leading international journals in Finance using expert scores and quantifiable Research Assessment Measures (RAMs), and highlights the similarities and differences in the expert scores and alternative RAMs, where the RAMs are based on alternative transformations of citations taken from the ISI database. Alternative RAMs may be calculated annually or updated daily to answer the perennial questions as to When, Where and How (frequently) published papers are cited (see Chang et al. (2011a, b, c)). The RAMs include the most widely used RAM, namely the classic 2-year impact factor including journal self citations (2YIF), 2-year impact factor excluding journal self citations (2YIF*), 5-year impact factor including journal self citations (5YIF), Immediacy (or zero-year impact factor (0YIF)), Eigenfactor, Article Influence, C3PO (Citation Performance Per Paper Online), h-index, PI-BETA (Papers Ignored - By Even The Authors), 2-year Self-citation Threshold Approval Ratings (2Y-STAR), Historical Self-citation Threshold Approval Ratings (H-STAR), Impact Factor Inflation (IFI), and Cited Article Influence (CAI). As data are not available for 5YIF, Article Influence and CAI for 13 of the leading 34 journals considered, 10 RAMs are analysed for 21 highly-cited journals in Finance. The harmonic mean of the ranks of the 10 RAMs for the 34 highly-cited journals is also presented. It is shown that emphasizing the 2-year impact factor of a journal, which partly answers the question as to When published papers are cited, to the exclusion of other informative RAMs, which answer Where and How (frequently) published papers are cited, can lead to a distorted evaluation of journal impact and influence relative to the Harmonic Mean rankings. A linear regression model is used to forecast expert scores on the basis of RAMs that capture journal impact, journal policy, the number of high quality papers, and quantitative information about a journal. The robustness of the rankings is also analysed.
    Keywords: Expert scores, Journal quality, RAMs, Impact factor, IFI, C3PO, PI-BETA, STAR, Eigenfactor, Article Influence, h-index, harmonic mean, robustness.
    JEL: C18 C81 C83
    Date: 2013–03
    URL: http://d.repec.org/n?u=RePEc:kyo:wpaper:851&r=for
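The harmonic mean of ranks used in the journal rankings fits in two lines; because low (good) ranks dominate the harmonic mean, it rewards journals that score very well on at least some of the RAMs:

```python
def harmonic_mean_rank(ranks):
    """Harmonic mean of a journal's ranks across the RAMs: low ranks
    dominate, so excelling on a few criteria lifts the overall rank."""
    return len(ranks) / sum(1.0 / r for r in ranks)
```

For instance, a journal ranked 1st on one RAM and 4th on another gets a harmonic mean rank of 1.6, better than the arithmetic mean of 2.5.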
  14. By: Andrea Bastianin; Marzio Galeotti; Matteo Manera
    Abstract: We analyze the relationship between the prices of ethanol, agricultural commodities and livestock in Nebraska, the second largest ethanol producer in the U.S. The paper focuses on long-run relations and Granger causality linkages between ethanol and the other commodities. The analysis takes possible structural breaks into account and uses a set of techniques that allow us to draw inferences about the existence of long-run relations and of short-run in-sample Granger causality and out-of-sample predictive ability. Even after taking breaks into account, evidence that the price of ethanol drives the price dynamics of the other commodities is extremely weak. On the basis of a formal, comprehensive and rigorous causality analysis, we conclude that there is no evidence in favour of the food-versus-fuel hypothesis.
    Keywords: Ethanol, Field Crops, Granger Causality, Forecasting, Structural Breaks
    JEL: C22 C53 Q13 Q42 Q47
    Date: 2013–03
    URL: http://d.repec.org/n?u=RePEc:mib:wpaper:239&r=for
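The in-sample Granger causality step can be sketched as a standard F-test of whether lags of the candidate cause (e.g. the ethanol price) improve on an autoregression of the target price. This textbook version deliberately omits the structural breaks and out-of-sample tests that are central to the paper:

```python
import numpy as np

def granger_f(y, x, p=2):
    """F statistic for the null that p lags of x add no explanatory
    power to an AR(p) of y (textbook in-sample Granger causality)."""
    n = len(y)
    Y = y[p:]
    lags_y = np.column_stack([y[p - i:n - i] for i in range(1, p + 1)])
    lags_x = np.column_stack([x[p - i:n - i] for i in range(1, p + 1)])
    ones = np.ones((n - p, 1))
    Xr = np.hstack([ones, lags_y])             # restricted: own lags only
    Xu = np.hstack([ones, lags_y, lags_x])     # unrestricted: plus x lags
    ssr = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    s_r, s_u = ssr(Xr), ssr(Xu)
    df = n - p - Xu.shape[1]
    return ((s_r - s_u) / p) / (s_u / df)
```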
  15. By: Phan, Tuan
    Abstract: As a particular form of transparency, some central banks nowadays publish their interest rate forecasts while many others refuse to do so. Whether such publication is good or bad for economic performance and social welfare is a hotly debated subject. This paper provides a review of the literature in both its theoretical and empirical aspects. We also establish a criteria table which could be used as a preliminary guideline for central banks in deciding whether they should reveal their forecasts, and how to publish their policy rate inclinations. The suggested conclusion is that interest rate projections should be considered one of the last items a central bank reveals, and that central banks should be very careful in publishing their policy rate forecasts.
    Keywords: Central bank, transparency, interest rate forecasts
    JEL: E58
    Date: 2013–03–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:44676&r=for
  16. By: Anastasios Panagiotelis; Michael S. Smith; Peter J Danaher
    Abstract: In this study we construct a multivariate stochastic model for website visit duration, page views, purchase incidence and the sale amount for online retailers. The model is constructed by composition from parametric distributions that account for consumer heterogeneity, and involves copula components. Our model is readily estimated using full maximum likelihood, allows for the strong nonlinear relationships between the sales and visit variables to be explored in detail, and can be used to construct sales predictions. We examine a number of top-ranked U.S. online retailers, and find that the visit duration and the number of pages viewed are both related to sales, but in very different ways for different products. Using Bayesian methodology we show how the model can be extended to account for latent household segments, further accounting for consumer heterogeneity. The model can also be adjusted to accommodate a more accurate analysis of online retailers like apple.com that sell products at a very limited number of price points. In a validation study across a range of different websites, we find that the purchase incidence and sales amount are both forecast more accurately using our stochastic model, when compared to regression, probit regression and a popular data-mining method.
    Keywords: Online purchasing, panel data, copulas, marketing models
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2013-5&r=for
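A copula composition of the kind the authors build can be illustrated with the simplest case: a Gaussian copula tying an exponential visit-duration marginal to a lognormal sales marginal. All marginals and parameter values below are illustrative, not the paper's model:

```python
import math
import numpy as np

def simulate_visit_sales(n, rho=0.6, seed=0):
    """Toy Gaussian-copula model of online visits: visit duration has an
    Exponential(mean 5) marginal, sale amount a LogNormal(3, 1) marginal,
    and dependence enters only through the copula correlation rho.
    All choices are illustrative stand-ins for the paper's richer model."""
    rng = np.random.default_rng(seed)
    z1 = rng.normal(size=n)
    z2 = rho * z1 + math.sqrt(1.0 - rho**2) * rng.normal(size=n)  # correlated normal
    u = 0.5 * (1.0 + np.vectorize(math.erf)(z1 / math.sqrt(2.0)))  # Phi(z1)
    u = np.clip(u, 1e-12, 1.0 - 1e-12)
    duration = -5.0 * np.log1p(-u)     # exponential inverse CDF, mean 5
    sales = np.exp(3.0 + z2)           # lognormal marginal, directly from z2
    return duration, sales
```

The two marginals can be changed freely without touching the dependence structure, which is the practical appeal of the copula construction.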
  17. By: Bhaduri, Saumitra; Sethudurai, Raja
    Abstract: This paper identifies non-linearity in the estimation of a Taylor-type reaction function for the Reserve Bank of India using the threshold estimation technique of Hansen (2000). Using monthly data from March 2001 to October 2009, with the repo rate as the policy rate, the estimation identifies two significant thresholds with inflation and one threshold with the output gap as the threshold variables. We compare this model with a naïve univariate model and a typical Taylor-type reaction function; the results support the non-linear model, which predicts the repo rate at turning points more accurately than the two competing models.
    Keywords: Policy reaction function, threshold estimation, Taylor rule
    JEL: E5 E52 E58
    Date: 2013–03–08
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:44844&r=for
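Hansen (2000)-style threshold estimation amounts to a grid search: for each candidate value of the threshold variable (inflation or the output gap here), fit separate regressions on the two implied regimes and keep the split that minimises the pooled sum of squared residuals. A sketch on simulated data (the trimming and grid are illustrative defaults):

```python
import numpy as np

def threshold_ols(y, X, q):
    """Grid-search threshold regression in the spirit of Hansen (2000):
    choose the threshold gamma minimising the pooled SSR of separate
    OLS fits on {q <= gamma} and {q > gamma}, over a trimmed grid."""
    best_ssr, best_g = np.inf, None
    for g in np.quantile(q, np.linspace(0.15, 0.85, 71)):  # 15% trimming
        ssr = 0.0
        for mask in (q <= g, q > g):
            b = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
            ssr += np.sum((y[mask] - X[mask] @ b) ** 2)
        if ssr < best_ssr:
            best_ssr, best_g = ssr, g
    return best_g
```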
  18. By: Mohamed El Hedi Arouri (EconomiX - CNRS : UMR7166 - Université Paris X - Paris Ouest Nanterre La Défense); Shawkat Hammoudeh (CERAG - Centre d'études et de recherches appliquées à la gestion - CNRS : UMR5820 - Université Pierre Mendès-France - Grenoble II); Amine Lahiani (LEO - Laboratoire d'économie d'Orleans - CNRS : UMR6221 - Université d'Orléans); Duc Khuong Nguyen (CERAG - Centre d'études et de recherches appliquées à la gestion - CNRS : UMR5820 - Université Pierre Mendès-France - Grenoble II)
    Abstract: We investigate the potential of structural changes and long memory (LM) properties in returns and volatility of the four major precious metal commodities traded on the COMEX markets (gold, silver, platinum and palladium). Broadly speaking, a random variable is said to exhibit long memory behavior if its autocorrelation function is not integrable, while structural changes can induce sudden and significant shifts in the time-series behavior of that variable. The results from implementing several parametric and semiparametric methods indicate strong evidence of long range dependence in the daily conditional return and volatility processes for the precious metals. Moreover, for most of the precious metals considered, this dual long memory is found to be adequately captured by an ARFIMA-FIGARCH model, which also provides better out-of-sample forecast accuracy than several popular volatility models. Finally, evidence shows that conditional volatility of precious metals is better explained by long memory than by structural breaks.
    Date: 2013–03–07
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-00798033&r=for
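The fractional difference operator (1-L)^d underlying ARFIMA and FIGARCH has a simple binomial expansion, pi_0 = 1 and pi_k = pi_{k-1}(k-1-d)/k, so long memory can be filtered out (or imposed) with one causal convolution:

```python
import numpy as np

def frac_diff(y, d):
    """Apply the fractional difference filter (1-L)^d, the building block
    of ARFIMA / FIGARCH long-memory models, via its binomial expansion."""
    n = len(y)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1.0 - d) / k   # binomial weights of (1-L)^d
    return np.convolve(y, w)[:n]              # causal convolution, truncated
```

Setting d = 1 recovers ordinary first differencing, and applying d1 then d2 is the same as applying d1 + d2, which is the sense in which 0 < d < 1 interpolates between levels and differences.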

This nep-for issue is ©2013 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.