nep-for New Economics Papers
on Forecasting
Issue of 2011‒11‒07
eighteen papers chosen by
Rob J Hyndman
Monash University

  1. A Markov-switching Multifractal Approach to Forecasting Realized Volatility By Thomas Lux; Leonardo Morales-Arias; Cristina Sattarhoff
  2. Direct vs bottom-up approach when forecasting GDP: reconciling literature results with institutional practice By Paulo Soares Esteves
  3. A medium scale forecasting model for monetary policy By Kenneth Beauchemin; Saeed Zaman
  4. Forecasting and tracking real-time data revisions in inflation persistence By Tierney, Heather L.R.
  5. Optimal Forecasts in the Presence of Structural Breaks By Pesaran, M.H.; Pick, A.; Pranovich, M.
  6. Do experts incorporate statistical model forecasts and should they? By Legerstee, R.; Franses, Ph.H.B.F.; Paap, R.
  7. Do experts' SKU forecasts improve after feedback? By Legerstee, R.; Franses, Ph.H.B.F.
  8. Efficient Interest Rate Curve Estimation and Forecasting in Brazil By Joao Frois Caldeira; Guilherme Valle Moura; Marcelo Savino Portugal
  9. Modeling and Forecasting Interval Time Series with Threshold Models: An Application to S&P500 Index Returns By Paulo M.M. Rodrigues; Nazarii Salish
  10. Forecasting inflation with consumer survey data – application of multi-group confirmatory factor analysis to elimination of the general sentiment factor By Piotr Białowolski
  11. Modeling Repeat Purchases in the Internet when RFM Captures Past Influence of Marketing By Reimer, Kerstin; Albers, Sönke
  12. Can Oil Prices Forecast Exchange Rates? By Ferraro, Domenico; Rogoff, Kenneth; Rossi, Barbara
  13. Comparing forecast performances among volatility estimation methods in the pricing of European-type currency options of USD-TL and Euro-TL By Gozgor, Giray; Nokay, Pinar
  14. Bootstrap forecast of multivariate VAR models without using the backward representation By Lorenzo Pascual; Esther Ruiz; Diego Fresoli
  15. Voluntary financial disclosure, the introduction of IFRS and long-term communication policy: An empirical test on French firms By Hubert De La Bruslerie; Heger Gabteni
  16. Cost overruns in Swedish transport projects By Lundberg, Mattias; Jenpanitsub, Anchalee
  17. Estimating the impact of the volatility of shocks: a structural VAR approach By Mumtaz, Haroon
  18. A Markov Chain Model for Analysing the Progression of Patient’s Health States By Jonsson, Robert

  1. By: Thomas Lux; Leonardo Morales-Arias; Cristina Sattarhoff
    Abstract: The volatility specification of the Markov-switching Multifractal (MSM) model is proposed as an alternative mechanism for realized volatility (RV). We estimate the RV-MSM model via the Generalized Method of Moments and perform forecasting by means of best linear forecasts derived via the Levinson-Durbin algorithm. The out-of-sample performance of the RV-MSM is compared against other popular time series specifications usually employed to model the dynamics of RV, as well as other standard volatility models of asset returns. An intra-day data set for five major international stock market indices is used to evaluate the various models out-of-sample. We find that the RV-MSM seems to improve upon forecasts of its baseline MSM counterparts and many other volatility models in terms of mean squared errors (MSE). While the more conventional RV-ARFIMA model comes out as the most successful model (in terms of the number of cases in which it has the best forecasts across all combinations of forecast horizons and criteria), the new RV-MSM model often comes very close in performance and in a non-negligible number of cases even dominates the RV-ARFIMA model.
    Keywords: Realized volatility, multiplicative volatility models, long memory, international volatility forecasting
    JEL: C20 G12
    Date: 2011–10
    URL: http://d.repec.org/n?u=RePEc:kie:kieliw:1737&r=for
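    Sketch (Python): a minimal illustration of the best-linear-forecast step described above, applying the Levinson-Durbin recursion to sample autocovariances of a toy log-RV series rather than to the autocovariances implied by an estimated RV-MSM model; the lag order p = 20 and the simulated data are illustrative assumptions, not the authors' code.
      import numpy as np

      def sample_autocov(x, max_lag):
          """Biased sample autocovariances gamma(0), ..., gamma(max_lag)."""
          x = np.asarray(x, dtype=float)
          n, xc = len(x), x - x.mean()
          return np.array([np.dot(xc[:n - k], xc[k:]) / n for k in range(max_lag + 1)])

      def levinson_durbin(gamma):
          """Solve the Yule-Walker equations by the Levinson-Durbin recursion;
          returns the coefficients of the best linear predictor of order p."""
          p = len(gamma) - 1
          phi, v = np.zeros(p), gamma[0]
          for k in range(1, p + 1):
              kappa = (gamma[k] - np.dot(phi[:k - 1], gamma[1:k][::-1])) / v
              phi[:k - 1] = phi[:k - 1] - kappa * phi[:k - 1][::-1]
              phi[k - 1] = kappa
              v *= 1.0 - kappa ** 2
          return phi

      def best_linear_forecast(x, p=20):
          """One-step-ahead best linear forecast of a (realized-volatility) series."""
          phi = levinson_durbin(sample_autocov(x, p))
          xm = np.mean(x)
          lags = np.asarray(x)[-p:][::-1] - xm      # x_T, x_{T-1}, ..., x_{T-p+1}, demeaned
          return xm + np.dot(phi, lags)

      # toy persistent series standing in for daily log realized volatility
      rng = np.random.default_rng(0)
      log_rv = np.zeros(1000)
      for t in range(1, 1000):
          log_rv[t] = 0.95 * log_rv[t - 1] + rng.normal(0, 0.1)
      print(best_linear_forecast(log_rv, p=20))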
  2. By: Paulo Soares Esteves
    Abstract: How should we forecast GDP? Should we forecast overall GDP directly, or aggregate the forecasts for each of its components using some level of disaggregation? The search for the answer continues to motivate several horse races between these two approaches. Nevertheless, independently of the results, institutions producing short-term forecasts usually opt for a bottom-up approach. This paper uses an application for the euro area to show that the choice between the direct and bottom-up approaches, as well as the level of disaggregation chosen by forecasters, is not determined by the results of those races.
    JEL: C32 C53 E27
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:ptu:wpaper:w201129&r=for
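    Sketch (Python): the direct versus bottom-up distinction in miniature, using a simple AR(1) forecaster and made-up component series as stand-ins for the paper's euro-area GDP components.
      import numpy as np

      def ar1_forecast(y):
          """One-step forecast from an AR(1) fitted by least squares on the demeaned series."""
          y = np.asarray(y, dtype=float)
          yc = y - y.mean()
          phi = np.dot(yc[:-1], yc[1:]) / np.dot(yc[:-1], yc[:-1])
          return y.mean() + phi * (y[-1] - y.mean())

      # toy components, e.g. consumption, investment and net exports (illustrative data only)
      rng = np.random.default_rng(1)
      components = rng.normal(0.4, 0.3, size=(3, 60)).cumsum(axis=1)
      gdp = components.sum(axis=0)

      direct = ar1_forecast(gdp)                             # forecast aggregate GDP directly
      bottom_up = sum(ar1_forecast(c) for c in components)   # aggregate the component forecasts
      print(direct, bottom_up)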
  3. By: Kenneth Beauchemin; Saeed Zaman
    Abstract: This paper presents a 16-variable Bayesian VAR forecasting model of the U.S. economy for use in a monetary policy setting. The variables that comprise the model are selected not only for their effectiveness in forecasting the primary variables of interest, but also for their relevance to the monetary policy process. In particular, the variables largely coincide with those of an augmented New-Keynesian DSGE model. We provide out-of-sample forecast evaluations and illustrate the computation and use of predictive densities and fan charts. Although the reduced-form model is the focus of the paper, we also provide an example of structural analysis to illustrate the macroeconomic response to a monetary policy shock.
    Keywords: Forecasting; Monetary policy
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:fip:fedcwp:1128&r=for
  4. By: Tierney, Heather L.R.
    Abstract: This paper presents three local nonparametric forecasting methods that are able to utilize the isolated periods of revised real-time PCE and core PCE for 62 vintages within a historic framework with respect to the nonparametric exclusion-from-core inflation persistence model. The flexibility provided by the kernel and window width permits the incorporation of the forecasted value into the appropriate time frame. For instance, a low inflation measure can be grouped with other low-inflation time periods in order to form better forecasts by combining values that are similar in terms of metric distance rather than chronological time. The most efficient nonparametric forecasting method is the third model, which uses the flexibility of nonparametrics to its utmost by making forecasts conditional on the forecasted value.
    Keywords: Inflation Persistence; Real-Time Data; Monetary Policy; Nonparametrics; Forecasting
    JEL: C53 C14 E52
    Date: 2011–11–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:34439&r=for
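    Sketch (Python): a toy kernel-weighted forecast in the spirit of the nonparametric approach above, weighting historical transitions by metric distance from the current inflation level rather than by chronological proximity; the Gaussian kernel, the bandwidth and the simulated series are illustrative assumptions, not the paper's three estimators.
      import numpy as np

      def kernel_forecast(x, bandwidth):
          """One-step forecast that weights historical (x_t, x_{t+1}) transitions by how
          close their starting level x_t is to the current level (Gaussian kernel)."""
          x = np.asarray(x, dtype=float)
          levels, nxt = x[:-1], x[1:]
          w = np.exp(-0.5 * ((levels - x[-1]) / bandwidth) ** 2)
          return np.sum(w * nxt) / np.sum(w)

      # toy monthly inflation-like series (illustrative, not real-time PCE data)
      rng = np.random.default_rng(3)
      infl = 2.0 + np.sin(np.linspace(0, 12, 240)) + rng.normal(0, 0.2, 240)
      print(kernel_forecast(infl, bandwidth=0.3))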
  5. By: Pesaran, M.H.; Pick, A.; Pranovich, M.
    Abstract: This paper considers the problem of forecasting under continuous and discrete structural breaks and proposes weighting observations to obtain optimal forecasts in the MSFE sense. We derive optimal weights for continuous and discrete break processes. Under continuous breaks, our approach recovers exponential smoothing weights. Under discrete breaks, we provide analytical expressions for the weights in models with a single regressor and, asymptotically, for larger models. It is shown that in these cases the value of the optimal weight is the same across observations within a given regime and differs only across regimes. In practice, where information on structural breaks is uncertain, a forecasting procedure based on robust weights is proposed. Monte Carlo experiments and an empirical application to the predictive power of the yield curve analyze the performance of our approach relative to other forecasting methods.
    JEL: C22 C53
    Date: 2011–10–31
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:1163&r=for
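    Sketch (Python): the special case noted above in which the MSFE-optimal weights reduce to exponential smoothing weights; the discount alpha = 0.9 and the simulated level shift are illustrative assumptions.
      import numpy as np

      def exp_weighted_forecast(y, alpha=0.9):
          """Forecast as a weighted average of past observations, with weights that
          decay exponentially as observations get further from the end of the sample."""
          y = np.asarray(y, dtype=float)
          w = alpha ** np.arange(len(y))[::-1]    # heaviest weight on the most recent observation
          return np.dot(w / w.sum(), y)

      def exp_smoothing(y, alpha=0.9):
          """Equivalent recursive form (up to the treatment of the first observation)."""
          s = y[0]
          for obs in y[1:]:
              s = (1 - alpha) * obs + alpha * s
          return s

      rng = np.random.default_rng(4)
      y = np.concatenate([rng.normal(0, 1, 100), rng.normal(1.5, 1, 40)])   # series with a break
      print(exp_weighted_forecast(y), exp_smoothing(y))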
  6. By: Legerstee, R.; Franses, Ph.H.B.F.; Paap, R.
    Abstract: Experts can rely on statistical model forecasts when creating their own forecasts. Usually it is not known what experts actually do. In this paper we focus on three questions, which we try to answer given the availability of expert forecasts and model forecasts. First, is the expert forecast related to the model forecast and how? Second, how is this potential relation influenced by other factors? Third, how does this relation influence forecast accuracy? We propose a new and innovative two-level Hierarchical Bayes model to answer these questions. We apply our proposed methodology to a large data set of forecasts and realizations of SKU-level sales data from a pharmaceutical company. We find that expert forecasts can depend on model forecasts in a variety of ways. Average sales levels, sales volatility, and the forecast horizon influence this dependence. We also demonstrate that theoretical implications of expert behavior on forecast accuracy are reflected in the empirical data.
    Keywords: endogeneity; Bayesian analysis; expert forecasts; model forecasts; forecast adjustment
    Date: 2011–09–30
    URL: http://d.repec.org/n?u=RePEc:dgr:eureir:1765026660&r=for
  7. By: Legerstee, R.; Franses, Ph.H.B.F.
    Abstract: We analyze the behavior of experts who quote forecasts for monthly SKU-level sales data, where we compare data before and after the moment that experts received different kinds of feedback on their behavior. We have data for 21 experts located in as many countries who make SKU-level forecasts for a variety of pharmaceutical products for October 2006 to September 2007. We study the behavior of the experts by comparing their forecasts with those from an automated statistical program, and we report the forecast accuracy over these 12 months. In September 2007 these experts were given feedback on their behavior and they received training at the headquarters' office, where specific attention was given to the ins and outs of the statistical program. Next, we study the behavior of the experts for the 3 months after the training session, that is, October 2007 to December 2007. Our main conclusion is that in the second period the experts' forecasts deviated less from the statistical forecasts and that their accuracy improved substantially.
    Keywords: expert forecasts; model forecasts; cognitive process feedback; judgmental adjustment; outcome feedback; performance feedback; task properties feedback
    Date: 2011–09–22
    URL: http://d.repec.org/n?u=RePEc:dgr:eureir:1765026656&r=for
  8. By: Joao Frois Caldeira; Guilherme Valle Moura; Marcelo Savino Portugal
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:anp:en2009:133&r=for
  9. By: Paulo M.M. Rodrigues; Nazarii Salish
    Abstract: Over recent years several methods to deal with high-frequency data (economic, financial and other) have been proposed in the literature. An interesting example is interval-valued time series described by the temporal evolution of the high and low prices of an asset. In this paper a new class of threshold models capable of capturing asymmetric effects in interval-valued data is introduced, and new forecast loss functions and descriptive statistics of forecast quality are proposed. Least squares estimates of the threshold parameter and the regression slopes are obtained, and forecasts based on the proposed threshold model are computed. A new forecast procedure based on the combination of this model with the k nearest neighbors method is introduced. To illustrate this approach, we report an application to a weekly sample of S&P500 index returns. The results obtained are encouraging and compare very favorably to available procedures.
    JEL: C12 C22 C52 C53
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:ptu:wpaper:w201128&r=for
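    Sketch (Python): a toy k-nearest-neighbours forecast of an interval series, one ingredient of the combination proposed above; representing each interval by its centre and half-range, the Euclidean distance and k = 5 are illustrative choices, not necessarily the authors'.
      import numpy as np

      def knn_interval_forecast(low, high, k=5):
          """One-step forecast of an interval series: average the intervals that followed
          the k past intervals most similar to the current one."""
          z = np.column_stack([(low + high) / 2.0, (high - low) / 2.0])   # (centre, half-range)
          current, hist, nxt = z[-1], z[:-1], z[1:]
          idx = np.argsort(np.linalg.norm(hist - current, axis=1))[:k]
          c, r = nxt[idx].mean(axis=0)
          return c - r, c + r                      # forecast (low, high)

      # toy weekly high/low series (illustrative numbers, not S&P500 returns)
      rng = np.random.default_rng(5)
      centre = rng.normal(0, 1, 300).cumsum() * 0.01
      spread = np.abs(rng.normal(0.5, 0.1, 300))
      print(knn_interval_forecast(centre - spread, centre + spread, k=5))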
  10. By: Piotr Białowolski (Warsaw School of Economics)
    Abstract: This paper (1) examines the properties of survey-based households' inflation expectations and investigates their forecasting performance. Using individual data from the State of the Households' Survey (50 quarters between 1997Q4 and 2010Q1), it was shown that inflation expectations were affected by consumer sentiment. Multi-Group Confirmatory Factor Analysis (MGCFA) was employed to verify whether a set of proxies provides a reliable basis for measurement of two latent phenomena – consumer sentiment and inflation expectations. Following the steps proposed by Davidov (2008) and Steenkamp and Baumgartner (1998), it appeared possible to specify and estimate a MGCFA model with partial measurement invariance. It was thus possible to eliminate the influence of consumer sentiment on inflation expectations and, at the same time, to obtain individually corrected answers concerning inflation expectations. Additionally, it was shown that the linear relation between consumer sentiment and inflation expectations was stable over time. As a by-product of the analysis, it was possible to show that during the financial crisis respondents were much less consistent in their answers to the questions of the consumer questionnaire. In the next step of the analysis, data on inflation expectations were applied to modelling and forecasting inflation. It was shown that, relative to standard ARIMA processes, inclusion of the information on inflation expectations significantly improved the in-sample and out-of-sample forecasting performance of the time-series models. Out-of-sample performance in particular was significantly better, as the average absolute error in forecasts of headline and core inflation was reduced by half. It was also shown that models with inflation expectations based on the CFA method (after elimination of the consumer sentiment factor) provided better in-sample forecasts of inflation. Nevertheless, this was not confirmed for the out-of-sample forecasts. (1) Project financed by the National Bank of Poland. Polish title of the project: "Prognozowanie inflacji na podstawie danych koniunktury gospodarstw domowych. Zastosowanie konfirmacyjnej analizy czynnikowej dla wielu grup do oczyszczenia prognoz inflacji z czynnika ogólnego nastroju gospodarczego."
    Keywords: Inflation expectations, Inflation forecasts, Confirmatory Factor Analysis
    JEL: C32 E31 E37
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:nbp:nbpmis:100&r=for
  11. By: Reimer, Kerstin; Albers, Sönke
    Abstract: Predicting online customer repeat purchase behavior by accounting for the marketing mix plays an important role in a variety of empirical studies on individual customer relationship management. A number of sophisticated models have been developed for different forecasting purposes based on a – mostly linear – combination of purchase history, the so-called Recency-Frequency-Monetary Value (RFM) variables, and marketing variables. However, these studies focus on high predictive validity rather than ensuring that their proposed models capture the original effects of marketing activities. Thus, they ignore an explicit relationship between the purchase history and marketing, which leads to biased estimates when these variables are correlated. This study develops a modeling framework for the prediction of repeat purchases that adequately combines purchase history data and marketing-mix information in order to determine the original impact of marketing. More specifically, we postulate that RFM already captures the effects of past marketing activities and that the original marketing impact is represented by temporal changes from the purchase process. Our analysis highlights and confirms the importance of adequately modeling the relationship between RFM and marketing. In addition, the results show the superiority of the proposed model compared to a model with a linear combination of RFM and marketing variables.
    Keywords: Repeat Purchase Forecasting Models, Marketing Actions, Generalized Bass Model, Media Downloads
    JEL: M3 C4
    Date: 2011–10–25
    URL: http://d.repec.org/n?u=RePEc:zbw:esprep:50730&r=for
  12. By: Ferraro, Domenico; Rogoff, Kenneth; Rossi, Barbara
    Abstract: This paper investigates whether oil prices have a reliable and stable out-of-sample relationship with the Canadian/U.S. dollar nominal exchange rate. Despite state-of-the-art methodologies, we find little systematic relation between oil prices and the exchange rate at the monthly and quarterly frequencies. In contrast, the main contribution is to show the existence of a very short-term relationship at the daily frequency, which is rather robust and holds regardless of whether we use contemporaneous (realized) or lagged oil prices in our regression. However, in the latter case the predictive ability is ephemeral, mostly appearing after instabilities have been appropriately taken into account.
    Keywords: exchange rates
    JEL: C22 C53 F31 F37
    Date: 2011–11
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:8635&r=for
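    Sketch (Python): a rolling one-step out-of-sample regression of daily exchange-rate returns on lagged oil-price returns, compared with a no-change benchmark by mean squared error; the window length, the simulated series and the lagged link built into them are illustrative assumptions.
      import numpy as np

      def rolling_oos_mse(y, x, window=500):
          """Rolling one-step forecasts of y_t from lagged x_{t-1} versus a zero (no-change) forecast."""
          err_model, err_rw = [], []
          for t in range(window + 1, len(y)):
              yw, xw = y[t - window:t], x[t - window - 1:t - 1]   # pair y_s with x_{s-1}
              beta = np.polyfit(xw, yw, 1)                        # slope and intercept
              err_model.append((y[t] - np.polyval(beta, x[t - 1])) ** 2)
              err_rw.append(y[t] ** 2)                            # random walk predicts a zero return
          return np.mean(err_model), np.mean(err_rw)

      # toy daily log-returns standing in for oil and the exchange rate (illustrative only)
      rng = np.random.default_rng(6)
      oil = rng.normal(0, 1.5, 3000)
      fx = 0.05 * np.roll(oil, 1) + rng.normal(0, 0.5, 3000)      # weak lagged link by construction
      print(rolling_oos_mse(fx[1:], oil[1:], window=500))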
  13. By: Gozgor, Giray; Nokay, Pinar
    Abstract: By using the daily values of USD-TL and Euro-TL denominated European call and put option contracts, which are traded in the over-the-counter market, this study investigates whether there is a significant difference among the premiums of the contracts forecasted by historical volatility, EWMA (λ = 0.94 and λ = 0.97), GARCH(1,1) and EGARCH(p, q) models. In order to test the significance of the difference among particular volatility series forecasted by these different methods, test techniques suggested by Diebold and Mariano (1995) and West (1996) are used. Accordingly, the findings indicate that the differences in the pricing of the USD-TL and Euro-TL denominated call-put option contracts are statistically significant for some volatility forecasting methods.
    Keywords: Option Pricing; European Type Vanilla Options; Historical Volatility; Volatility Estimation Models; Forecast Comparison
    JEL: G19
    Date: 2011–01–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:34369&r=for
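    Sketch (Python): the RiskMetrics-style EWMA variance recursion with λ = 0.94 used above, producing an annualized volatility of the kind that feeds an option-pricing formula; the simulated returns, the 30-observation seed and the annualization by sqrt(252) are illustrative assumptions.
      import numpy as np

      def ewma_volatility(returns, lam=0.94):
          """EWMA recursion: sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}^2."""
          returns = np.asarray(returns, dtype=float)
          sigma2 = returns[:30].var()              # seed with an initial sample variance
          for r in returns:
              sigma2 = lam * sigma2 + (1.0 - lam) * r ** 2
          return np.sqrt(sigma2)

      # toy daily USD-TL-like log returns (illustrative only)
      rng = np.random.default_rng(7)
      r = rng.normal(0, 0.01, 750)
      print(ewma_volatility(r, lam=0.94) * np.sqrt(252))   # rough annualization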
  14. By: Lorenzo Pascual; Esther Ruiz; Diego Fresoli
    Abstract: In this paper, we show how to simplify the construction of bootstrap prediction densities in multivariate VAR models by avoiding the backward representation. Bootstrap prediction densities are attractive because they incorporate parameter uncertainty without any particular assumption about the error distribution. What is more, they allow the construction of densities for more than one step ahead, which are unknown asymptotically. The main advantage of the new procedure is its simplicity, without losing the good performance of bootstrap procedures. Furthermore, by avoiding the backward representation, its asymptotic validity can be proved without relying on the assumption of Gaussian errors. The procedure proposed in this paper can be implemented to obtain prediction densities in models without a backward representation as, for example, models with MA components or GARCH disturbances. By comparing the finite sample performance of the proposed procedure with those of alternatives, we show that nothing is lost when using it. Finally, we implement the procedure to obtain prediction regions for US quarterly future inflation, unemployment and GDP growth.
    Keywords: Non-Gaussian VAR models, Prediction cubes, Prediction density, Prediction regions, Prediction ellipsoids, Resampling methods
    Date: 2011–10
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws113426&r=for
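    Sketch (Python): a purely forward bootstrap of a VAR(1), roughly in the spirit of the procedure described above: residuals are resampled to rebuild the sample forwards, the model is re-estimated on each bootstrap sample (this is where parameter uncertainty enters), and future paths are simulated conditional on the last observed values, with no backward representation; the VAR(1) order, toy data and replication count are illustrative assumptions, not the authors' code.
      import numpy as np

      def fit_var1(Y):
          """OLS estimates of Y_t = c + A Y_{t-1} + e_t; returns (c, A, residuals)."""
          X = np.column_stack([np.ones(len(Y) - 1), Y[:-1]])
          coef, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)
          return coef[0], coef[1:].T, Y[1:] - X @ coef

      def forward_bootstrap_paths(Y, horizon=4, n_boot=500, seed=0):
          """Bootstrap prediction paths built entirely forwards."""
          rng = np.random.default_rng(seed)
          c, A, resid = fit_var1(Y)
          paths = np.empty((n_boot, horizon, Y.shape[1]))
          for b in range(n_boot):
              Yb = np.empty_like(Y)               # 1. rebuild a bootstrap sample forwards
              Yb[0] = Y[0]
              for t in range(1, len(Y)):
                  Yb[t] = c + A @ Yb[t - 1] + resid[rng.integers(len(resid))]
              cb, Ab, _ = fit_var1(Yb)            # 2. re-estimate: parameter uncertainty
              y = Y[-1]                           # 3. simulate ahead from the observed last value
              for h in range(horizon):
                  y = cb + Ab @ y + resid[rng.integers(len(resid))]
                  paths[b, h] = y
          return paths                            # quantiles across paths give prediction regions

      rng = np.random.default_rng(8)
      Y = rng.normal(size=(200, 3)).cumsum(axis=0) * 0.02   # toy stand-ins for inflation, unemployment, growth
      paths = forward_bootstrap_paths(Y, horizon=4, n_boot=200)
      print(np.quantile(paths[:, -1, 0], [0.05, 0.95]))     # 90% interval, variable 1, four steps ahead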
  15. By: Hubert De La Bruslerie (DRM - Dauphine Recherches en Management - CNRS : UMR7088 - Université Paris Dauphine - Paris IX); Heger Gabteni (PRISM-Sorbonne - Université Panthéon-Sorbonne - Paris I : EA)
    Abstract: The purpose of this study is to determine whether the process of filtering financial information voluntarily disclosed by firms was modified by the introduction of the IFRS. Voluntary information disclosed by French firms during the 2003-2008 period is compiled. This original dataset includes several years both before and after the introduction of the IFRS in the European Union in 2005. We use regression analysis to identify the determinants of the communication policies of the listed firms followed in this study. We show that the publication score indicates, for some firms, how much useful qualitative information is brought to the market. In particular, we show that highly communicative firms reduce information asymmetry as measured by the dispersion of analysts' earnings forecasts. The voluntary disclosure of information and earnings forecasts by analysts are endogenous and exhibit a complex two-way relationship. Voluntary communication policies did not change with the introduction of the IFRS.
    Keywords: publication score, voluntary disclosure, financial communication, information policy, IFRS introduction, analysts' forecasts
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:halshs-00636602&r=for
  16. By: Lundberg, Mattias (CTS); Jenpanitsub, Anchalee (Mass Rapid Transit Authority of Thailand)
    Abstract: Cost overrun of transport projects is one of the most important problems in transport planning. It also makes the results of cost-benefit analyses uncertain, thus decreasing their usefulness for decision making. In recent years more emphasis has been put on improving cost calculations and reducing cost overruns, in Sweden and internationally. Still, cost overruns have not decreased. We find that the average cost overrun in Swedish road projects is similar to that in other countries, while for rail it is lower than in other countries. Small projects (< 100 million SEK) have much higher cost overruns than large projects and constitute a large share of total overruns. A project type with large overruns, both in absolute and relative terms, is new rail tracks on existing lines. To improve cost estimates in Sweden, the Successive Calculation method has recently been applied. We find that the variance is significantly lower in these estimates than in actual outcomes, and that the difference between projects in different planning stages is surprisingly small. Another method, Reference Class Forecasting, is demonstrated in two case studies; it results in higher required uplifts. An interesting way forward would be to develop risk-based estimating based on principal component analysis. To do that, a database needs to be collected, which in turn demands better follow-up procedures.
    Keywords: Cost overrun; cost estimates; actual costs; successive calculation; reference class forecasting
    JEL: R40 R42
    Date: 2011–11–02
    URL: http://d.repec.org/n?u=RePEc:hhs:ctswps:2011_011&r=for
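    Sketch (Python): the core arithmetic of Reference Class Forecasting: the required uplift for a given acceptable risk of overrun is read off the empirical distribution of overruns in a reference class of comparable past projects; the figures below are made up, not the Swedish data.
      import numpy as np

      def required_uplift(past_overruns, acceptable_risk=0.2):
          """Budget uplift such that the chance of exceeding the uplifted budget is at
          most `acceptable_risk`, judged by the empirical distribution of past overruns."""
          past_overruns = np.asarray(past_overruns, dtype=float)   # (actual - estimate) / estimate
          return np.quantile(past_overruns, 1.0 - acceptable_risk)

      past_overruns = [0.05, 0.10, -0.02, 0.30, 0.45, 0.12, 0.60, 0.08, 0.25, 0.18]
      print(required_uplift(past_overruns, acceptable_risk=0.2))   # uplift for 80% certainty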
  17. By: Mumtaz, Haroon (Bank of England)
    Abstract: A large empirical literature has examined the transmission mechanism of structural shocks in great detail. The possible role played by changes in the volatility of shocks has largely been overlooked in vector autoregression based applications. This paper proposes an extended vector autoregression where the volatility of structural shocks is allowed to be time-varying and to have a direct impact on the endogenous variables included in the model. The proposed model is applied to US data to consider the potential impact of changes in the volatility of monetary policy shocks. The results suggest that while an increase in this volatility has a statistically significant impact on GDP growth and inflation, the relative contribution of these shocks to the forecast error variance of these variables is estimated to be small.
    Keywords: Vector autoregression; stochastic volatility; particle filter.
    JEL: E30 E32
    Date: 2011–10–31
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:0437&r=for
  18. By: Jonsson, Robert (Statistical Research Unit, Department of Economics, School of Business, Economics and Law, Göteborg University)
    Abstract: Markov chains (MCs) have been used to study how the health states of patients progress in time. With few exceptions, the studies have been based on the questionable assumptions that the MC has order m = 1 and is homogeneous in time. In this paper a three-state non-homogeneous MC model is introduced that allows m to vary. It is demonstrated how wrong assumptions about homogeneity and about the value of m can invalidate predictions of future health states. This can in turn seriously bias a cost-benefit analysis when costs are attached to the predicted outcomes. The present paper only considers problems connected with model construction and estimation. Problems of testing for a proper value of m and of homogeneity are treated in a subsequent paper. Data on work resumption among sick-listed women and men are used to illustrate the theory. A non-homogeneous MC with m = 2 was well fitted to the data for both sexes. The essential difference between the rehabilitation processes for the two sexes was that men had a higher chance of moving from the intermediate health state to the state ‘healthy’, while women tended to remain in the intermediate state for a longer time.
    Keywords: Rehabilitation; transition probability; prediction; Maximum Likelihood
    JEL: C10
    Date: 2011–10–31
    URL: http://d.repec.org/n?u=RePEc:hhs:gunsru:2011_006&r=for
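    Sketch (Python): maximum-likelihood estimation of an order-m Markov chain by relative transition counts; unlike the paper's model, this toy version assumes time homogeneity (a non-homogeneous version would estimate separate matrices per period), and the three-state coding and simulated sequence are purely illustrative.
      import numpy as np

      def fit_markov_order_m(states, n_states=3, m=2):
          """P(next state | last m states) estimated by relative counts, the ML estimator
          for a time-homogeneous order-m chain."""
          counts = np.zeros((n_states,) * (m + 1))
          for i in range(len(states) - m):
              counts[tuple(states[i:i + m]) + (states[i + m],)] += 1
          denom = counts.sum(axis=-1, keepdims=True)
          return np.divide(counts, denom, out=np.zeros_like(counts), where=denom > 0)

      # states: 0 = sick-listed, 1 = intermediate, 2 = healthy (toy sequence, not the study data)
      rng = np.random.default_rng(9)
      seq = list(rng.integers(0, 3, 400))
      P = fit_markov_order_m(seq, n_states=3, m=2)
      print(P[1, 1])   # distribution of the next state after two consecutive intermediate states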

This nep-for issue is ©2011 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.