nep-for New Economics Papers
on Forecasting
Issue of 2015‒04‒11
thirteen papers chosen by
Rob J Hyndman
Monash University

  1. Real-Time Forecasting with a MIDAS VAR By Heiner Mikosch; Stefan Neuwirth
  2. Forecasting Financial Market Vulnerability in the U.S.: A Factor Model Approach By Hyeongwoo Kim; Wen Shi
  3. Macroeconomic Forecasting Starting from Survey Nowcasts By João Valle e Azevedo; Inês Gonçalves
  4. Do Phillips curves conditionally help to forecast inflation? By Dotsey, Michael; Fujita, Shigeru; Stark, Tom
  5. Forecasting Moscow Ambulance Trips By Filipp Bykov; Vladimir A. Gordin
  6. Oil Price Forecastability and Economic Uncertainty By Stelios Bekiros; Rangan Gupta; Alessia Paccagnini
  7. A Global Vector Autoregression (GVAR) model for regional labour markets and its forecasting performance with leading indicators in Germany By Schanne, Norbert
  8. Autoregressive moving average infinite hidden markov-switching models By Bauwens, Luc; Carpantier, Jean-François; Dufays, Arnaud
  9. Construction of value-at-risk forecasts under different distributional assumptions within a BEKK framework By Braione, Manuela; Scholtes, Nicolas K.
  10. A 5-sector DSGE model of Russia By Sergey Ivashchenko
  11. Modeling and forecasting persistent financial durations By Zikes, Filip; Barunik, Jozef; Shenai, Nikhil
  12. Forecasting comparison of long term component dynamic models for realized covariance matrices By BAUWENS, Luc; BRAIONE, Manuela; STORTI, Giuseppe
  13. Do Long Memory and Asymmetries Matter When Assessing Downside Return Risk? By Nico Katzke; Chris Garbers

  1. By: Heiner Mikosch (KOF Swiss Economic Institute, ETH Zurich, Switzerland); Stefan Neuwirth (KOF Swiss Economic Institute, ETH Zurich, Switzerland)
    Abstract: This paper presents a MIDAS-type mixed-frequency VAR forecasting model. First, we propose a general and compact mixed-frequency VAR framework using a stacked-vector approach. Second, we integrate the mixed-frequency VAR with a MIDAS-type Almon lag polynomial scheme designed to reduce the parameter space while keeping models flexible. We show how to recast the resulting non-linear MIDAS-type mixed-frequency VAR into a linear equation system that can be easily estimated. A pseudo out-of-sample forecasting exercise with US real-time data shows that the mixed-frequency VAR substantially improves predictive accuracy over a standard VAR across different VAR specifications. Forecast errors for, e.g., GDP growth decrease by 30 to 60 percent for forecast horizons up to six months and by around 20 percent for a forecast horizon of one year.
    Keywords: Forecasting, mixed frequency data, MIDAS, VAR, real time
    JEL: C53 E27
    Date: 2015–04
    URL: http://d.repec.org/n?u=RePEc:kof:wpskof:15-377&r=for
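    Code sketch: The Almon lag restriction mentioned in the abstract can be illustrated in isolation: constraining the high-frequency lag weights to a low-order polynomial in the lag index makes the restricted regression linear, so it can be estimated by plain OLS on transformed regressors. A minimal univariate Python sketch with hypothetical data (the paper's stacked mixed-frequency VAR is not reproduced here):
      import numpy as np

      def almon_regressors(x_hf, n_lags, poly_order):
          """Collapse n_lags high-frequency lags of x_hf into poly_order+1
          transformed regressors z_p[t] = sum_k k**p * x_hf[t-k], so the
          Almon-restricted lag polynomial is estimable by plain OLS."""
          T = len(x_hf)
          Z = np.zeros((T - n_lags, poly_order + 1))
          for p in range(poly_order + 1):
              for k in range(n_lags):
                  Z[:, p] += (k ** p) * x_hf[n_lags - k : T - k]
          return Z

      # Hypothetical example: 12 monthly lags, quadratic Almon polynomial
      rng = np.random.default_rng(0)
      x = rng.standard_normal(200)
      Z = almon_regressors(x, n_lags=12, poly_order=2)
      y = Z @ np.array([0.5, -0.05, 0.001]) + rng.standard_normal(len(Z))
      theta, *_ = np.linalg.lstsq(Z, y, rcond=None)
      # Implied lag weights recovered from the polynomial coefficients:
      weights = [sum(theta[p] * k ** p for p in range(3)) for k in range(12)]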
  2. By: Hyeongwoo Kim; Wen Shi
    Abstract: This paper presents a factor-based forecasting model for financial market vulnerability in the U.S. We estimate latent common factors via the method of principal components from 170 monthly macroeconomic series to forecast the Cleveland Financial Stress Index out of sample. Our factor models outperform both the random walk and the autoregressive benchmark models in out-of-sample predictability at short forecast horizons, a desirable feature since financial crises often arrive as a surprise. Interestingly, the first common factor, which plays a key role in predicting the financial vulnerability index, seems to be more closely related to real activity variables than to nominal variables. The recursive and the rolling window approaches with a 50% split point perform similarly well.
    Keywords: Financial Stress Index; Method of the Principal Component; Out-of-Sample Forecast; Ratio of Root Mean Square Prediction Error; Diebold-Mariano-West Statistic
    JEL: E44 E47 G01 G17
    Date: 2015–04
    URL: http://d.repec.org/n?u=RePEc:abn:wpaper:auwp2015-04&r=for
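    Code sketch: The two-step factor approach described above, principal components extracted from a large monthly panel followed by a predictive regression, can be sketched as follows. The data, the number of factors and the single-equation forecast are placeholders, not the authors' exact specification:
      import numpy as np

      def pc_factors(X, n_factors):
          """Extract principal-component factors from a (T x N) panel:
          standardize, eigen-decompose the covariance, project on the
          leading eigenvectors."""
          Xs = (X - X.mean(axis=0)) / X.std(axis=0)
          eigval, eigvec = np.linalg.eigh(np.cov(Xs, rowvar=False))
          order = np.argsort(eigval)[::-1][:n_factors]
          return Xs @ eigvec[:, order]

      # Predictive regression: stress index at t+h on factors at t
      rng = np.random.default_rng(1)
      X = rng.standard_normal((300, 170))        # stand-in: 170 monthly series
      stress = rng.standard_normal(300)          # stand-in for the stress index
      h = 3
      F = pc_factors(X, n_factors=2)
      A = np.column_stack([np.ones(300 - h), F[:-h]])
      beta, *_ = np.linalg.lstsq(A, stress[h:], rcond=None)
      forecast = np.r_[1.0, F[-1]] @ beta        # h-step-ahead point forecast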
  3. By: João Valle e Azevedo; Inês Gonçalves
    Abstract: We explore the use of nowcasts from the Philadelphia Survey of Professional Forecasters as a starting point for macroeconomic forecasting. Specifically, survey nowcasts are treated as an additional observation of the time series of interest. This simple approach delivers enhanced model performance through the straightforward use of timely information. Important gains in forecast accuracy are observed for multiple methods/models, especially at shorter horizons. Still, given that survey nowcasts are very hard to beat, this approach proves most useful as a means of developing a sharper forecasting routine for longer-term predictions.
    JEL: C14 C32 C51 C53
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:ptu:wpaper:w201502&r=for
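    Code sketch: The device of treating a survey nowcast as an additional observation is easy to mimic: append the nowcast to the observed series before fitting the model, so iterated forecasts start from the survey's current-period estimate. A toy AR(1) version with hypothetical data:
      import numpy as np

      def ar1_forecasts(y, horizon):
          """Fit an AR(1) by OLS and iterate point forecasts ahead."""
          A = np.column_stack([np.ones(len(y) - 1), y[:-1]])
          c, phi = np.linalg.lstsq(A, y[1:], rcond=None)[0]
          path, last = [], y[-1]
          for _ in range(horizon):
              last = c + phi * last
              path.append(last)
          return np.array(path)

      rng = np.random.default_rng(2)
      y = rng.standard_normal(120).cumsum() * 0.1   # observed series up to t-1
      survey_nowcast = 0.4                          # stand-in SPF nowcast for t
      # Treat the nowcast as one more observation, then forecast from it:
      forecasts = ar1_forecasts(np.append(y, survey_nowcast), horizon=4)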
  4. By: Dotsey, Michael (Federal Reserve Bank of Philadelphia); Fujita, Shigeru (Federal Reserve Bank of Philadelphia); Stark, Tom (Federal Reserve Bank of Philadelphia)
    Abstract: This paper reexamines the forecasting ability of Phillips curves from both an unconditional and conditional perspective by applying the method developed by Giacomini and White (2006). We find that forecasts from our Phillips curve models tend to be unconditionally inferior to those from our univariate forecasting models. We also find, however, that conditioning on the state of the economy sometimes does improve the performance of the Phillips curve model in a statistically significant manner. When we do find improvement, it is asymmetric: Phillips curve forecasts tend to be more accurate when the economy is weak and less accurate when the economy is strong. Any improvement we find, however, vanishes over the post-1984 period. Supersedes WP 11-40.
    Keywords: Phillips curve; unemployment gap; conditional predictive ability
    JEL: C53 E37
    Date: 2015–03–25
    URL: http://d.repec.org/n?u=RePEc:fip:fedpwp:15-16&r=for
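    Code sketch: A bare-bones version of the Giacomini-White (2006) conditional predictive ability test for one-step forecasts; under the null, the two forecasts are equally accurate conditional on the chosen instruments. All variable names in the usage comment are hypothetical:
      import numpy as np
      from scipy import stats

      def gw_test(loss1, loss2, instruments):
          """loss1, loss2: (T,) forecast losses; instruments: (T, q) variables
          known when each forecast was made (e.g. a constant and a slack
          indicator). Returns the chi-squared statistic and p-value."""
          d = loss1 - loss2
          Z = instruments * d[:, None]        # z_t = h_t * d_{t+1}
          zbar = Z.mean(axis=0)
          omega = Z.T @ Z / len(d)            # uncentered second moment
          stat = len(d) * zbar @ np.linalg.solve(omega, zbar)
          return stat, stats.chi2.sf(stat, df=Z.shape[1])

      # Example with hypothetical squared-error losses and a recession dummy:
      # stat, p = gw_test(l_phillips, l_univ, np.column_stack([np.ones(T), slack]))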
  5. By: Filipp Bykov (National Research University Higher School of Economics); Vladimir A. Gordin (National Research University Higher School of Economics)
    Abstract: This paper presents a method and computational technology for forecasting ambulance trips. We use statistical information on the number of trips in 2009-2013, a meteorological archive, and the corresponding archive of meteorological forecasts for the same period, taking social and meteorological predictors into account simultaneously. The method may be used operationally for planning in the ambulance service, and it may be applied to all trips as well as to specific disease subgroups. The method and the technology can be applied to any megalopolis for which the corresponding medical and meteorological information is available.
    Keywords: weather forecasting, trips forecasting, disease, air temperature, correlation function, spline, optimization
    JEL: C32 C52 C53 C61 C63 I1
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:hig:wpaper:36sti2015&r=for
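    Code sketch: The abstract gives no formulas, but its basic ingredient, regressing daily trip counts on social and meteorological predictors simultaneously, can be illustrated with entirely hypothetical data (a quadratic in temperature stands in for the paper's splines):
      import numpy as np

      rng = np.random.default_rng(3)
      n_days = 365 * 2
      # Stand-in daily air temperature with a seasonal cycle:
      temp = 10 + 15 * np.sin(2 * np.pi * np.arange(n_days) / 365) \
             + rng.standard_normal(n_days)
      dow = np.arange(n_days) % 7                   # day of week
      trips = 900 - 4 * temp + 30 * (dow >= 5) + rng.normal(0, 25, n_days)

      # Social (day-of-week dummies) and meteorological (temperature and its
      # square) predictors enter one regression together:
      X = np.column_stack([np.ones(n_days), temp, temp ** 2,
                           *(1.0 * (dow == d) for d in range(1, 7))])
      beta, *_ = np.linalg.lstsq(X, trips, rcond=None)
      fitted = X @ beta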
  6. By: Stelios Bekiros; Rangan Gupta; Alessia Paccagnini
    Abstract: Information on economic policy uncertainty (EPU) does matter in predicting oil returns, especially when accounting for omitted nonlinearities in the relationship between these two variables via a time-varying coefficient approach. In this work, we compare the forecastability of standard, Bayesian and time-varying-parameter (TVP) VAR models against random-walk and benchmark AR models. Our results indicate that over the period 1900:1-2014:2 the time-varying VAR model with stochastic volatility outperforms all alternative models.
    Keywords: Oil prices, Economic policy uncertainty, Forecasting
    JEL: C22 C32 C53 E60 Q41
    Date: 2015–04
    URL: http://d.repec.org/n?u=RePEc:mib:wpaper:298&r=for
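    Code sketch: The comparison rests on a standard recursive pseudo-out-of-sample design. A stripped-down version comparing the random-walk and AR benchmarks on a generic series (the authors' TVP-VAR with stochastic volatility requires Bayesian machinery well beyond a sketch):
      import numpy as np

      def recursive_oos_rmse(y, first_origin):
          """One-step pseudo-out-of-sample RMSEs for a random walk and an
          OLS AR(1), the AR re-estimated at every forecast origin."""
          e_rw, e_ar = [], []
          for t in range(first_origin, len(y) - 1):
              e_rw.append(y[t + 1] - y[t])          # RW forecast is y_t
              A = np.column_stack([np.ones(t), y[:t]])
              c, phi = np.linalg.lstsq(A, y[1:t + 1], rcond=None)[0]
              e_ar.append(y[t + 1] - (c + phi * y[t]))
          return (np.sqrt(np.mean(np.square(e_rw))),
                  np.sqrt(np.mean(np.square(e_ar))))

      rng = np.random.default_rng(4)
      returns = rng.standard_normal(400) * 0.05     # stand-in for oil returns
      rmse_rw, rmse_ar = recursive_oos_rmse(returns, first_origin=200)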
  7. By: Schanne, Norbert (Institut für Arbeitsmarkt- und Berufsforschung (IAB), Nürnberg [Institute for Employment Research, Nuremberg, Germany])
    Abstract: "It is broadly accepted that two aspects regarding the modeling strategy are essential for the accuracy of forecast: a parsimonious model focusing on the important structures, and the quality of prospective information. Here, we establish a Global VAR framework, a technique that considers a variety of spatio-temporal dynamics in a multivariate setting, that allows for spatially heterogeneous slope coefficients, and that is nevertheless feasible for data without extremely long time dimension. Second, we use this framework to analyse the prospective information regarding the economy due to spatial co-development of regional labour markets in Germany. The predictive content of the spatially interdependent variables is compared with the information content of various leading indicators which describe the general economic situation, the tightness of labour markets and environmental impacts like weather. The forecasting accuracy of these indicators is investigated for German regional labour-market data in simulated forecasts at different horizons and for several periods. Germany turns out to have no economically dominant region (which reflects the polycentric structure of the country). The regions do not follow a joint stable long run trend which could be used to implement cointegration. Accounting for spatial dependence improves the forecast accuracy compared to a model without spatial linkages while using the same leading indicator. Amongst the tested leading indicators, only few produce more accurate forecasts when included in a GVAR model, than the GVAR without indicator. IAB-" (Author's abstract, IAB-Doku) ((en))
    Keywords: forecast accuracy, forecasting model, regional factors, indicator construction
    JEL: C23 E24 E27 R12
    Date: 2015–03–30
    URL: http://d.repec.org/n?u=RePEc:iab:iabdpa:201513&r=for
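    Code sketch: At the core of any GVAR is the construction of "foreign" (star) variables: for each region, a weighted average of the other regions' series, which then enters that region's own equation alongside domestic lags. A minimal sketch with hypothetical weights and data:
      import numpy as np

      def star_variables(X, W):
          """X: (T x R) panel of a regional variable; W: (R x R) row-normalized
          weight matrix with zero diagonal (e.g. commuting or contiguity
          weights). Returns the (T x R) panel of region-specific foreign
          averages xstar[t, i] = sum_j W[i, j] * X[t, j]."""
          assert np.allclose(W.sum(axis=1), 1.0) and np.allclose(np.diag(W), 0.0)
          return X @ W.T

      # Each region i is then modeled with spatially heterogeneous slopes, e.g.
      #   x[t, i] = a_i + b_i * x[t-1, i] + c_i * xstar[t-1, i] + e[t, i].
      rng = np.random.default_rng(5)
      R, T = 5, 100
      W = rng.random((R, R))
      np.fill_diagonal(W, 0.0)
      W /= W.sum(axis=1, keepdims=True)
      xstar = star_variables(rng.standard_normal((T, R)), W)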
  8. By: Bauwens, Luc (Université catholique de Louvain, CORE, Belgium); Carpantier, Jean-François (CREA, University of Luxembourg); Dufays, Arnaud (Université catholique de Louvain, CORE, Belgium)
    Abstract: Markov-switching models are usually specified under the assumption that all the parameters change when a regime switch occurs. Relaxing this hypothesis and being able to detect which parameters evolve over time is relevant for interpreting the changes in the dynamics of the series, for specifying models parsimoniously, and may be helpful in forecasting. We propose the class of sticky infinite hidden Markov-switching autoregressive moving average models, in which we disentangle the break dynamics of the mean and the variance parameters. In this class, the number of regimes is possibly infinite and is determined when estimating the model, thus avoiding the need to set this number by a model choice criterion. We develop a new Markov chain Monte Carlo estimation method that solves the path dependence issue due to the moving average component. Empirical results on macroeconomic series illustrate that the proposed class of models dominates the model with fixed parameters in terms of point and density forecasts.
    Keywords: ARMA, Bayesian inference, Dirichlet process, Forecasting, Markov-switching
    JEL: C11 C15 C22 C53 C58
    Date: 2015–02–13
    URL: http://d.repec.org/n?u=RePEc:cor:louvco:2015007&r=for
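    Code sketch: The gain from disentangling mean and variance breaks can be seen in a toy simulation with separate two-state Markov chains for the two parameter groups (the paper's sticky infinite-regime Dirichlet-process machinery is not attempted here):
      import numpy as np

      def simulate_chain(P, T, rng):
          """Sample a Markov chain with transition matrix P for T periods."""
          s = np.zeros(T, dtype=int)
          for t in range(1, T):
              s[t] = rng.choice(len(P), p=P[s[t - 1]])
          return s

      rng = np.random.default_rng(6)
      T = 500
      sticky = np.array([[0.98, 0.02], [0.02, 0.98]])
      s_mu = simulate_chain(sticky, T, rng)    # regime chain for the mean
      s_sig = simulate_chain(sticky, T, rng)   # separate chain for the variance
      mu, sig = np.array([0.0, 1.0]), np.array([0.5, 2.0])
      y = mu[s_mu] + sig[s_sig] * rng.standard_normal(T)
      # Mean and variance regimes can now break at different dates, unlike a
      # standard MS model where a single chain drives all parameters.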
  9. By: Braione, Manuela (Université catholique de Louvain, CORE, Belgium); Scholtes, Nicolas K. (Université catholique de Louvain, CORE, Belgium)
    Abstract: Financial asset returns are known to be conditionally heteroskedastic and generally non-normally distributed, fat-tailed and often skewed. In order to account for both the skewness and the excess kurtosis in returns, we combine the BEKK model from the multivariate GARCH literature with different multivariate densities for the returns. The set of distributions we consider comprises the normal, Student, Multivariate Exponential Power and their skewed counterparts. Applying this framework to a sample of ten assets from the Dow Jones Industrial Average Index, we compare the performance of equally-weighted portfolios derived from the symmetric and skewed distributions in forecasting out-of-sample Value-at-Risk. The accuracy of the VaR forecasts is assessed by implementing standard statistical backtesting procedures. The results unanimously show that the inclusion of fat-tailed densities into the model specification yields more accurate VaR forecasts, while the further addition of skewness does not lead to significant improvements.
    Keywords: Dow Jones industrial average, BEKK model, maximum likelihood, value-at-risk
    JEL: C01 C22 C52 C58
    Date: 2014–11–18
    URL: http://d.repec.org/n?u=RePEc:cor:louvco:2014059&r=for
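    Code sketch: Among the standard backtesting procedures referenced is Kupiec's unconditional coverage (proportion-of-failures) test, which is compact enough to state in full:
      import numpy as np
      from scipy.stats import chi2

      def kupiec_pof(violations, alpha):
          """Kupiec proportion-of-failures test. violations: boolean array,
          True when the realized loss exceeded the alpha-level VaR forecast.
          Under correct unconditional coverage, LR ~ chi-squared(1).
          (Requires at least one violation and one non-violation.)"""
          n = len(violations)
          x = int(np.sum(violations))
          pihat = x / n
          log_lik = lambda p: (n - x) * np.log(1 - p) + x * np.log(p)
          lr = -2.0 * (log_lik(alpha) - log_lik(pihat))
          return lr, chi2.sf(lr, df=1)

      # e.g. 9 violations in 500 days at a 1% VaR:
      hits = np.zeros(500, dtype=bool)
      hits[:9] = True
      lr, pval = kupiec_pof(hits, alpha=0.01)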
  10. By: Sergey Ivashchenko
    Abstract: We build a dynamic stochastic general equilibrium model with five sectors (1 - mining; 2 - manufacturing; 3 - electricity, gas and water; 4 - trade, transport and communication; 5 - other). The model is estimated on 29 time series of Russian statistical data. We analyse the out-of-sample forecasting performance of the model and derive implications for economic policy.
    Keywords: DSGE, industries, out of sample forecasts
    JEL: E23 E27 E32 E37 E60
    Date: 2015–03–06
    URL: http://d.repec.org/n?u=RePEc:eus:wpaper:ec0115&r=for
  11. By: Zikes, Filip; Barunik, Jozef; Shenai, Nikhil
    Abstract: This paper introduces the Markov-Switching Multifractal Duration (MSMD) model by adapting the MSM stochastic volatility model of Calvet and Fisher (2004) to the duration setting. Although the MSMD process is exponential β-mixing, as we show in the paper, it is capable of generating highly persistent autocorrelation. We study analytically and by simulation how this feature of durations generated by the MSMD process propagates to counts and realized volatility. We employ a quasi-maximum likelihood estimator of the MSMD parameters based on the Whittle approximation and establish its strong consistency and asymptotic normality for general MSMD specifications. We show that the Whittle estimation is a computationally simple and fast alternative to maximum likelihood. Finally, we compare the performance of the MSMD model with competing short- and long-memory duration models in an out-of-sample forecasting exercise based on price durations of three major foreign exchange futures contracts. The results of the comparison show that the MSMD and the Long Memory Stochastic Duration model perform similarly and are superior to the short-memory Autoregressive Conditional Duration models.
    Keywords: price durations,long memory,multifractal models,realized volatility,Whittle estimation
    JEL: C13 C58 G17
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:zbw:fmpwps:36&r=for
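    Code sketch: Whittle estimation, which the paper advocates for speed, can be illustrated on the simplest long-memory case, ARFIMA(0,d,0), whose spectral density shape is g(lambda; d) = |2 sin(lambda/2)|^(-2d); the MSMD spectral density itself is more involved and is not reproduced here:
      import numpy as np
      from scipy.optimize import minimize_scalar

      def whittle_d(x):
          """Profiled Whittle estimator of the long-memory parameter d for an
          ARFIMA(0,d,0): minimize log(mean(I_j / g_j)) + mean(log g_j) over d,
          where I_j is the periodogram at Fourier frequencies."""
          n = len(x)
          lam = 2 * np.pi * np.arange(1, (n - 1) // 2 + 1) / n
          I = np.abs(np.fft.fft(x - x.mean())[1:len(lam) + 1]) ** 2 / (2 * np.pi * n)
          logs = np.log(2 * np.sin(lam / 2))     # log|2 sin(lam/2)|

          def objective(d):
              log_g = -2 * d * logs              # log of the spectral shape
              return np.log(np.mean(I * np.exp(-log_g))) + np.mean(log_g)

          return minimize_scalar(objective, bounds=(-0.49, 0.49),
                                 method='bounded').x

      # For i.i.d. data the estimate should be near zero:
      # dhat = whittle_d(np.random.default_rng(7).standard_normal(2048))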
  12. By: BAUWENS, Luc (Université catholique de Louvain, CORE, Belgium); BRAIONE, Manuela (Université catholique de Louvain, CORE, Belgium); STORTI, Giuseppe (Università di Salerno)
    Abstract: Novel model specifications that include a time-varying long-run component in the dynamics of realized covariance matrices are proposed. The adopted modeling framework allows the secular component to enter the model structure either in an additive fashion or as a multiplicative factor, and to be specified parametrically, using a MIDAS filter, or non-parametrically. Estimation is performed by maximizing a Wishart quasi-likelihood function. The one-step ahead forecasting performance of the models is assessed by means of three approaches: the Model Confidence Set, (global) minimum variance portfolios and Value-at-Risk. The results provide evidence in favour of the hypothesis that the proposed models outperform benchmarks incorporating a constant long-run component, both in- and out-of-sample.
    Keywords: Realized covariance, component dynamic models, MIDAS, minimum variance portfolio, Model Confidence Set, Value-at-Risk
    JEL: C13 C32 C58
    Date: 2014–11–30
    URL: http://d.repec.org/n?u=RePEc:cor:louvco:2014053&r=for
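    Code sketch: One of the three evaluation criteria, the global minimum variance portfolio, maps a one-step-ahead covariance forecast Sigma into weights w = inv(Sigma) i / (i' inv(Sigma) i), where i is a vector of ones; a better forecast yields a portfolio with lower realized variance. With a hypothetical forecast matrix:
      import numpy as np

      def gmv_weights(sigma):
          """Global minimum variance weights for a covariance forecast."""
          ones = np.ones(len(sigma))
          w = np.linalg.solve(sigma, ones)
          return w / w.sum()

      # Forecasts are then ranked by the realized variance w' S_realized w.
      sigma_fc = np.array([[0.04, 0.01],
                           [0.01, 0.09]])   # hypothetical one-step forecast
      w = gmv_weights(sigma_fc)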
  13. By: Nico Katzke (Department of Economics, University of Stellenbosch); Chris Garbers (Department of Economics, University of Stellenbosch)
    Abstract: In this paper we test whether, at the sector level, return series in South Africa exhibit long memory and asymmetries and, more specifically, whether these effects should be accounted for when assessing downside risk. The purpose of this analysis is not to identify the optimal downside risk assessment model, nor to reaffirm the widely documented stylized facts of long memory and asymmetry in asset return series. Rather, we set out to establish whether accounting for these effects, and allowing for more flexibility in second-order persistence models, leads to improved risk assessments. We use several variants of the widely used GARCH family of second-order persistence models that control for these effects, and we compare the downside risk estimates of the different models using Value-at-Risk measures and their out-of-sample performance. Our findings confirm that controlling for asymmetries and long memory in volatility models improves risk management calculations.
    Keywords: Value-at-Risk, Expected Shortfall, GARCH, Fractional Integration, Kupiec back-testing procedure
    JEL: C22 G13 G17
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:sza:wpaper:wpapers238&r=for
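    Code sketch: The asymmetry the authors control for can be illustrated with a GJR-GARCH(1,1) variance recursion, in which negative shocks receive extra weight gamma; the one-day VaR is then a distribution quantile scaled by the filtered volatility (Gaussian here for brevity, although the paper's point is that fat-tailed, long-memory specifications do better):
      import numpy as np
      from scipy.stats import norm

      def gjr_garch_var(returns, omega, alpha, gamma, beta, level=0.01):
          """Filter GJR-GARCH(1,1) conditional variances through `returns`
          and return the next day's VaR at `level` (a positive loss number).
          Parameters are taken as given rather than estimated."""
          var = np.var(returns)                 # initialize at sample variance
          for r in returns:
              neg = 1.0 if r < 0 else 0.0       # leverage/asymmetry indicator
              var = omega + (alpha + gamma * neg) * r ** 2 + beta * var
          return -norm.ppf(level) * np.sqrt(var)

      rng = np.random.default_rng(8)
      r = rng.standard_normal(1000) * 0.01      # stand-in for sector returns
      var_1d = gjr_garch_var(r, omega=1e-6, alpha=0.05, gamma=0.08, beta=0.9)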

This nep-for issue is ©2015 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.