nep-for New Economics Papers
on Forecasting
Issue of 2013‒12‒29
twenty-two papers chosen by
Rob J Hyndman
Monash University

  1. Asymptotic Inference about Predictive Accuracy Using High Frequency Data By Jia Li; Andrew J. Patton
  2. Comparing variable selection techniques for linear regression: LASSO and Autometrics By Camila Epprecht; Dominique Guegan; Álvaro Veiga
  3. Forecasting Brazilian inflation by its aggregate and disaggregated data: a test of predictive power by forecast horizon By Carlos, Thiago C.; Marçal, Emerson Fernandes
  4. Currency Forecast Errors at Times of Low Interest Rates: Evidence from Survey Data on the Yen/Dollar Exchange Rate By MacDonald, Ronald; Nagayasu, Jun
  5. Model Switching and Model Averaging in Time-Varying Parameter Regression Models By Miguel, Belmonte; Gary, Koop
  6. Dynamic Copula Models and High Frequency Data By Irving Arturo De Lira Salvatierra; Andrew J. Patton
  7. Using VARs and TVP-VARs with Many Macroeconomic Variables By Gary, Koop
  8. Daily House Price Indexes: Construction, Modeling, and Longer-Run Predictions By Tim Bollerslev; Andrew J. Patton; Wang Wenjing
  9. The Impact of Hedge Funds on Asset Markets By Matthias Kruttli; Andrew J. Patton; Tarun Ramadorai
  10. Regime Switching Stochastic Volatility with Skew, Fat Tails and Leverage using Returns and Realized Volatility Contemporaneously By Trojan, Sebastian
  11. Assessing Measures of Order Flow Toxicity via Perfect Trade Classification By Torben G. Andersen; Oleg Bondarenko
  12. Improving prediction of stock market indices by analyzing the psychological states of twitter users By Alexander Porshnev; Ilya Redkin; Alexey Shevchenko
  13. Tales of three budgets: Changes in long-term fiscal projections through the GFC and beyond By Matthew Bell; Paul Rodway
  14. Likelihood inference in non-linear term structure models: the importance of the lower bound By Andreasen, Martin; Meldrum, Andrew
  15. Interventions and inflation expectations in an inflation targeting economy By Pablo Pincheira
  16. Finding the Best Indicators to Identify the Poor By Adama Bah
  17. Reflecting on the VPN Dispute By Torben G. Andersen; Oleg Bondarenko
  18. Working Paper 190 - Early Warning Systems and Systemic Banking Crises in Low Income Countries: A Multinomial Logit Approach By Giovanni Caggiano; Calice Pietro; Leone Leonida
  19. Modeling and predicting the CBOE market volatility index By Fernandes, Marcelo; Medeiros, Marcelo C.; Scharth, Marcel
  20. Adaptive trend estimation in financial time series via multiscale change-point-induced basis recovery By Schröder, Anna Louise; Fryzlewicz, Piotr
  21. Financial and economic downturns in OECD countries By Haavio, Markus; Mendicino , Caterina; Punzi , Maria Teresa
  22. Correlation Dynamics and International Diversification Benefits By Peter Christoffersen; Vihang R. Errunza; Kris Jacobs; Xisong Jin

  1. By: Jia Li; Andrew J. Patton
    Abstract: This paper provides a general framework that enables many existing inference methods for predictive accuracy to be used in applications that involve forecasts of latent target variables. Such applications include the forecasting of volatility, correlation, beta, quadratic variation, jump variation, and other functionals of an underlying continuous-time process. We provide primitive conditions under which a "negligibility" result holds, and thus the asymptotic size of standard predictive accuracy tests, implemented using a high-frequency proxy for the latent variable, is controlled. An extensive simulation study verifies that the asymptotic results apply in a range of empirically relevant applications, and an empirical application to correlation forecasting is presented.
    Keywords: Forecast evaluation, realized variance, volatility, jumps, semimartingale
    JEL: C53 C22 C58 C52 C32
    Date: 2013
  2. By: Camila Epprecht (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Paris I - Panthéon-Sorbonne, Pontifical Catholic University of Rio de Janeiro - Department of Electrical Engineering); Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Paris I - Panthéon-Sorbonne); Álvaro Veiga (Pontifical Catholic University of Rio de Janeiro - Department of Electrical Engineering)
    Abstract: In this paper, we compare two different variable selection approaches for linear regression models: Autometrics (automatic general-to-specific selection) and LASSO (ℓ1-norm regularization). In a simulation study, we assess the performance of the methods in terms of predictive power (out-of-sample forecasting) and selection and estimation of the correct model (in-sample). The case where the number of candidate variables exceeds the number of observations is considered as well. We also analyze the properties of the estimators, comparing them to the oracle estimator. Finally, we compare both methods in an application to GDP forecasting.
    Keywords: Model selection; variable selection; GETS; Autometrics; LASSO; adaptive LASSO; sparse models; oracle property; time series; GDP forecasting
    Date: 2013–11
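The ℓ1-penalized selection the paper benchmarks can be sketched with a toy coordinate-descent LASSO; the simulated design, the penalty `lam`, and the helper `lasso_coordinate_descent` are illustrative assumptions, not the authors' setup (which also covers Autometrics and the adaptive LASSO).

```python
import numpy as np

def lasso_coordinate_descent(X, y, lam, n_iter=200):
    """Minimize (1/2n)||y - Xb||^2 + lam*||b||_1 by cyclic coordinate descent."""
    n, p = X.shape
    b = np.zeros(p)
    col_norms = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]          # partial residual excluding variable j
            rho = X[:, j] @ r / n
            # soft-thresholding step: small correlations are zeroed out
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_norms[j]
    return b

# Toy data: only the first two of five candidate regressors matter.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(200)
beta = lasso_coordinate_descent(X, y, lam=0.1)
selected = [j for j in range(5) if abs(beta[j]) > 1e-6]
```

The soft-thresholding step is what gives LASSO its sparse, variable-selecting behavior; the shrinkage it induces is the reason the paper also considers the adaptive LASSO and compares against the oracle estimator.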
  3. By: Carlos, Thiago C.; Marçal, Emerson Fernandes
    Abstract: This work compares the forecast efficiency of different types of methodologies applied to Brazilian consumer inflation (IPCA). We compare forecasting models using disaggregated and aggregated data over horizons of up to twelve months ahead. The disaggregated models were estimated by SARIMA at different levels of disaggregation. The aggregated models were estimated by time series techniques such as SARIMA, state-space structural models and Markov-switching. The forecast accuracy comparison is made using the Model Confidence Set procedure and the Diebold-Mariano procedure. We find evidence of forecast accuracy gains in models using more disaggregated data.
    Date: 2013–12–09
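The Diebold-Mariano comparison used above reduces to a t-type statistic on the loss differential between two forecasts. A minimal sketch for squared-error loss at horizon one (toy error series, not the IPCA data; longer horizons would need a HAC long-run variance):

```python
import math

def diebold_mariano(e1, e2):
    """DM statistic for equal predictive accuracy under squared-error loss, h=1."""
    d = [a * a - b * b for a, b in zip(e1, e2)]   # loss differential per period
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / n  # gamma_0 suffices at h=1
    return mean_d / math.sqrt(var_d / n)

# Model 2's errors are uniformly half the size of model 1's,
# so the statistic should be clearly positive (model 2 more accurate).
e1 = [0.8, -1.1, 0.9, -0.7, 1.2, -0.6, 1.0, -0.9]
e2 = [x / 2 for x in e1]
stat = diebold_mariano(e1, e2)
```

A statistic beyond roughly ±1.96 rejects equal accuracy at the 5% level under the asymptotic normal approximation.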
  4. By: MacDonald, Ronald; Nagayasu, Jun
    Abstract: Using survey expectations data and Markov-switching models, this paper evaluates the characteristics and evolution of investors' forecast errors about the yen/dollar exchange rate. Since our model is derived from the uncovered interest rate parity (UIRP) condition and our data cover a period of low interest rates, this study is also related to the forward premium puzzle and the currency carry trade strategy. We obtain the following results. First, with the same forecast horizon, exchange rate forecasts are homogeneous among different industry types, but within the same industry, exchange rate forecasts differ if the forecast time horizon is different. In particular, investors tend to undervalue the future exchange rate for long term forecast horizons; however, in the short run they tend to overvalue the future exchange rate. Second, while forecast errors are found to be partly driven by interest rate spreads, evidence against the UIRP is provided regardless of the forecasting time horizon; the forward premium puzzle becomes more significant in shorter term forecasting errors. Consistent with this finding, our coefficients on interest rate spreads provide indirect evidence of the yen carry trade over only a short term forecast horizon. Furthermore, the carry trade seems to be active when there is a clear indication that the interest rate will be low in the future.
    Keywords: Currency forecast errors, uncovered interest parity, forward premium puzzle, carry trade, Markov-switching model
    Date: 2013
  5. By: Miguel, Belmonte; Gary, Koop
    Abstract: This paper investigates the usefulness of switching Gaussian state space models as a tool for implementing dynamic model selection (DMS) or dynamic model averaging (DMA) in time-varying parameter regression models. DMS methods allow for model switching, where a different model can be chosen at each point in time. Thus, they allow the explanatory variables in the time-varying parameter regression model to change over time. DMA carries out model averaging in a time-varying manner. We compare our exact approach to DMA/DMS with a popular existing procedure which relies on forgetting factor approximations. In an application, we use DMS to select different predictors in an inflation forecasting exercise. We also compare different ways of implementing DMA/DMS and investigate whether they lead to similar results.
    Keywords: Model switching, forecast combination, switching state space model, inflation forecasting
    Date: 2013
  6. By: Irving Arturo De Lira Salvatierra; Andrew J. Patton
    Abstract: This paper proposes a new class of dynamic copula models for daily asset returns that exploits information from high frequency (intra-daily) data. We augment the generalized autoregressive score (GAS) model of Creal et al. (2012) with high frequency measures such as realized correlation to obtain a "GRAS" model. We find that the inclusion of realized measures significantly improves the in-sample fit of dynamic copula models across a range of U.S. equity returns. Moreover, we find that out-of-sample density forecasts from our GRAS models are superior to those from simpler models. Finally, we consider a simple portfolio choice problem to illustrate the economic gains from exploiting high frequency data for modeling dynamic dependence.
    Keywords: Realized correlation, realized volatility, dependence, forecasting, tail risk
    JEL: C32 C51 C58
    Date: 2013
  7. By: Gary, Koop
    Abstract: This paper discusses the challenges faced by the empirical macroeconomist and methods for surmounting them. These challenges arise due to the fact that macroeconometric models potentially include a large number of variables and allow for time variation in parameters. These considerations lead to models which have a large number of parameters to estimate relative to the number of observations. A wide range of approaches are surveyed which aim to overcome the resulting problems. We stress the related themes of prior shrinkage, model averaging and model selection. Subsequently, we consider a particular modelling approach in detail. This involves the use of dynamic model selection methods with large TVP-VARs. A forecasting exercise involving a large US macroeconomic data set illustrates the practicality and empirical success of our approach.
    Keywords: Bayesian VAR, forecasting, time-varying coefficients, state-space model
    Date: 2013
  8. By: Tim Bollerslev; Andrew J. Patton; Wang Wenjing
    Abstract: We construct daily house price indexes for ten major U.S. metropolitan areas. Our calculations are based on a comprehensive database of several million residential property transactions and a standard repeat-sales method that closely mimics the procedure used in the construction of the popular monthly Case-Shiller house price indexes. Our new daily house price indexes exhibit similar characteristics to other daily asset prices, with mild autocorrelation and strong conditional heteroskedasticity, which are well described by a relatively simple multivariate GARCH type model. The sample and model-implied correlations across house price index returns are low at the daily frequency, but rise monotonically with the return horizon, and are commensurate with existing empirical evidence for monthly and quarterly house price series. A simple model of daily house price index returns produces forecasts of monthly house price changes that are superior to various alternative forecast procedures based on lower frequency data, underscoring the informational advantages of our new, more finely sampled daily price series.
    Keywords: real estate, price indices, repeat sales index, high frequency data
    JEL: C43 C22 R30
    Date: 2013
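The repeat-sales method underlying such indexes regresses log price relatives of properties sold twice on period dummies (+1 at sale, -1 at purchase). A minimal sketch with hypothetical sale pairs, not the authors' transaction database or their GARCH layer:

```python
import numpy as np

# Paired sales: (period bought, period sold, log price ratio sold/bought).
pairs = [(0, 1, 0.05), (0, 2, 0.12), (1, 2, 0.06), (0, 1, 0.04), (1, 2, 0.08)]

T = 3  # periods 0..2; period 0 is the base with index value 1
X = np.zeros((len(pairs), T - 1))
y = np.array([r for _, _, r in pairs])
for i, (b, s, _) in enumerate(pairs):
    if s > 0:
        X[i, s - 1] += 1.0   # +1 in the period the property was sold
    if b > 0:
        X[i, b - 1] -= 1.0   # -1 in the period it was bought

# Least squares recovers the log index level for each non-base period.
log_index, *_ = np.linalg.lstsq(X, y, rcond=None)
index = np.exp(np.concatenate([[0.0], log_index]))  # index levels, base period = 1
```

With many transactions per period this same design scales to the daily frequency the paper works at; the hard part the authors address is the noise that so fine a sampling introduces.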
  9. By: Matthias Kruttli; Andrew J. Patton; Tarun Ramadorai
    Abstract: While there has been enormous interest in hedge funds from academics, prospective and current investors, and policymakers, rigorous empirical evidence of their impact on asset markets has been difficult to find. We construct a simple measure of the aggregate illiquidity of hedge fund portfolios, and show that it has strong in- and out-of-sample forecasting power for 72 portfolios of international equities, corporate bonds, and currencies over the 1994 to 2011 period. The forecasting ability of hedge fund illiquidity for asset returns is in most cases greater than, and provides independent information relative to, well-known predictive variables for each of these asset classes. We construct a simple equilibrium model to rationalize our findings, and empirically verify auxiliary predictions of the model.
    Keywords: hedge funds, return predictability, liquidity, equities, bonds, currencies
    JEL: G11 G12 G14 G23
    Date: 2013
  10. By: Trojan, Sebastian
    Abstract: A very general stochastic volatility (SV) model specification with leverage, heavy tails, skew and switching regimes is proposed, using realized volatility (RV) as an auxiliary time series to improve inference on latent volatility. Asymmetry in the observation error is modeled by the Generalized Hyperbolic skew Student-t distribution, whose heavy and light tails enable modeling of substantial skewness. The information content of the range and of implied volatility using the VIX index is also investigated. Up to four regimes are identified from S&P 500 index data using RV as an additional time series. The resulting number of regimes and their dynamics differ depending on the auxiliary volatility proxy and are investigated in-sample for the financial crash period 2008/09. An out-of-sample study comparing the predictive ability of various model variants for a calm and a volatile period yields insights about the gains in forecasting performance that can be expected by incorporating different volatility proxies into the model. Results indicate that including RV pays off mostly in more volatile market conditions, whereas in calmer environments SV specifications using no auxiliary series appear to be the models of choice. Results for the VIX as a measure of implied volatility point in a similar direction. The range as volatility proxy provides a superior in-sample fit, but its predictive performance is found to be weak.
    Keywords: Stochastic volatility, realized volatility, non-Gaussian and nonlinear state space model, Generalized Hyperbolic skew Student-t distribution, mixing distribution, regime switching, Markov chain Monte Carlo, particle filter
    JEL: C11 C15 C32 C58
    Date: 2013–12
  11. By: Torben G. Andersen (Northwestern University and CREATES); Oleg Bondarenko (University of Illinois at Chicago)
    Abstract: The VPIN, or Volume-synchronized Probability of INformed trading, metric is introduced by Easley, Lopez de Prado and O'Hara (ELO) as a real-time indicator of order flow toxicity. They find the measure useful in predicting return volatility and conclude it may help signal impending market turmoil. The VPIN metric involves decomposing volume into active buys and sells. We use the best-bid-offer (BBO) files from the CME Group to construct (near) perfect trade classification measures for the E-mini S&P 500 futures contract. We investigate the accuracy of the ELO Bulk Volume Classification (BVC) scheme and find it inferior to a standard tick rule based on individual transactions. Moreover, when VPIN is constructed from accurate classification, it behaves in a diametrically opposite way to BVC-VPIN. We also find the latter to have forecast power for short-term volatility solely because it generates systematic classification errors that are correlated with trading volume and return volatility. When controlling for trading intensity and volatility, the BVC-VPIN measure has no incremental predictive power for future volatility. We conclude that VPIN is not suitable for measuring order flow imbalances.
    Keywords: VPIN, Accuracy of Trade Classification, Order Flow Toxicity, Order Imbalance, Volatility Forecasting
    JEL: G01 G12 G14 G17 C58
    Date: 2013–11–25
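The "standard tick rule" the authors find superior to bulk volume classification signs each trade by its price change, with zero ticks inheriting the previous sign. A minimal sketch on toy prices (not the CME BBO data):

```python
def tick_rule(prices):
    """Classify each trade as buy (+1) or sell (-1) from the price change;
    a zero tick inherits the previous classification (first trade defaults to +1)."""
    signs, last = [], 1
    for i, p in enumerate(prices):
        if i > 0 and p > prices[i - 1]:
            last = 1
        elif i > 0 and p < prices[i - 1]:
            last = -1
        signs.append(last)
    return signs

trades = [100.00, 100.25, 100.25, 100.00, 99.75, 99.75, 100.00]
signs = tick_rule(trades)

# Order imbalance over the window, the raw ingredient of a VPIN-style metric.
imbalance = abs(sum(signs)) / len(signs)
```

VPIN then aggregates such buy/sell imbalances over equal-volume buckets; the paper's point is that the classification step, not the aggregation, drives the metric's apparent forecast power.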
  12. By: Alexander Porshnev (National Research University Higher School of Economics, Social Science department, N.Novgorod, Russia); Ilya Redkin (National Research University Higher School of Economics, Business Informatics faculty, N.Novgorod, Russia); Alexey Shevchenko (National Research University Higher School of Economics, Business Informatics faculty, N.Novgorod, Russia)
    Abstract: In our paper, we analyze the possibility of improving the prediction of stock market indicators by conducting a sentiment analysis of Twitter posts. We use a dictionary-based approach for sentiment analysis, which allows us to distinguish eight basic emotions in the tweets of users. We compare the results of applying the Support Vector Machine algorithm trained on three sets of data: historical data, historical and “Worry”, “Fear”, “Hope” words count data, historical data and data on the present eight categories of emotions. Our results suggest that the Twitter sentiment analysis data provides additional information and improves prediction as compared to a model based solely on information on previous shifts in stock indicators.
    Keywords: stock market; forecast; Twitter; mood; psychological states; Support Vector Machine; machine learning
    JEL: G17 G02
    Date: 2013
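The dictionary-based step can be sketched as counting lexicon hits per emotion and using the counts as extra model features. The word lists below are hypothetical stand-ins, not the authors' dictionary, and the tokenization is deliberately crude:

```python
# Toy emotion lexicon (hypothetical word lists, not the authors' dictionary).
lexicon = {
    "fear": {"afraid", "panic", "crash"},
    "hope": {"hope", "rally", "optimistic"},
    "worry": {"worry", "uncertain", "risk"},
}

def emotion_features(tweets):
    """Count lexicon hits per emotion across a batch of tweets; in the paper's
    setup such counts are appended to lagged index data as SVM input features."""
    counts = {emotion: 0 for emotion in lexicon}
    for tweet in tweets:
        for raw in tweet.lower().split():
            word = raw.strip(".,!?")           # strip trailing punctuation
            for emotion, words in lexicon.items():
                if word in words:
                    counts[emotion] += 1
    return counts

tweets = ["Markets rally on hope of recovery",
          "Afraid of another crash, markets in panic"]
features = emotion_features(tweets)
```

Each day's counts become one row of the feature matrix, concatenated with the historical price features before training the SVM classifier.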
  13. By: Matthew Bell; Paul Rodway (The Treasury)
    Abstract: This paper examines fiscal projections based on three consecutive budget forecasts (2009-2011) and provides cautionary insights as to how these projections only a year or two apart can lead to dramatic differences in projected debt levels in the future. Projections of net debt from a Budget 2011 forecast base are much lower, by 2050, than those for Budget 2009. This is largely due to the Budget 2011 forecast base having lower expenditure and higher revenue than the forecast base of the Budget 2009 projections. The paper also underscores how short-term policy changes, if sustained, can make a big difference over the long term and how, over time, the more fundamental structural factors such as demographics can prove to be more durable in influencing fiscal sustainability. Finally, it argues that, even though the level of debt-to-GDP shifts by mid-century, the messages we take from these projections remain the same: spending and possibly tax policies need to change, if we are to avoid passing debt that generates little social return onto our descendants, and early changes alleviate the need for more drastic revisions in the future.
    Keywords: Long-term fiscal projections; Fiscal sustainability
    JEL: H69 H51 H55
    Date: 2013–12
  14. By: Andreasen, Martin (Aarhus University); Meldrum, Andrew (Bank of England)
    Abstract: This paper shows how to use adaptive particle filtering and Markov chain Monte Carlo methods to estimate quadratic term structure models (QTSMs) by likelihood inference. The procedure is applied to a quadratic model for the United States during the recent financial crisis. We find that this model provides a better statistical description of the data than a Gaussian affine term structure model. In addition, QTSMs account perfectly for the lower bound whereas Gaussian affine models frequently imply forecast distributions with negative interest rates. Such predictions appear during the recent financial crisis but also prior to the crisis.
    Keywords: Adaptive particle filtering; Bayesian inference; Higher-order moments; PMCMC; Quadratic term structure models
    JEL: C01 C58 G12
    Date: 2013–12–20
  15. By: Pablo Pincheira
    Abstract: In this paper we explore the role that exchange rate interventions may play in determining inflation expectations in Chile. To that end, we consider a set of nine deciles of inflation expectations coming from the survey of professional forecasters carried out by the Central Bank of Chile. We consider two episodes of preannounced central bank interventions during the sample period 2007-2012.
    Keywords: Exchange rates, inflation expectations, inflation targeting, interventions
    Date: 2013–09
  16. By: Adama Bah
    Abstract: Proxy-means testing (PMT) is a method used to assess household or individual welfare levels based on a set of observable indicators. The accuracy, and therefore usefulness, of PMT relies on the selection of indicators that produce accurate predictions of household welfare. In this paper I propose a method to identify indicators that are robustly and strongly correlated with household welfare, measured by per capita consumption. From an initial set of 340 candidate variables drawn from the Indonesian Family Life Survey, I identify the variables that contribute most significantly to model predictive performance and that are therefore desirable to include in a PMT formula. These variables span the categories of household private asset holdings, access to basic domestic energy, education level, sanitation and housing. A comparison of the predictive performance of PMT formulas including 10, 20 and 30 of the best predictors of welfare leads to recommending formulas with 20 predictors. Such parsimonious models have similar predictive performance to the PMT formulas currently used in Indonesia, although the latter are based on models of 32 variables on average.
    Keywords: Proxy-Means Testing, Variable/Model Selection, Targeting, Poverty, social protection
    JEL: I38 C52
    Date: 2013
  17. By: Torben G. Andersen (Northwestern University and CREATES); Oleg Bondarenko (University of Illinois at Chicago)
    Abstract: In Andersen and Bondarenko (2014), using tick data for S&P 500 futures, we establish that the VPIN metric of Easley, Lopez de Prado, and O'Hara (ELO), by construction, will be correlated with trading volume and return volatility (innovations). Whether VPIN is more strongly correlated with volume or volatility depends on the exact implementation. Hence, it is crucial for the interpretation of VPIN as a harbinger of market turbulence or as a predictor of short-term volatility to control for current volume and volatility. Doing so, we find no evidence of incremental predictive power of VPIN for future volatility. Likewise, VPIN does not attain unusual extremes prior to the flash crash. Moreover, the properties of VPIN are strongly dependent on the underlying trade classification. In particular, using more standard classification techniques, VPIN behaves in the exact opposite manner of what is portrayed in ELO (2011a, 2012a). At a minimum, ELO should rationalize this systematic reversal as the classification becomes more closely aligned with individual transactions. ELO (2014) dispute our findings. This note reviews the econometric methodology and the market microstructure arguments behind our conclusions and responds to a number of inaccurate assertions. In addition, we summarize fresh empirical evidence that corroborates the hypothesis that VPIN is largely driven, and significantly distorted, by the volume and volatility innovations. Furthermore, we note there is compelling new evidence that transaction-based classification schemes are more accurate than the bulk volume strategies advocated by ELO for constructing VPIN. In fact, using perfect classification leads to diametrically opposite results relative to ELO (2011a, 2012a).
    Keywords: VPIN, PIN, High-Frequency Trading, Order Flow Toxicity, Order Imbalance, Flash Crash, VIX, Volatility Forecasting
    JEL: G01 G14 G17
    Date: 2013–04–08
  18. By: Giovanni Caggiano; Calice Pietro (African Development Bank); Leone Leonida
    Abstract: This paper estimates an early warning system for predicting systemic banking crises in a sample of low income countries in Sub-Saharan Africa. Since the average duration of crises in this sample of countries is longer than one year, the predictive performance of standard binomial logit models is likely to be hampered by the so-called crisis duration bias. The bias arises from the decision to either treat crisis years after the onset of a crisis as non-crisis years or remove them altogether from the model. To overcome this potential drawback, we propose a multinomial logit approach, which is shown to improve the predictive power compared to the binomial logit model. Our results suggest that crisis events in low income countries are associated with low economic growth, drying up of banking system liquidity and widening of foreign exchange net open positions.
    Keywords: Banking crises, Systemic risk, Early warning systems, Low income countries, Sub-Saharan Africa, Logit estimation, Financial regulation
    JEL: C52 G21 G28 E58
    Date: 2013–12–19
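The multinomial logit's prediction step maps a country-year's indicators into probabilities over the crisis states via a softmax over linear scores. A minimal sketch; the three-state setup, the feature list and every coefficient below are hypothetical, not the paper's estimates:

```python
import math

def multinomial_probs(x, betas):
    """Outcome probabilities in a multinomial logit with the first
    outcome (tranquil) as the base category: P(k) proportional to exp(x.beta_k)."""
    scores = [0.0] + [sum(a * b for a, b in zip(x, beta)) for beta in betas]
    z = [math.exp(s) for s in scores]
    total = sum(z)
    return [v / total for v in z]

# Hypothetical coefficients for the two non-base outcomes (crisis onset,
# ongoing crisis) on features [GDP growth, liquidity ratio, FX open position].
betas = [[-0.8, -1.2, 0.9],   # crisis onset
         [-0.3, -0.5, 0.4]]   # ongoing crisis
probs = multinomial_probs([-1.0, -0.5, 1.0], betas)  # weak growth, thin liquidity
```

Modeling "ongoing crisis" as its own outcome rather than dropping or relabeling those years is exactly how the multinomial specification sidesteps the crisis duration bias.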
  19. By: Fernandes, Marcelo; Medeiros, Marcelo C.; Scharth, Marcel
    Abstract: This paper performs a thorough statistical examination of the time-series properties of the daily market volatility index (VIX) from the Chicago Board Options Exchange (CBOE). The motivation lies not only in the widespread consensus that the VIX is a barometer of overall market sentiment as to what concerns investors' risk appetite, but also in the fact that many trading strategies rely on the VIX index for hedging and speculative purposes. Preliminary analysis suggests that the VIX index displays long-range dependence. This is well in line with the strong empirical evidence in the literature supporting long memory in both options-implied and realized variances. We thus resort to both parametric and semiparametric heterogeneous autoregressive (HAR) processes for modeling and forecasting purposes. Our main findings are as follows. First, we confirm the evidence in the literature that there is a negative relationship between the VIX index and the S&P 500 index return, as well as a positive contemporaneous link with the volume of the S&P 500 index. Second, the term spread has a slightly negative long-run impact on the VIX index, when possible multicollinearity and endogeneity are controlled for. Finally, we cannot reject the linearity of the above relationships, either in sample or out of sample. As for the latter, we show that it is hard to beat the pure HAR process because of the very persistent nature of the VIX index.
    Date: 2013–12–09
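The HAR process regresses today's value on yesterday's value and its trailing weekly and monthly averages, capturing long-memory-like persistence with three simple regressors. A minimal sketch on a simulated persistent series (an illustrative stand-in for the VIX, not the CBOE data):

```python
import numpy as np

def har_design(v, weekly=5, monthly=22):
    """Build the HAR design matrix: yesterday's value plus its trailing
    weekly and monthly averages, used to predict today's value."""
    y, X = [], []
    for t in range(monthly, len(v)):
        daily = v[t - 1]
        week = np.mean(v[t - weekly:t])
        month = np.mean(v[t - monthly:t])
        X.append([1.0, daily, week, month])
        y.append(v[t])
    return np.array(X), np.array(y)

# Persistent toy series standing in for the VIX level.
rng = np.random.default_rng(1)
v = np.empty(300)
v[0] = 20.0
for t in range(1, 300):
    v[t] = 0.9 * v[t - 1] + 2.0 + 0.5 * rng.standard_normal()

X, y = har_design(v)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)       # OLS fit of the HAR model
forecast = X[-1] @ coef  # fitted value for the last observation
```

The cascade of horizons (day, week, month) is what lets this linear model mimic the slowly decaying autocorrelations the paper documents for the VIX.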
  20. By: Schröder, Anna Louise; Fryzlewicz, Piotr
    Abstract: Low-frequency financial returns can be modelled as centered around piecewise-constant trend functions which change at certain points in time. We propose a new stochastic time series framework which captures this feature. The main ingredient of our model is a hierarchically-ordered oscillatory basis of simple piecewise-constant functions. It differs from the Fourier-like bases traditionally used in time series analysis in that it is determined by change-points, and hence needs to be estimated from the data before it can be used. The resulting model enables easy simulation and provides interpretable decomposition of nonstationarity into short- and long-term components. The model permits consistent estimation of the multiscale change-point-induced basis via binary segmentation, which results in a variable-span moving-average estimator of the current trend, and allows for short-term forecasting of the average return.
    Keywords: Financial time series, Adaptive trend estimation, Change-point detection, Binary segmentation, Unbalanced Haar wavelets, Frequency-domain modelling
    JEL: C1 C13 C22 C51 C58 G17
    Date: 2013
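The change-point machinery behind the basis can be sketched via plain binary segmentation: split at the point maximizing a CUSUM contrast, then recurse on each half. A toy, noiseless illustration (the paper's Unbalanced Haar construction and threshold theory are richer than this):

```python
def binary_segmentation(x, threshold, start=0):
    """Recursively split the series at the point maximizing the CUSUM
    statistic, stopping when the statistic drops below the threshold."""
    n = len(x)
    if n < 2:
        return []
    best_stat, best_k = 0.0, None
    total, left = sum(x), 0.0
    for k in range(1, n):
        left += x[k - 1]
        right = total - left
        # contrast between the means of x[:k] and x[k:]
        stat = abs(left / k - right / (n - k)) * (k * (n - k) / n) ** 0.5
        if stat > best_stat:
            best_stat, best_k = stat, k
    if best_stat < threshold:
        return []
    return (binary_segmentation(x[:best_k], threshold, start)
            + [start + best_k]
            + binary_segmentation(x[best_k:], threshold, start + best_k))

# Piecewise-constant "trend" with mean shifts at positions 10 and 20.
series = [0.0] * 10 + [3.0] * 10 + [1.0] * 10
cps = sorted(binary_segmentation(series, threshold=2.0))
```

Each detected change-point induces one piecewise-constant basis function; the estimated trend is then a variable-span moving average between consecutive change-points.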
  21. By: Haavio, Markus (Bank of Finland Research); Mendicino , Caterina (Economic Research Department, Bank of Portugal); Punzi , Maria Teresa (School of Economics, University of Nottingham)
    Abstract: This article empirically studies the linkages between financial variable downturns and economic recessions. We present evidence that real asset prices tend to lead real cycles, while loan-to-GDP and loan-to-deposit ratios lag them. Using a probit analysis, we document that downturns in real asset prices, particularly real house prices, are useful leading indicators of economic recessions.
    Keywords: macro-financial linkages; turning point analysis; probit models
    JEL: C53 E32 E37 G17
    Date: 2013–12–19
  22. By: Peter Christoffersen (University of Toronto); Vihang R. Errunza (McGill University); Kris Jacobs (University of Houston); Xisong Jin (University of Luxembourg)
    Abstract: Forecasting the evolution of security co-movements is critical for asset pricing and portfolio allocation. Hence, we investigate patterns and trends in correlations over time using weekly returns for developed markets (DMs) and emerging markets (EMs) during the period 1973-2012. We show that it is possible to model co-movements for many countries simultaneously using BEKK, DCC, and DECO models. Empirically, we find that correlations have significantly trended upward for both DMs and EMs. Based on a time-varying measure of diversification benefit, we find that it is not possible in a long-only portfolio to circumvent the increasing correlations by adjusting the portfolio weights over time. However, we do find some evidence that adding EMs to a DM-only portfolio increases diversification benefits.
    Keywords: Asset pricing, asset allocation, dynamic conditional correlation (DCC), dynamic equicorrelation (DECO)
    JEL: G12
    Date: 2013–08–07
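The DECO idea of summarizing a whole correlation matrix by one common equicorrelation can be sketched as the average of the off-diagonal pairwise correlations (a static version; the paper's models make this quantity time-varying). The factor-driven toy returns below are illustrative, not the 1973-2012 market data:

```python
import numpy as np

def equicorrelation(returns):
    """Average pairwise correlation across assets: the single number a
    DECO-style model tracks instead of the full correlation matrix."""
    corr = np.corrcoef(returns, rowvar=False)
    n = corr.shape[0]
    off_diag = corr[np.triu_indices(n, k=1)]
    return off_diag.mean()

# Toy returns: one common factor drives co-movement across 4 markets.
rng = np.random.default_rng(2)
common = rng.standard_normal(500)
returns = 0.7 * common[:, None] + 0.7 * rng.standard_normal((500, 4))
rho = equicorrelation(returns)
```

Computing this on rolling windows gives a crude time-varying equicorrelation, and an upward drift in it is precisely the shrinking diversification benefit the paper documents.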

This nep-for issue is ©2013 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.