nep-for New Economics Papers
on Forecasting
Issue of 2013‒06‒16
seventeen papers chosen by
Rob J Hyndman
Monash University

  1. Forecasting using a large number of predictors: Bayesian model averaging versus principal components regression By Rachida Ouysse
  2. What Central Bankers Need to Know about Forecasting Oil Prices By Christiane Baumeister; Lutz Kilian
  3. Modeling and Forecasting the Volatility of Energy Forward Returns - Evidence from the Nordic Power Exchange By Asger Lunde; Kasper V. Olesen
  4. Forecasting Day-Ahead Electricity Prices: Utilizing Hourly Prices By Eran Raviv; Kees E. Bouwman; Dick van Dijk
  5. Nonlinear Forecasting With Many Predictors Using Kernel Ridge Regression By Peter Exterkate; Patrick J.F. Groenen; Christiaan Heij; Dick van Dijk
  6. GFC-Robust Risk Management under the Basel Accord using Extreme Value Methodologies By Juan-Angel Jimenez-Martin; Michael McAleer; Teodosio Perez Amaral; Paulo Araujo Santos
  7. Do Agents Learn by Least Squares? The Evidence Provided by Changes in Monetary Policy By Sagarika Mishra
  8. Does the choice of estimator matter when forecasting returns? By Joakim Westerlund; Paresh K Narayan
  9. Interest Rates with Long Memory: A Generalized Affine Term-Structure Model By Daniela Osterrieder
  10. Modelling and Simulation: An Overview By Michael McAleer; Felix Chan; Les Oxley
  11. Forecasting Value-at-Risk using Block Structure Multivariate Stochastic Volatility Models By Manabu Asai; Massimiliano Caporin; Michael McAleer
  12. The Bank of England's forecasting platform: COMPASS, MAPS, EASE and the suite of models By Burgess, Stephen; Fernandez-Corugedo, Emilio; Groth, Charlotta; Harrison, Richard; Monti, Francesca; Theodoridis, Konstantinos; Waldron, Matt
  13. Rules of Thumb for Banking Crises in Emerging Markets By Paolo Manasse; Roberto Savona; Marika Vezzoli
  14. Projecting Long-Term Primary Energy Consumption By Zsuzsanna Csereklyei; Stefan Humer
  15. Vote Self-Prediction Hardly Predicts Who Will Vote, and Is (Misleadingly) Unbiased By Rogers, Todd; Aida, Masa
  16. Disentangling Continuous Volatility from Jumps in Long-Run Risk-Return Relationships By Éric Jacquier; Cédric Okou
  17. Are Sunspots Learnable? An Experimental Investigation in a Simple General-Equilibrium Model By Jasmina Arifovic; George Evans; Olena Kostyshyna

  1. By: Rachida Ouysse (School of Economics, the University of New South Wales)
    Abstract: We study the performance of Bayesian model averaging (BMA) as a forecasting method for a large panel of time series and compare its performance to principal components regression (PCR). We show empirically that these forecasts are highly correlated, implying similar mean-squared forecast errors. Applied to forecasting industrial production and inflation in the United States, we find that the set of variables deemed informative changes over time, which suggests temporal instability due to collinearity and to the sensitivity of the Bayesian variable selection method to minor perturbations of the data. In terms of mean-squared forecast error, principal components based forecasts have a slight marginal advantage over BMA. However, this marginal edge of PCR in the average global out-of-sample performance hides important changes in the local forecasting power of the two approaches. An analysis of the Theil index indicates that the loss of performance of PCR is due mainly to its exuberant biases in matching the mean of the two series, especially the inflation series. The BMA forecasts match the first and second moments of the GDP and inflation series very well, with practically zero bias and very low volatility. The fluctuation statistic that measures relative local performance shows that BMA performed consistently better than PCR and the naive benchmark (a random walk) over the period prior to 1985. Thereafter, the performance of both BMA and PCR was relatively modest compared to the naive benchmark.
    Date: 2013–04
    URL: http://d.repec.org/n?u=RePEc:swe:wpaper:2013-04&r=for
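    A minimal sketch of the principal components regression side of this comparison, using simulated placeholder data (dimensions, variable names, and the number of retained factors are illustrative assumptions, not taken from the paper):

      import numpy as np

      rng = np.random.default_rng(0)
      T, N, k = 200, 80, 5                 # T observations, N predictors, k factors kept

      X = rng.standard_normal((T, N))
      y = X[:, :3].sum(axis=1) + rng.standard_normal(T)   # target driven by a few series

      # PCR: extract the first k principal components, then regress y on them
      Xc = X - X.mean(axis=0)
      _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
      F = Xc @ Vt[:k].T                                   # factor estimates
      beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(T), F]), y, rcond=None)

      # Forecast given a new cross-section of predictors
      x_new = rng.standard_normal(N)
      f_new = (x_new - X.mean(axis=0)) @ Vt[:k].T
      y_hat = beta[0] + f_new @ beta[1:]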
  2. By: Christiane Baumeister; Lutz Kilian
    Abstract: Forecasts of the quarterly real price of oil are routinely used by international organizations and central banks worldwide in assessing the global and domestic economic outlook, yet little is known about how best to generate such forecasts. Our analysis breaks new ground in several dimensions. First, we address a number of econometric and data issues specific to real-time forecasts of quarterly oil prices. Second, we develop real-time forecasting models not only for U.S. benchmarks such as West Texas Intermediate crude oil, but we also develop forecasting models for the price of Brent crude oil, which has become increasingly accepted as the best measure of the global price of oil in recent years. Third, we design for the first time methods for forecasting the real price of oil in foreign consumption units rather than U.S. consumption units, taking the point of view of forecasters outside the United States. In addition, we investigate the costs and benefits of allowing for time variation in vector autoregressive (VAR) model parameters and of constructing forecast combinations. We conclude that quarterly forecasts of the real price of oil from suitably designed VAR models estimated on monthly data generate the most accurate forecasts among a wide range of methods including forecasts based on oil futures prices, no-change forecasts and forecasts based on regression models estimated on quarterly data.
    Keywords: Econometric and statistical methods; International topics
    JEL: Q43 C53 E32
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:bca:bocawp:13-15&r=for
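    A hedged sketch of the VAR forecasting step the abstract describes, with placeholder monthly series and a fixed lag order (the authors' variable set, lag choice, and time-varying-parameter extensions are not reproduced here):

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.api import VAR

      rng = np.random.default_rng(1)
      data = pd.DataFrame(rng.standard_normal((240, 3)),
                          columns=["d_log_rpo", "d_log_prod", "rea_index"])

      res = VAR(data).fit(4)                        # fixed lag order for illustration

      # Iterated monthly forecasts, cumulated to approximate a quarterly forecast
      fc = res.forecast(data.values[-res.k_ar:], steps=3)
      quarterly_oil_growth = fc[:, 0].sum()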
  3. By: Asger Lunde (Aarhus University and CREATES); Kasper V. Olesen (Aarhus University and CREATES)
    Abstract: We explore the structure of transaction records from NASDAQ OMX Commodities Europe back to 2006 and analyze base load forwards with the Nordic system price on electric power as reference. Following a discussion of the appropriate rollover scheme, we incorporate selected realized measures of volatility in a Realized EGARCH framework for the joint modeling of returns and realized measures of volatility. Conditional variances are shown to vary over time, which stresses the importance of portfolio reallocation for risk management and other purposes. We document gains from utilizing data at higher frequencies by comparing to ordinary EGARCH models that are nested in the Realized EGARCH. We obtain improved fit, in-sample as well as out-of-sample: in-sample in terms of improved log-likelihood, and out-of-sample in terms of 1-, 5-, and 20-step-ahead regular and bootstrapped rolling-window forecasts. The Realized EGARCH forecasts are statistically superior to ordinary EGARCH forecasts.
    Keywords: Financial Volatility, Realized GARCH, High Frequency Data, Electricity, Power, Forecasting, Realized Variance, Realized Kernel, Model Confidence Set
    JEL: C10 C22 C53 C58 C80
    Date: 2013–05–24
    URL: http://d.repec.org/n?u=RePEc:aah:create:2013-19&r=for
  4. By: Eran Raviv (Erasmus University Rotterdam); Kees E. Bouwman (Erasmus University Rotterdam); Dick van Dijk (Erasmus University Rotterdam)
    Abstract: The daily average price of electricity represents the price of electricity to be delivered over the full next day and serves as a key reference price in the electricity market. It is an aggregate that equals the average of hourly prices for delivery during each of the 24 individual hours. This paper demonstrates that the disaggregated hourly prices contain useful predictive information for the daily average price. Multivariate models for the full panel of hourly prices significantly outperform univariate models of the daily average price, with reductions in Root Mean Squared Error of up to 16%. Substantial care is required in order to achieve these forecast improvements. Rich multivariate models are needed to exploit the relations between different hourly prices, but the risk of overfitting must be mitigated by using dimension reduction techniques, shrinkage and forecast combinations.
    Keywords: Electricity market, Forecasting, Hourly prices, Dimension reduction, Shrinkage, Forecast combinations
    JEL: C53 C32 Q47
    Date: 2013–05–17
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20130068&r=for
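    A small sketch of the disaggregated approach, forecasting all 24 hourly prices with dimension reduction and shrinkage and then averaging (the data, factor count, and ridge penalty are placeholders; the paper's richer model set is not reproduced):

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import Ridge

      rng = np.random.default_rng(2)
      days, H, k = 400, 24, 4
      P = 40.0 + 0.1 * rng.standard_normal((days, H)).cumsum(axis=0)  # toy hourly panel

      # Compress day t-1 into k factors, predict all 24 hours of day t with ridge
      pca = PCA(n_components=k).fit(P[:-1])
      F = pca.transform(P[:-1])
      ridge = Ridge(alpha=1.0).fit(F, P[1:])        # multi-output: 24 hourly targets

      hourly_fc = ridge.predict(pca.transform(P[-1:]))
      daily_avg_fc = hourly_fc.mean()               # forecast of the daily average price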
  5. By: Peter Exterkate (Aarhus University and CREATES); Patrick J.F. Groenen (Econometric Institute, Erasmus University Rotterdam); Christiaan Heij (Econometric Institute, Erasmus University Rotterdam); Dick van Dijk (Econometric Institute, Erasmus University Rotterdam)
    Abstract: This paper puts forward kernel ridge regression as an approach for forecasting with many predictors that are related nonlinearly to the target variable. In kernel ridge regression, the observed predictor variables are mapped nonlinearly into a high-dimensional space, where estimation of the predictive regression model is based on a shrinkage estimator to avoid overfitting. We extend the kernel ridge regression methodology to enable its use for economic time-series forecasting, by including lags of the dependent variable or other individual variables as predictors, as typically desired in macroeconomic and financial applications. Monte Carlo simulations as well as an empirical application to various key measures of real economic activity confirm that kernel ridge regression can produce more accurate forecasts than traditional linear and nonlinear methods for dealing with many predictors based on principal component regression.
    Keywords: High dimensionality, nonlinear forecasting, ridge regression, kernel methods.
    JEL: C53 C63 E27
    Date: 2013–05–30
    URL: http://d.repec.org/n?u=RePEc:aah:create:2013-16&r=for
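    A minimal kernel ridge regression sketch in the spirit of the paper, with a Gaussian kernel and a lag of the dependent variable added to the predictor set (data, kernel, and tuning parameters are illustrative assumptions):

      import numpy as np
      from sklearn.kernel_ridge import KernelRidge

      rng = np.random.default_rng(3)
      T, N = 300, 50
      X = rng.standard_normal((T, N))
      y = np.tanh(X[:, 0]) + 0.5 * np.sin(X[:, 1]) + 0.3 * rng.standard_normal(T)

      # Predictors at t (including lagged y) map to the target at t+1
      Z = np.column_stack([X[:-1], y[:-1]])
      target = y[1:]

      krr = KernelRidge(kernel="rbf", alpha=1.0, gamma=1.0 / Z.shape[1])
      krr.fit(Z[:-1], target[:-1])
      y_hat = krr.predict(Z[-1:])                   # one-step-ahead nonlinear forecast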
  6. By: Juan-Angel Jimenez-Martin (Complutense University of Madrid, Spain); Michael McAleer (Complutense University of Madrid, Spain, Erasmus School of Economics, Erasmus University Rotterdam, The Netherlands, and Kyoto University, Japan); Teodosio Perez Amaral (Complutense University of Madrid, Spain); Paulo Araujo Santos (University of Lisbon, Portugal)
    Abstract: In this paper we provide further evidence on the suitability of the median of the point VaR forecasts of a set of models as a GFC-robust strategy by using an additional set of new extreme value forecasting models and by extending the sample period for comparison. These extreme value models include DPOT and Conditional EVT. Such models might be expected to be useful in explaining financial data, especially in the presence of extreme shocks that arise during a GFC. Our empirical results confirm that the median remains GFC-robust even in the presence of these new extreme value models. This is illustrated by using the S&P500 index before, during and after the 2008-09 GFC. We investigate the performance of a variety of single and combined VaR forecasts in terms of daily capital requirements and violation penalties under the Basel II Accord, as well as other criteria, including several tests for independence of the violations. The strategy based on the median, or more generally, on combined forecasts of single models, is straightforward to incorporate into existing computer software packages that are used by banks and other financial institutions.
    Keywords: Value-at-Risk (VaR), DPOT, daily capital charges, robust forecasts, violation penalties, optimizing strategy, aggressive risk management, conservative risk management, Basel, global financial crisis
    JEL: G32 G11 G17 C53 C22
    Date: 2013–05–21
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20130070&r=for
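    The median combination strategy itself is simple to implement; a sketch with illustrative placeholder numbers (not the paper's estimates):

      import numpy as np

      # Point 1%-VaR forecasts for one day from several candidate models
      var_forecasts = {
          "GARCH": -2.91, "EGARCH": -3.05, "RiskMetrics": -2.78,
          "DPOT": -3.40, "ConditionalEVT": -3.22,
      }

      # GFC-robust strategy: report the median of the single-model forecasts
      median_var = float(np.median(list(var_forecasts.values())))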
  7. By: Sagarika Mishra (Deakin University)
    Abstract: Understanding how agents formulate their expectations about Fed behavior is critical for the design of monetary policy. In response to a lack of empirical support for a strict rationality assumption, monetary theorists have recently introduced learning by agents into their models. Although a learning assumption is now common, there is practically no empirical research on whether agents actually learn. In this paper we test whether the forecast of the three-month T-bill rate in the Survey of Professional Forecasters (SPF) is consistent with least squares learning when there are discrete shifts in monetary policy. Discrete shifts in policy introduce temporary biases into forecasts while agents process data and learn about the policy shift. We first derive the mean, variance and autocovariances of the forecast errors from a recursive least squares learning algorithm when there are breaks in the structure of the model. We then apply the Bai and Perron (1998) test for structural change to a Taylor rule and a forecasting model for the three-month T-bill rate in order to identify changes in monetary policy. Having identified the policy regimes, we then estimate the implied biases in the interest rate forecasts within each regime. We find that when the forecast errors from the SPF are corrected for the biases due to shifts in policy, the forecasts are consistent with least squares learning.
    Keywords: Survey forecasts, Least Squares Learning
    JEL: D83 D84
    URL: http://d.repec.org/n?u=RePEc:dkn:ecomet:fe_2012_09&r=for
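    A sketch of the recursive least squares learning scheme underlying the analysis, applied to data with a structural break (the break point, model, and gain sequence are illustrative assumptions):

      import numpy as np

      def rls_learning(X, y):
          """Decreasing-gain recursive least squares: beliefs phi are updated
          as each observation arrives; forecasts use the prior beliefs."""
          T, k = X.shape
          phi, R = np.zeros(k), np.eye(k)
          forecasts = np.empty(T)
          for t in range(T):
              forecasts[t] = X[t] @ phi
              g = 1.0 / (t + 2)                # offset keeps R nonsingular early on
              R = R + g * (np.outer(X[t], X[t]) - R)
              phi = phi + g * np.linalg.solve(R, X[t] * (y[t] - X[t] @ phi))
          return phi, forecasts

      # Toy data-generating process with a policy-style break at t = 100
      rng = np.random.default_rng(4)
      X = np.column_stack([np.ones(200), rng.standard_normal(200)])
      beta = np.where(np.arange(200)[:, None] < 100, [1.0, 0.5], [2.0, -0.5])
      y = (X * beta).sum(axis=1) + 0.1 * rng.standard_normal(200)
      phi_hat, fc = rls_learning(X, y)  # forecast errors are temporarily biased after the break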
  8. By: Joakim Westerlund (Deakin University); Paresh K Narayan (Deakin University)
    Abstract: While the literature concerned with the predictability of stock returns is huge, surprisingly little is known about the role of the choice of estimator of the predictive regression. Ideally, the choice of estimator should be rooted in the salient features of the data. In the case of predictive regressions for returns there are at least three such features: (i) returns are heteroskedastic, (ii) predictors are persistent, and (iii) regression errors are correlated with predictor innovations. In this paper we examine whether accounting for these features in the estimation process has any bearing on our ability to forecast future returns. The results suggest that it does.
    Keywords: Predictive regression; Stock return predictability; Heteroskedasticity; Predictor endogeneity
    JEL: C22 C23 G1 G12
    Date: 2012–05–11
    URL: http://d.repec.org/n?u=RePEc:dkn:ecomet:fe_2012_01&r=for
  9. By: Daniela Osterrieder (Aarhus University and CREATES)
    Abstract: We propose a model for the term structure of interest rates that is a generalization of the discrete-time, Gaussian, affine yield-curve model. Compared to standard affine models, our model allows for general linear dynamics in the vector of state variables. In an application to real yields of U.S. government bonds, we model the time series of the state vector by means of a co-fractional vector autoregressive model. The implication is that yields of all maturities exhibit nonstationary, yet mean-reverting, long-memory behavior of the order d ≈ 0.87. The long-run dynamics of the state vector are driven by a level, a slope, and a curvature factor that arise naturally from the co-fractional modeling framework. We show that implied yields match the level and the variability of yields well over time. Studying the out-of-sample forecasting accuracy of our model, we find that it produces good yield forecasts that outperform several benchmark models, especially at long forecasting horizons.
    Keywords: term structure of interest rates, fractional integration and cointegration, affine models
    JEL: G12 C32 C58
    Date: 2013–05–30
    URL: http://d.repec.org/n?u=RePEc:aah:create:2013-17&r=for
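    The fractional difference operator at the heart of such long-memory models is easy to sketch via its binomial expansion (the truncation length is an illustrative choice):

      import numpy as np

      def frac_diff(x, d, n_weights=500):
          """Apply (1 - L)^d using the expansion w_0 = 1,
          w_k = w_{k-1} * (k - 1 - d) / k."""
          w = np.empty(n_weights)
          w[0] = 1.0
          for k in range(1, n_weights):
              w[k] = w[k - 1] * (k - 1 - d) / k
          out = np.empty(len(x))
          for t in range(len(x)):
              lags = min(t + 1, n_weights)
              out[t] = w[:lags] @ x[t::-1][:lags]
          return out

      # Sanity check: d = 0 leaves the series unchanged; applying d = 0.87 to a
      # yield series with memory of that order would leave a short-memory residual
      z = np.random.default_rng(5).standard_normal(1000)
      assert np.allclose(frac_diff(z, 0.0), z)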
  10. By: Michael McAleer (Erasmus University Rotterdam, The Netherlands, Complutense University of Madrid, Spain, and Kyoto University, Japan); Felix Chan (Curtin University, Australia); Les Oxley (University of Waikato, New Zealand)
    Abstract: The papers in this special issue of Mathematics and Computers in Simulation cover the following topics: improving judgmental adjustment of model-based forecasts; whether forecast updates are progressive; a constrained mixture vector autoregressive model; whether all estimators are born equal: the empirical properties of some estimators of long memory; characterising trader manipulation in a limit-order driven market; measuring bias in a term-structure model of commodity prices through the comparison of simultaneous and sequential estimation; modelling tail credit risk using transition matrices; evaluation of the DPC-based inclusive payment system in Japan for cataract operations by a new model; the matching of lead underwriters and issuing firms in the Japanese corporate bond market; stochastic life table forecasting: a time-simultaneous fan chart application; adaptive survey designs for sampling rare and clustered populations; income distribution inequality, globalization, and innovation: a general equilibrium simulation; whether exchange rates affect consumer prices: a comparative analysis for Australia, China and India; the impacts of exchange rates on Australia's domestic and outbound travel markets; the clean development mechanism in China: regional distribution and prospects; design and implementation of a Web-based groundwater data management system; the impact of serial correlation on testing for structural change in binary choice models: Monte Carlo evidence; and coercive journal self-citations, impact factor, journal influence and article influence.
    Keywords: Modelling, simulation, forecasting, time series models, trading, credit risk, empirical finance, health economics, sampling, groundwater systems, exchange rates, structural change, citations
    JEL: C15 C63 E27 E37 E47 F37 F47
    Date: 2013–05–21
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20130069&r=for
  11. By: Manabu Asai (Soka University, Japan); Massimiliano Caporin (University of Padova, Italy); Michael McAleer (Erasmus University Rotterdam, The Netherlands, Complutense University of Madrid, Spain, and Kyoto University, Japan)
    Abstract: Most multivariate variance or volatility models suffer from a common problem, the “curse of dimensionality”. For this reason, most are fitted under strong parametric restrictions that reduce the interpretation and flexibility of the models. Recently, the literature has focused on multivariate models with milder restrictions, whose purpose is to combine the need for interpretability and efficiency faced by model users with the computational problems that may emerge when the number of assets can be very large. We contribute to this strand of the literature by proposing a block-type parameterization for multivariate stochastic volatility models. The empirical analysis on stock returns on the US market shows that 1% and 5% Value-at-Risk thresholds based on one-step-ahead forecasts of covariances by the new specification are satisfactory for the period including the Global Financial Crisis.
    Keywords: block structures; multivariate stochastic volatility; curse of dimensionality; leverage effects; multi-factors; heavy-tailed distribution
    JEL: C32 C51 C10
    Date: 2013–05–27
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20130073&r=for
  12. By: Burgess, Stephen (Bank of England); Fernandez-Corugedo, Emilio (International Monetary Fund); Groth, Charlotta (Zurich Insurance Group); Harrison, Richard (Bank of England); Monti, Francesca (Bank of England); Theodoridis, Konstantinos (Bank of England); Waldron, Matt (Bank of England)
    Abstract: This paper introduces the Bank of England's new forecasting platform and provides examples of how it can be applied to practical forecasting problems. The platform consists of four components: COMPASS, a structural central organising model; a suite of models, used to fill in the gaps in the economics of COMPASS and provide cross-checks on the forecast; MAPS, a macroeconomic modelling and projection toolkit; and EASE, a user interface. The platform has been in use since the end of 2011 in support of the projections produced for the Monetary Policy Committee’s quarterly Inflation Reports. In this paper we provide a full description of COMPASS, including discussion of its estimation and its properties. We also illustrate how the suite of models can be used to mitigate some of the trade-offs inherent in building a projection with a central organising model such as COMPASS, and discuss the role of the suite in addressing problems of model misspecification.
    Keywords: Forecasting; macro-modelling; misspecification
    JEL: E17 E20 E30 E40 E50
    Date: 2013–05–17
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:0471&r=for
  13. By: Paolo Manasse; Roberto Savona; Marika Vezzoli
    Abstract: This paper employs a recent statistical algorithm (CRAGGING) in order to build an early warning model for banking crises in emerging markets. We perturb our data set many times and create “artificial” samples from which we estimate our model, so that, by construction, it is flexible enough to be applied to new data for out-of-sample prediction. We find that, out of a large number (540) of candidate explanatory variables, from macroeconomic to balance sheet indicators of the countries’ financial sector, we can accurately predict banking crises with just a handful of variables. Using data over the period from 1980 to 2010, the model identifies two basic types of banking crises in emerging markets: a “Latin American type”, resulting from the combination of a (past) credit boom, a flight from domestic assets, and high levels of interest rates on deposits; and an “Asian type”, which is characterized by an investment boom financed by banks’ foreign debt. We compare our model to other models obtained using more traditional techniques, a Stepwise Logit, a Classification Tree, and an “Average” model, and we find that our model strongly dominates the others in terms of out-of-sample predictive power.
    Keywords: Banking Crises, Early Warnings, Regression and Classification Trees, Stepwise Logit
    JEL: E44 G01 G21
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:igi:igierp:481&r=for
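    CRAGGING itself is a specific cross-validation aggregation scheme; a generic perturb-and-aggregate sketch in the same spirit, using bagged classification trees on placeholder data (not the authors' algorithm or dataset):

      import numpy as np
      from sklearn.ensemble import BaggingClassifier

      rng = np.random.default_rng(7)
      X = rng.standard_normal((500, 10))      # placeholder country-year indicators
      y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(500) > 1.0).astype(int)

      # Estimate trees on many resampled "artificial" samples and average them
      model = BaggingClassifier(n_estimators=200, random_state=0)  # tree base learner
      model.fit(X[:400], y[:400])

      crisis_prob = model.predict_proba(X[400:])[:, 1]   # out-of-sample crisis signal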
  14. By: Zsuzsanna Csereklyei (Department of Economics, Vienna University of Economics and Business); Stefan Humer (Department of Economics, Vienna University of Economics and Business)
    Abstract: In this paper we use the long-term empirical relationship among primary energy consumption, real income, physical capital, population and technology, obtained by averaged panel error correction models, to project the long-term primary energy consumption of 56 countries up to 2100. In forecasting long-term primary energy consumption, we work with four different Shared Socioeconomic Pathway scenarios (SSPs) developed for the Intergovernmental Panel on Climate Change (IPCC) framework, assuming different challenges to adaptation and mitigation. We find that in all scenarios China, the United States and India will be the largest energy consumers, while fast-growing countries will also contribute significantly to energy use. For most scenarios we observe a sharp increase in global energy consumption, followed by a levelling-out and a decrease towards the second half of the century. The reasons behind this pattern are not only slower population growth, but also infrastructure saturation and increased total factor productivity. As countries move towards more knowledge-based societies and higher energy efficiency, their primary energy usage is likely to decrease. Global primary energy consumption is, however, expected to increase significantly in the coming decades, increasing the pressure on policy makers to cope with questions of energy security and greenhouse gas mitigation at the same time.
    Keywords: Primary Energy Demand, Projections, Panel Cointegration, Model Averaging
    JEL: C53 Q43 Q47
    Date: 2013–05
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwwuw:wuwp152&r=for
  15. By: Rogers, Todd (Harvard University and Analyst Institute, Washington, DC); Aida, Masa (Greenberg Quinlan Rosner Research)
    Abstract: Public opinion researchers, campaigns, and political scientists often rely on self-predicted vote to measure political engagement, allocate resources, and forecast turnout. Despite its importance, little research has examined the accuracy of self-predicted vote responses. Seven pre-election surveys with post-election vote validation from three elections (N = 29,403) reveal several patterns. First, many self-predicted voters do not actually vote (flake-out). Second, many self-predicted nonvoters do actually vote (flake-in). This is the first robust measurement of flake-in. Third, actual voting is more accurately predicted by past voting (from voter file or recalled) than by self-predicted voting. Finally, self-predicted voters differ from actual voters demographically. Actual voters are more likely to be white (and not black), older, and partisan than actual nonvoters (i.e., participatory bias), but self-predicted voters and self-predicted nonvoters do not differ much. Vote self-prediction is "biased" in that it misleadingly suggests that there is no participatory bias.
    Date: 2013–04
    URL: http://d.repec.org/n?u=RePEc:ecl:harjfk:rwp13-010&r=for
  16. By: Éric Jacquier; Cédric Okou
    Abstract: Realized variance can be broken down into continuous volatility and jumps. We show that these two components have very different predictive powers on future long-term excess stock market returns. While continuous volatility is a key driver of medium- to long-term risk-return relationships, jumps do not predict future medium- to long-term excess returns. We use inference methods robust to persistent predictors in a multi-horizon setup. That is, we use a rescaled Student-t to test for significant risk-return links, give asymptotic arguments, and simulate its exact behavior under the null in the case of multiple regressors with different degrees of persistence. Then, with Wald tests of equality of the risk-return relationship at multiple horizons, we find no evidence against a proportional relationship, constant across horizons, between long-term continuous volatility and future returns. Two by-products of our analysis are that imposing model-based constraints on long-term regressions can improve their efficiency, and that short-run estimates are sensitive to short-term variability of the predictors.
    Keywords: predictability, realized variance, continuous volatility, jumps, long-run returns, persistent regressor
    Date: 2013–06–01
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2013s-14&r=for
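    One standard way to implement the decomposition the abstract describes is realized bipower variation (the paper's exact jump estimator is not specified in the abstract); a sketch:

      import numpy as np

      def rv_and_jumps(r):
          """Split realized variance into continuous and jump components using
          bipower variation (Barndorff-Nielsen and Shephard)."""
          r = np.asarray(r)
          rv = np.sum(r ** 2)                          # realized variance
          mu1 = np.sqrt(2.0 / np.pi)                   # E|Z| for standard normal Z
          bpv = mu1 ** -2 * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))
          return bpv, max(rv - bpv, 0.0)               # jump part truncated at zero

      # Toy day: 78 five-minute returns with one injected jump
      r = 0.001 * np.random.default_rng(6).standard_normal(78)
      r[40] += 0.02
      cont, jump = rv_and_jumps(r)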
  17. By: Jasmina Arifovic; George Evans; Olena Kostyshyna
    Abstract: We conduct experiments with human subjects in a model with a positive production externality in which productivity is a non-decreasing function of the average level of employment of other firms. The model has three steady states: the low and high steady states are expectationally stable (E-stable), and thus locally stable under learning, while the middle steady state is not E-stable. There also exists a locally E-stable sunspot equilibrium that fluctuates between the high and low steady states. Steady states are payoff ranked: low values give lower profits than higher values. We investigate whether subjects in our experimental economies can learn a sunspot equilibrium. Our experimental design has two treatments: one in which payoff is based on the firm’s profits, and the other in which payoff is based on the forecast squared error. We observe coordination on the extrinsic announcements in both treatments. In the treatments with forecast squared error, the average employment and average forecasts of subjects are closer to the equilibrium corresponding to the announcement. Cases of apparent convergence to the low and high steady states are also observed.
    Keywords: Economic models
    JEL: D83 G20
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:bca:bocawp:13-14&r=for

This nep-for issue is ©2013 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.