nep-for New Economics Papers
on Forecasting
Issue of 2013‒05‒19
twenty-six papers chosen by
Rob J Hyndman
Monash University

  1. Comparing the Accuracy of Copula-Based Multivariate Density Forecasts in Selected Regions of Support By Cees Diks; Valentyn Panchenko; Oleg Sokolinskiy; Dick van Dijk
  2. Forecasting Interest Rates with Shifting Endpoints By Dick van Dijk; Siem Jan Koopman; Michel van der Wel; Jonathan H. Wright
  3. Prediction Bias Correction for Dynamic Term Structure Models By Eran Raviv
  4. A Forty Year Assessment of Forecasting the Boat Race By Geert Mesters; Siem Jan Koopman
  5. The structure of a machine-built forecasting system By Jiaqi Chen; Michael L. Tindall
  6. Are Forecast Updates Progressive? By Chia-Lin Chang; Philip Hans Franses; Michael McAleer
  7. Forecasting Macroeconomic Variables using Collapsed Dynamic Factor Analysis By Falk Brauning; Siem Jan Koopman
  8. Managing Sales Forecasters By Bert de Bruijn; Philip Hans Franses
  9. Learning, Forecasting and Optimizing: An Experimental Study By Te Bao; John Duffy; Cars Hommes
  10. Time-varying Combinations of Predictive Densities using Nonlinear Filtering By Monica Billio; Roberto Casarin; Francesco Ravazzolo; Herman K. van Dijk
  11. Predicting Time-Varying Parameters with Parameter-Driven and Observation-Driven Models By Siem Jan Koopman; Andre Lucas; Marcel Scharth
  12. Analyzing Fixed-Event Forecast Revisions By Chia-Lin Chang; Bert de Bruijn; Philip Hans Franses; Michael McAleer
  13. Robust Estimation and Forecasting of the Capital Asset Pricing Model By Guorui Bian; Michael McAleer; Wing-Keung Wong
  14. GARCH Models for Daily Stock Returns: Impact of Estimation Frequency on Value-at-Risk and Expected Shortfall Forecasts By David Ardia; Lennart Hoogerheide
  15. Censored Posterior and Predictive Likelihood in Bayesian Left-Tail Prediction for Accurate Value at Risk Estimation By Lukasz Gatarek; Lennart Hoogerheide; Koen Hooning; Herman K. van Dijk
  16. What drives the Quotes of Earnings Forecasters? By Bert de Bruijn; Philip Hans Franses
  17. A New Semiparametric Volatility Model By Jiangyu Ji; Andre Lucas
  18. Forecasting GDP Growth Using Mixed-Frequency Models With Switching Regimes By Fady Barsoum; Sandra Stankiewicz
  19. Parallel Sequential Monte Carlo for Efficient Density Combination: The Deco Matlab Toolbox By Roberto Casarin; Stefano Grassi; Francesco Ravazzolo; Herman K. van Dijk
  20. Behavioral Learning Equilibria By Cars Hommes; Mei Zhu
  21. Has the Basel Accord Improved Risk Management During the Global Financial Crisis? By Michael McAleer; Juan-Ángel Jiménez-Martín; Teodosio Pérez-Amaral
  22. What Do Experts Know About Forecasting Journal Quality? A Comparison with ISI Research Impact in Finance By Chia-Lin Chang; Michael McAleer
  23. Behavioral Heterogeneity in U.S. Inflation Dynamics By Adriana Cornea; Cars Hommes; Domenico Massaro
  24. A Dynamic Bivariate Poisson Model for Analysing and Forecasting Match Results in the English Premier League By Siem Jan Koopman; Rutger Lit
  25. Posterior-Predictive Evidence on US Inflation using Phillips Curve Models with Non-Filtered Time Series By Nalan Basturk; Cem Cakmakli; Pinar Ceyhan; Herman K. van Dijk
  26. What can the Big Five Personality Factors contribute to explain Small-Scale Economic Behavior? By Julia Muller; Christiane Schwieren

  1. By: Cees Diks (CeNDEF, University of Amsterdam); Valentyn Panchenko (University of New South Wales); Oleg Sokolinskiy (Rutgers Business School); Dick van Dijk (Econometric Institute, Erasmus University Rotterdam)
    Abstract: This paper develops a testing framework for comparing the predictive accuracy of copula-based multivariate density forecasts, focusing on a specific part of the joint distribution. The test is framed in the context of the Kullback-Leibler Information Criterion, but uses (out-of-sample) conditional likelihood and censored likelihood to focus the evaluation on the region of interest. Monte Carlo simulations document that the resulting test statistics have satisfactory size and power properties in small samples. In an empirical application to daily exchange rate returns we find evidence that the dependence structure varies with the sign and magnitude of returns, such that different parametric copula models achieve superior forecasting performance in different regions of the support. Our analysis highlights the importance of allowing for lower and upper tail dependence for accurate forecasting of common extreme appreciation and depreciation of different currencies.
    Keywords: Copula-based density forecast, Kullback-Leibler Information Criterion, out-of-sample forecast evaluation
    JEL: C12 C14 C32 C52 C53
    Date: 2013–04–19
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:2013061&r=for
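    Illustration: a minimal univariate sketch of the censored-likelihood scoring rule underlying the test, assuming toy Student-t data, two illustrative parametric forecasts (Gaussian and Student-t), a left-tail region, and an iid-variance Diebold-Mariano-type statistic in place of the paper's HAC-based multivariate copula test.
      # Censored-likelihood score: full density inside the region of interest,
      # only the total probability mass of the complement outside it.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      y = rng.standard_t(df=5, size=1000)      # realized returns (toy data)
      r = np.quantile(y, 0.10)                 # left-tail region A = (-inf, r]

      def censored_loglik(y, dist, r):
          inside = y <= r
          score = np.empty_like(y)
          score[inside] = dist.logpdf(y[inside])
          score[~inside] = np.log(dist.sf(r))  # log P(Y > r) under the forecast
          return score

      f = stats.norm(0, y.std())               # forecast 1: Gaussian
      g = stats.t(df=5)                        # forecast 2: Student-t
      d = censored_loglik(y, g, r) - censored_loglik(y, f, r)

      # Diebold-Mariano-type statistic on score differences (iid variance here;
      # the paper uses a HAC variance to allow for dependent data).
      t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
      print(f"mean score difference {d.mean():.4f}, t-stat {t_stat:.2f}")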
  2. By: Dick van Dijk (Erasmus University Rotterdam); Siem Jan Koopman (VU University Amsterdam); Michel van der Wel (Erasmus University Rotterdam, CREATES, Aarhus); Jonathan H. Wright (Johns Hopkins University)
    Abstract: Many economic studies on inflation forecasting have found favorable results when inflation is modeled as a stationary process around a slowly time-varying trend. In contrast, the existing studies on interest rate forecasting either treat yields as stationary, without any shifting endpoints, or treat yields as a random walk process. In this study we consider the problem of forecasting the term structure of interest rates under the assumption that the yield curve is driven by factors that are stationary around a time-varying trend. We compare alternative ways of modeling the time-varying trend. We find that allowing for shifting endpoints in yield curve factors can provide gains in out-of-sample predictive accuracy relative to stationary and random walk benchmarks. The results are both economically and statistically significant.
    Keywords: term structure of interest rates, forecasting, non-stationarity, survey forecasts, yield curve
    JEL: C32 E43 G17
    Date: 2012–07–19
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:2012076&r=for
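    Illustration: one simple way (an assumption, not the authors' specification) to implement a shifting endpoint: an exponentially smoothed trend proxies the slowly moving endpoint, an AR(1) is fitted to deviations from it, and the h-step forecast adds the decayed deviation back to the endpoint held fixed.
      import numpy as np

      def forecast_with_endpoint(x, lam=0.02, h=12):
          # slowly moving endpoint via exponential smoothing
          tau = np.empty_like(x)
          tau[0] = x[0]
          for t in range(1, len(x)):
              tau[t] = (1 - lam) * tau[t - 1] + lam * x[t]
          dev = x - tau
          # AR(1) on deviations, by least squares
          phi = (dev[:-1] @ dev[1:]) / (dev[:-1] @ dev[:-1])
          # h-step forecast: deviation decays at rate phi, endpoint held fixed
          return tau[-1] + (phi ** h) * dev[-1]

      rng = np.random.default_rng(1)
      level = np.cumsum(rng.normal(0, 0.02, 300)) + 5   # toy level factor
      print(forecast_with_endpoint(level))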
  3. By: Eran Raviv (Erasmus University Rotterdam)
    Abstract: When the yield curve is modelled using an affine factor model, the residuals may still contain relevant information and do not adhere to the familiar white noise assumption. This paper proposes a pragmatic way to improve out-of-sample performance in yield curve forecasting. The proposed adjustment is illustrated via a pseudo out-of-sample forecasting exercise implementing the widely used Dynamic Nelson-Siegel model. Large improvements in forecasting performance are achieved throughout the curve at different forecasting horizons. The results are robust to different time periods, as well as to different model specifications.
    Keywords: Yield curve; Nelson-Siegel; Time-varying loadings; Factor models
    JEL: E43 E47 G17
    Date: 2013–03–07
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:2013041&r=for
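    Illustration: a toy version of the proposed adjustment, with simulated yields, standard Nelson-Siegel loadings (lambda = 0.0609), a random-walk forecast of the factors, and an AR(1) forecast of each maturity's residual added back; all data and settings are illustrative.
      import numpy as np

      lam, mats = 0.0609, np.array([3, 12, 24, 60, 120])   # maturities in months
      L = np.column_stack([np.ones_like(mats, float),
                           (1 - np.exp(-lam * mats)) / (lam * mats),
                           (1 - np.exp(-lam * mats)) / (lam * mats) - np.exp(-lam * mats)])

      rng = np.random.default_rng(2)
      yields = 4 + np.cumsum(rng.normal(0, .1, (200, 1)), 0) + rng.normal(0, .05, (200, 5))

      betas = np.linalg.lstsq(L, yields.T, rcond=None)[0].T   # per-period NS factors
      resid = yields - betas @ L.T

      def ar1_forecast(e):                                    # one-step AR(1) forecast
          phi = (e[:-1] @ e[1:]) / (e[:-1] @ e[:-1])
          return phi * e[-1]

      factor_fc = betas[-1] @ L.T                             # random-walk factor forecast
      adjusted_fc = factor_fc + np.array([ar1_forecast(resid[:, i]) for i in range(5)])
      print(adjusted_fc)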
  4. By: Geert Mesters (VU University Amsterdam); Siem Jan Koopman (VU University Amsterdam)
    Abstract: We study the forecasting of the yearly outcome of the Boat Race between Cambridge and Oxford. We compare the relative performance of different dynamic models over forty years of forecasting. Each model is defined by a binary density conditional on a latent signal that is specified as a dynamic stochastic process with fixed predictors. The models' out-of-sample predictive ability is compared using a variety of loss functions and predictive ability tests. We find that the model with its latent signal specified as an autoregressive process cannot be outperformed by the other specifications. This model correctly forecasts 30 of the 40 outcomes of the Boat Race.
    Keywords: Binary time series, Predictive ability, Non-Gaussian state space model
    JEL: C32 C35
    Date: 2012–10–23
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:2012110&r=for
  5. By: Jiaqi Chen; Michael L. Tindall
    Abstract: This paper describes the structure of a rule-based econometric forecasting system designed to produce multi-equation econometric models. It describes the functioning of a working system which builds the econometric forecasting equation for each series submitted and produces forecasts of the series. The system employs information criteria and cross validation in the equation building process, and it uses Bayesian model averaging to combine forecasts of individual series. The system outperforms standard benchmarks for a variety of national economic datasets.
    Keywords: Econometrics
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:fip:feddop:1:x:1&r=for
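    Illustration: a minimal sketch of two ingredients such a system uses, information-criterion-based equation selection and BMA-style forecast combination; the AR(p) candidate set, BIC weighting and simulated series are illustrative assumptions, and the actual system also uses cross-validation and builds multi-equation models.
      import numpy as np

      def fit_ar(x, p):
          # design matrix of p lags plus an intercept
          X = np.column_stack([x[p - 1 - i:len(x) - 1 - i] for i in range(p)])
          X = np.column_stack([np.ones(len(X)), X])
          y = x[p:]
          b, *_ = np.linalg.lstsq(X, y, rcond=None)
          sigma2 = np.sum((y - X @ b) ** 2) / len(y)
          bic = len(y) * np.log(sigma2) + (p + 1) * np.log(len(y))
          fc = np.concatenate([[1.0], x[-1:-p - 1:-1]]) @ b   # one-step forecast
          return bic, fc

      rng = np.random.default_rng(3)
      x = np.zeros(300)
      for t in range(2, 300):                                 # toy AR(2) series
          x[t] = 0.5 * x[t - 1] - 0.2 * x[t - 2] + rng.normal()

      bics, fcs = zip(*(fit_ar(x, p) for p in range(1, 5)))
      w = np.exp(-0.5 * (np.array(bics) - min(bics)))         # BMA-style weights
      w /= w.sum()
      print("combined forecast:", w @ np.array(fcs))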
  6. By: Chia-Lin Chang (National Chung Hsing University Taichung); Philip Hans Franses (Erasmus University Rotterdam); Michael McAleer (Erasmus University Rotterdam, Complutense University of Madrid, Kyoto University)
    Abstract: Macroeconomic forecasts and forecast updates, such as those from the IMF and OECD, typically involve both a model component, which is replicable, and intuition, which is non-replicable. Intuition is expert knowledge possessed by a forecaster. If forecast updates are progressive, they should become more accurate, on average, as the actual value is approached; otherwise, they are neutral. The paper proposes a methodology to test whether macroeconomic forecast updates are progressive, where the interaction between model and intuition is explicitly taken into account. The empirical analysis uses data for Taiwan, for which three decades of quarterly forecasts, and their updates, of the inflation rate and the real GDP growth rate are available. Our empirical results suggest that the forecast updates for Taiwan are progressive, and that progress can be explained predominantly by improved intuition.
    Keywords: Macroeconomic forecasts, econometric models, intuition, progressive forecast updates, forecast errors
    JEL: C53 C22 E27 E37
    Date: 2013–03–25
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:2013049&r=for
  7. By: Falk Brauning (VU University Amsterdam); Siem Jan Koopman (VU University Amsterdam)
    Abstract: We explore a new approach to the forecasting of macroeconomic variables based on a dynamic factor state space analysis. Key economic variables are modeled jointly with principal components from a large time series panel of macroeconomic indicators using a multivariate unobserved components time series model. When the key economic variables are observed at a low frequency and the panel of macroeconomic variables is at a high frequency, our approach can be used for both nowcasting and forecasting. Given a dynamic factor model as the data generating process, we provide Monte Carlo evidence for the finite-sample justification of our parsimonious and feasible approach. We also provide empirical evidence for a U.S. macroeconomic dataset. The unbalanced panel contains quarterly and monthly variables. The forecasting accuracy is measured against a set of benchmark models. We conclude that our dynamic factor state space analysis can lead to higher forecast precision when panel size and time series dimensions are moderate.
    Keywords: Kalman filter, Mixed frequency, Nowcasting, Principal components, State space model, Unobserved components time series model
    JEL: C33 C53 E17
    Date: 2012–04–20
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:2012042&r=for
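    Illustration: a sketch in the spirit of the two-step collapsed approach, with principal components extracted from a simulated panel and the key variable forecast jointly with them; a simple VAR(1) stands in for the paper's unobserved components model, and the mixed-frequency layer is omitted.
      import numpy as np

      rng = np.random.default_rng(11)
      panel = rng.normal(size=(200, 50))               # toy macro panel (T x N)
      panel[:, :25] += rng.normal(size=(200, 1))       # one common factor

      # principal components from the standardized panel
      z = (panel - panel.mean(0)) / panel.std(0)
      _, _, vt = np.linalg.svd(z, full_matrices=False)
      pcs = z @ vt[:2].T                               # first two PCs

      key = 0.8 * pcs[:, 0] + rng.normal(0, 0.5, 200)  # toy key variable
      data = np.column_stack([key, pcs])               # joint system (T x 3)

      # VAR(1) by least squares, then a one-step forecast of the key variable
      X = np.column_stack([np.ones(199), data[:-1]])
      B = np.linalg.lstsq(X, data[1:], rcond=None)[0]
      forecast = np.concatenate([[1.0], data[-1]]) @ B
      print("one-step forecast of key variable:", forecast[0])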
  8. By: Bert de Bruijn (Erasmus University Rotterdam); Philip Hans Franses (Erasmus University Rotterdam)
    Abstract: A Forecast Support System (FSS), which generates sales forecasts, is a sophisticated business analytical tool that can help to improve targeted business decisions. Many companies use such a tool, although at the same time they may allow managers to quote their own forecasts. These sales forecasters (managers) can take the FSS output as their input, but they can also fully ignore the FSS outcomes. We propose a methodology for evaluating the forecast accuracy of these managers relative to the FSS, while accounting for latent variation in managers' behavior. We show that the results, here for a large Germany-based pharmaceutical company, can in fact be used to manage the sales forecasters by giving clear-cut recommendations for improvement.
    Keywords: Forecast Support System; Sales forecasters; Forecast accuracy
    JEL: M11 M31
    Date: 2012–12–03
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:2012131&r=for
  9. By: Te Bao (University of Amsterdam); John Duffy (University of Pittsburgh); Cars Hommes (University of Amsterdam)
    Abstract: Rational Expectations (RE) models have two crucial dimensions: 1) agents correctly forecast future prices given all available information, and 2) given expectations, agents solve optimization problems and these solutions in turn determine actual price realizations. Experimental testing of such models typically focuses on only one of these two dimensions. In this paper we consider both forecasting and optimization decisions in an experimental cobweb economy. We report results from four experimental treatments: 1) subjects form forecasts only, 2) subjects determine quantity only (solve an optimization problem), 3) subjects do both, and 4) subjects are paired in teams, with one member assigned the forecasting role and the other the optimization task. All treatments converge to the Rational Expectations Equilibrium (REE), but at very different speeds. We observe that performance is best in treatment 1) and worst in treatment 3). Most forecasters use an adaptive expectations rule. Subjects are less likely to make conditionally optimal production decisions for given forecasts in treatment 3), where the forecast is made by themselves, than in treatment 4), where the forecast is made by the other member of their team, which suggests that "two heads are better than one" in finding the REE.
    Keywords: Learning, Rational Expectations, Optimization, Experimental Economics, Bounded Rationality
    JEL: C91 C92 D83 D84
    Date: 2012–02–17
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:2012015&r=for
  10. By: Monica Billio (University of Venice, GRETA Assoc. and School for Advanced Studies in Venice); Roberto Casarin (University of Venice, GRETA Assoc. and School for Advanced Studies in Venice); Francesco Ravazzolo (Norges Bank and BI Norwegian Business School); Herman K. van Dijk (Erasmus University Rotterdam, VU University Amsterdam)
    Abstract: We propose a Bayesian combination approach for multivariate predictive densities which relies upon a distributional state space representation of the combination weights. Several specifications of multivariate time-varying weights are introduced, with a particular focus on weight dynamics driven by the past performance of the predictive densities and the use of learning mechanisms. In the proposed approach the model set can be incomplete, meaning that all models can be individually misspecified. A Sequential Monte Carlo method is proposed to approximate the filtering and predictive densities. The combination approach is assessed using statistical and utility-based performance measures for evaluating density forecasts. Simulation results indicate that, for a set of linear autoregressive models, the combination strategy is successful in selecting, with probability close to one, the true model when the model set is complete, and it is able to detect parameter instability when the model set includes the true model that has generated subsamples of the data. For the macroeconomic series we find that incompleteness of the models is relatively large in the 1970s, at the beginning of the 1980s and during the recent financial crisis, and lower during the Great Moderation. With respect to returns on the S&P 500 series, we find that an investment strategy using a combination of predictions from professional forecasters and from a white noise model puts more weight on the white noise model at the beginning of the 1990s and switches to giving more weight to the professional forecasts over time.
    Keywords: Density Forecast Combination, Survey Forecast, Bayesian Filtering, Sequential Monte Carlo
    JEL: C11 C15 C53 E37
    Date: 2012–11–07
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:2012118&r=for
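    Illustration: the paper filters time-varying weights with Sequential Monte Carlo; as a far simpler stand-in, this sketch updates combination weights recursively from each model's discounted past log predictive score, capturing only the performance-driven-weights idea. The two fixed predictive densities and the discount factor are illustrative.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      y = rng.standard_t(df=4, size=500)

      models = [stats.norm(0, 1), stats.t(df=4)]     # two fixed predictive densities
      delta, scores = 0.95, np.zeros(len(models))    # discount factor, running scores
      weights_path = []
      for t in range(len(y)):
          w = np.exp(scores - scores.max())
          w /= w.sum()
          weights_path.append(w)
          # the combined density at t would be sum_i w[i] * models[i].pdf(y[t])
          scores = delta * scores + np.array([m.logpdf(y[t]) for m in models])

      print("final weights:", weights_path[-1])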
  11. By: Siem Jan Koopman (VU University Amsterdam); Andre Lucas (VU University Amsterdam); Marcel Scharth (VU University Amsterdam)
    Abstract: We study whether and when parameter-driven time-varying parameter models lead to forecasting gains over observation-driven models. We consider dynamic count, intensity, duration, volatility and copula models, including new specifications that have not been studied earlier in the literature. In an extensive Monte Carlo study, we find that observation-driven generalised autoregressive score (GAS) models have similar predictive accuracy to correctly specified parameter-driven models. In most cases, differences in mean squared errors are smaller than 1% and model confidence sets have low power when comparing these two alternatives. We also find that GAS models outperform many familiar observation-driven models in terms of forecasting accuracy. The results point to a class of observation-driven models with comparable forecasting ability to parameter-driven models, but lower computational complexity.
    Keywords: Generalised autoregressive score model, Importance sampling, Model confidence set, Nonlinear state space model, Weibull-gamma mixture
    JEL: C53 C58 C22
    Date: 2012–03–06
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:2012020&r=for
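    Illustration: a minimal score-driven (GAS) volatility recursion. With a Gaussian density and inverse-information scaling, the scaled score is s_t = y_t^2 - f_t and the GAS(1,1) update reduces to the GARCH(1,1) class; the paper's fat-tailed variants differ mainly by downweighting large observations in s_t. Parameter values below are illustrative, not estimates.
      import numpy as np

      def gas_volatility(y, omega=0.05, alpha=0.08, beta=0.90):
          f = np.empty(len(y) + 1)               # f[t] = conditional variance
          f[0] = y.var()
          for t in range(len(y)):
              s = y[t] ** 2 - f[t]               # scaled score of the Gaussian density
              f[t + 1] = omega + alpha * s + beta * f[t]
          return f                               # f[-1] is the one-step-ahead forecast

      rng = np.random.default_rng(5)
      y = rng.normal(0, 1, 1000) * np.sqrt(np.linspace(0.5, 2.0, 1000))
      print("variance forecast:", gas_volatility(y)[-1])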
  12. By: Chia-Lin Chang (National Chung Hsing University, Taichung, Taiwan); Bert de Bruijn (Erasmus University Rotterdam); Philip Hans Franses (Erasmus University Rotterdam); Michael McAleer (Erasmus University Rotterdam)
    Abstract: It is common practice to evaluate fixed-event forecast revisions in macroeconomics by regressing current forecast revisions on one-period lagged forecast revisions. Under weak-form (forecast) efficiency, the correlation between the current and one-period lagged revisions should be zero. The empirical findings in the literature suggest that this null hypothesis of zero correlation is rejected frequently, where the correlation can be either positive (which is widely interpreted in the literature as “smoothing”) or negative (which is widely interpreted as “over-reacting”). We propose a methodology to interpret such non-zero correlations in a straightforward and clear manner. Our approach is based on the assumption that numerical forecasts can be decomposed into both an econometric model and random expert intuition. We show that the interpretation of the sign of the correlation between the current and one-period lagged revisions depends on the process governing intuition, and the current and lagged correlations between intuition and news (or shocks to the numerical forecasts). It follows that the estimated non-zero correlation cannot be given a direct interpretation in terms of smoothing or over-reaction.
    Keywords: Evaluating forecasts, Macroeconomic forecasting, Rationality, Intuition, Weak-form efficiency, Fixed-event forecasts
    JEL: C22 C53 E27 E37
    Date: 2013–04–11
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:2013057&r=for
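    Illustration: the standard weak-form efficiency check the paper revisits, a regression of current fixed-event forecast revisions on one-period lagged revisions, here on simulated revisions with mild positive autocorrelation.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)
      rev = np.empty(80)                         # toy fixed-event revisions
      rev[0] = rng.normal()
      for t in range(1, 80):
          rev[t] = 0.3 * rev[t - 1] + rng.normal()

      res = stats.linregress(rev[:-1], rev[1:])
      print(f"slope {res.slope:.3f}, p-value {res.pvalue:.3f}")
      # slope > 0 is conventionally read as "smoothing", < 0 as "over-reaction";
      # the paper argues this direct interpretation fails once expert intuition
      # interacts with the model-based component of the forecast.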
  13. By: Guorui Bian (East China Normal University); Michael McAleer (Erasmus University Rotterdam, Kyoto University, Complutense University of Madrid); Wing-Keung Wong (Hong Kong Baptist University)
    Abstract: In this paper, we develop a modified maximum likelihood (MML) estimator for the multiple linear regression model with an underlying Student-t distribution. We obtain the closed form of the estimators, derive their asymptotic properties, and demonstrate that the MML estimator is more appropriate for estimating the parameters of the Capital Asset Pricing Model by comparing its performance with that of least squares estimators (LSE) on the monthly returns of US portfolios. The empirical results reveal that the MML estimators are more efficient than the LSE in terms of the relative efficiency of the one-step-ahead forecast mean squared error in small samples.
    Keywords: Maximum likelihood estimators; Modified maximum likelihood estimators; Student t family; Capital asset pricing model; Robustness
    JEL: C1 C2 G1
    Date: 2013–03–04
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:2013036&r=for
  14. By: David Ardia (Universite Laval, Quebec, Canada); Lennart Hoogerheide (VU University Amsterdam)
    Abstract: We analyze the impact of the estimation frequency - updating parameter estimates on a daily, weekly, monthly or quarterly basis - for commonly used GARCH models in a large-scale study, using more than twelve years (2000-2012) of daily returns for constituents of the S&P 500 index. We assess the implications for one-day-ahead 95% and 99% Value-at-Risk (VaR) forecasts with the test for correct conditional coverage of Christoffersen (1998), and for Expected Shortfall (ES) forecasts with the block-bootstrap test of ES violations of Jalal and Rockinger (2008). Using the false discovery rate methodology of Storey (2002) to estimate the percentage of stocks for which the model yields correct VaR and ES forecasts, we reach the following conclusions. First, updating the parameter estimates of the GARCH equation at a daily frequency improves the performance of the model only marginally, compared with weekly, monthly or even quarterly updates. The 90% confidence bands overlap, reflecting that the performance is not significantly different. Second, the asymmetric GARCH model with a non-parametric kernel density estimate performs well; it yields correct VaR and ES forecasts for an estimated 90% to 95% of the S&P 500 constituents. Third, specifying a Student-t (or Gaussian) innovations density yields substantially and significantly worse forecasts, especially for ES. In sum, the somewhat more advanced model with infrequently updated parameter estimates yields much better VaR and ES forecasts than simpler models with daily updated parameter estimates.
    Keywords: GARCH, Value-at-Risk, Expected Shortfall, equity, frequency, false discovery rate
    JEL: C12 C22 C58 G17 G32
    Date: 2013–03–21
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:2013047&r=for
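    Illustration: a sketch of evaluating one-day-ahead 95% VaR forecasts by their violation rate, using a fixed-parameter GARCH(1,1) filter with Gaussian quantiles on simulated returns; the likelihood-ratio test shown is only the unconditional-coverage component of Christoffersen's (1998) conditional coverage test.
      import numpy as np
      from scipy import stats

      def garch_var(y, omega=2e-6, a=0.08, b=0.90, level=0.95):
          h = np.empty(len(y))                           # filtered variances
          h[0] = y.var()
          for t in range(1, len(y)):
              h[t] = omega + a * y[t - 1] ** 2 + b * h[t - 1]
          return stats.norm.ppf(1 - level) * np.sqrt(h)  # negative VaR threshold

      def lr_unconditional(violations, p=0.05):
          n, x = len(violations), violations.sum()
          phat = x / n
          lr = -2 * (x * np.log(p / phat) + (n - x) * np.log((1 - p) / (1 - phat)))
          return lr, stats.chi2.sf(lr, df=1)

      rng = np.random.default_rng(7)
      y = rng.standard_t(df=5, size=1500) * 0.01         # toy daily returns
      viol = y < garch_var(y)                            # return below threshold
      print("violation rate %.3f, LR p-value %.3f" % (viol.mean(), lr_unconditional(viol)[1]))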
  15. By: Lukasz Gatarek (Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam); Lennart Hoogerheide (VU University Amsterdam); Koen Hooning (Delft University of Technology); Herman K. van Dijk (Econometric Institute, Erasmus University Rotterdam, and VU University Amsterdam)
    Abstract: Accurate prediction of risk measures such as Value at Risk (VaR) and Expected Shortfall (ES) requires precise estimation of the tail of the predictive distribution. Two novel concepts are introduced that offer a specific focus on this part of the predictive density: the censored posterior, a posterior in which the likelihood is replaced by the censored likelihood; and the censored predictive likelihood, which is used for Bayesian Model Averaging. We perform extensive experiments involving simulated and empirical data. Our results show the ability of these new approaches to outperform the standard posterior and traditional Bayesian Model Averaging techniques in applications of Value-at-Risk prediction in GARCH models.
    Keywords: censored likelihood, censored posterior, censored predictive likelihood, Bayesian Model Averaging, Value at Risk, Metropolis-Hastings algorithm
    JEL: C11 C15 C22 C51 C53 C58 G17
    Date: 2013–04–15
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:2013060&r=for
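    Illustration: a toy censored posterior for a single scale parameter, where observations in the left-tail region of interest contribute their density and the rest contribute only the complement's probability mass, sampled with a random-walk Metropolis step; the Gaussian model, flat prior and tuning constants are illustrative.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(8)
      y = rng.standard_t(df=4, size=400)
      r = np.quantile(y, 0.15)                       # focus region: y <= r

      def censored_loglik(sigma):
          if sigma <= 0:
              return -np.inf
          inside = y <= r
          ll = stats.norm.logpdf(y[inside], scale=sigma).sum()
          ll += (~inside).sum() * np.log(stats.norm.sf(r, scale=sigma))
          return ll

      draws, cur = [], 1.0
      cur_ll = censored_loglik(cur)
      for _ in range(5000):                          # random-walk Metropolis
          prop = cur + rng.normal(0, 0.05)
          prop_ll = censored_loglik(prop)
          if np.log(rng.uniform()) < prop_ll - cur_ll:
              cur, cur_ll = prop, prop_ll
          draws.append(cur)

      print("posterior mean of sigma (censored):", np.mean(draws[1000:]))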
  16. By: Bert de Bruijn (Erasmus University Rotterdam); Philip Hans Franses (Erasmus University Rotterdam)
    Abstract: Earnings forecasts can be useful for investment decisions. Research on earnings forecasts has focused on forecast performance in relation to firm characteristics, on categorizing the analysts into groups with similar behaviour, and on the effect of an earnings announcement by the firm on future earnings forecasts. In this paper we investigate the factors that determine the value of the forecast, and the extent to which its timing can be modeled. We propose a novel methodology that allows for such an investigation. As an illustration we analyze within-year earnings forecasts for AMD in the period 1997 to 2011, where the data are obtained from the I/B/E/S database. Our empirical findings suggest clear drivers of the value and the timing of the earnings forecasts. We thus show that not only the forecasts themselves but also the timing of the quotes is predictable to some extent.
    Keywords: Earnings Forecasts; Earnings Announcements; Financial Markets; Financial Analysts
    JEL: G17 G24 M41
    Date: 2012–07–12
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:2012067&r=for
  17. By: Jiangyu Ji (VU University Amsterdam); Andre Lucas (VU University Amsterdam, and Duisenberg school of finance)
    Abstract: We propose a new semiparametric observation-driven volatility model where the form of the error density directly influences the volatility dynamics. This feature distinguishes our model from standard semiparametric GARCH models. The link between the estimated error density and the volatility dynamics follows from the application of the generalized autoregressive score framework of Creal, Koopman, and Lucas (2012). We provide simulated evidence for the estimation efficiency and forecast accuracy of the new model, particularly if errors are fat-tailed and possibly skewed. In an application to equity return data we find that the model also does well in density forecasting.
    Keywords: volatility clustering, Generalized Autoregressive Score model, kernel density estimation, density forecast evaluation
    JEL: C10 C14 C22
    Date: 2012–05–22
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:2012055&r=for
  18. By: Fady Barsoum (Department of Economics, University of Konstanz, Germany); Sandra Stankiewicz (Department of Economics, University of Konstanz, Germany)
    Abstract: For modelling mixed-frequency data with a business cycle pattern, we introduce the Markov-switching Mixed Data Sampling model with unrestricted lag polynomial (MS-U-MIDAS). Models of the MIDAS class usually use lag polynomials of a specific functional form, which impose some structure on the weights of the regressors included in the model. This may deteriorate the predictive power of the model if the imposed structure differs from the data generating process. When the difference between the available data frequencies is small and there is no risk of parameter proliferation, using an unrestricted lag polynomial might not only simplify the model estimation, but also improve its forecasting performance. We allow the parameters of the MIDAS model with unrestricted lag polynomial to change according to a Markov-switching scheme, in order to account for the business cycle pattern observed in many macroeconomic variables. We apply this model to a large dataset with the help of factor analysis. Monte Carlo experiments and an empirical forecasting comparison carried out for U.S. GDP growth show that models of the MS-U-MIDAS class exhibit similar or better nowcasting and forecasting performance than their counterparts with restricted lag polynomials.
    Keywords: Markov-switching, Business cycle, Mixed-frequency data analysis, Forecasting
    JEL: C22 C53 E37
    Date: 2013–05–08
    URL: http://d.repec.org/n?u=RePEc:knz:dpteco:1310&r=for
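    Illustration: the U-MIDAS design at the core of the model, in which each high-frequency lag gets its own coefficient rather than a restricted weighting polynomial; simulated monthly data are aligned to a quarterly target, and the Markov-switching layer is omitted.
      import numpy as np

      rng = np.random.default_rng(9)
      x = rng.normal(size=300)                       # monthly indicator, 100 quarters
      y = np.array([0.5 * x[3 * q] + 0.3 * x[3 * q + 1] + rng.normal(0, 0.5)
                    for q in range(100)])            # quarterly target (toy DGP)

      # U-MIDAS design: the 3 months of quarter q, each with a free coefficient
      X = np.column_stack([np.ones(100)] +
                          [x[3 * np.arange(100) + m] for m in range(3)])
      beta, *_ = np.linalg.lstsq(X, y, rcond=None)
      print("unrestricted monthly-lag coefficients:", beta[1:])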
  19. By: Roberto Casarin (University Ca' Foscari of Venice and GRETA); Stefano Grassi (CREATES, Aarhus University); Francesco Ravazzolo (Norges Bank, and BI Norwegian Business School); Herman K. van Dijk (Erasmus University Rotterdam, and VU University Amsterdam)
    Abstract: This paper presents the Matlab package DeCo (Density Combination), which is based on the paper by Billio et al. (2013), where a constructive Bayesian approach is presented for combining predictive densities originating from different models or other sources of information. The combination weights are time-varying and may depend on past predictive forecasting performance and other learning mechanisms. The core algorithm is the function DeCo, which applies banks of parallel Sequential Monte Carlo algorithms to filter the time-varying combination weights. The DeCo procedure has been implemented both for standard CPU computing and for Graphics Processing Unit (GPU) parallel computing. For the GPU implementation we use the Matlab Parallel Computing Toolbox and show how to use general-purpose GPU computing almost effortlessly. The GPU implementation speeds up execution by up to seventy times compared with a standard Matlab implementation on a multicore CPU. We show the use of the package and the computational gain of the GPU version through simulation experiments and empirical applications.
    Keywords: Density Forecast Combination, Sequential Monte Carlo, Parallel Computing, GPU, Matlab
    JEL: C11 C15 C53 E37
    Date: 2013–04–09
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:2013055&r=for
  20. By: Cars Hommes (University of Amsterdam); Mei Zhu (Shanghai University of Finance and Economics)
    Abstract: We propose behavioral learning equilibria as a plausible explanation of coordination of individual expectations and aggregate phenomena such as excess volatility in stock prices and high persistence in inflation. Boundedly rational agents use a simple univariate linear forecasting rule and correctly forecast the unconditional sample mean and first-order sample autocorrelation. In the long run, agents learn the best univariate linear forecasting rule, without fully recognizing the structure of the economy. The simplicity of behavioral learning equilibria makes coordination of individual expectations on such an aggregate outcome more likely. In a first application, an asset pricing model with AR(1) dividends, a unique behavioral learning equilibrium exists, characterized by high persistence and excess volatility, and it is stable under learning. In a second application, the New Keynesian Phillips curve, multiple equilibria co-exist, learning exhibits path dependence, and inflation may switch between low- and high-persistence regimes.
    Keywords: Bounded rationality; Stochastic consistent expectations equilibrium; Adaptive learning; Excess volatility; Inflation persistence
    JEL: E30 C62 D83 D84
    Date: 2013–01–14
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:2013014&r=for
  21. By: Michael McAleer (Erasmus University Rotterdam); Juan-Ángel Jiménez-Martín (Complutense University of Madrid); Teodosio Pérez-Amaral (Complutense University of Madrid)
    Abstract: The Basel II Accord requires that banks and other Authorized Deposit-taking Institutions (ADIs) communicate their daily risk forecasts to the appropriate monetary authorities at the beginning of each trading day, using one or more risk models to measure Value-at-Risk (VaR). The risk estimates of these models are used to determine capital requirements and associated capital costs of ADIs, depending in part on the number of previous violations, whereby realised losses exceed the estimated VaR. In this paper we define risk management in terms of choosing from a variety of risk models, and discuss the selection of optimal risk models. A new approach to model selection for predicting VaR is proposed, consisting of combining alternative risk models, and we compare conservative and aggressive strategies for choosing between VaR models. We then examine how different risk management strategies performed during the 2008-09 global financial crisis. These issues are illustrated using the Standard and Poor's 500 Composite Index.
    Keywords: Value-at-Risk (VaR), daily capital charges, violation penalties, optimizing strategy, risk forecasts, aggressive or conservative risk management strategies, Basel Accord, global financial crisis
    JEL: G32 G11 G17 C53 C22
    Date: 2013–01–08
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:2013010&r=for
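    Illustration: a toy version of choosing between conservative and aggressive VaR strategies across models: the conservative strategy reports the most extreme VaR among candidate models, the aggressive one the least extreme; the three model forecasts are placeholders.
      import numpy as np

      # one day's 99% VaR forecasts (as negative returns) from three hypothetical models
      var_forecasts = np.array([-0.031, -0.024, -0.042])

      conservative = var_forecasts.min()    # largest capital charge, fewest violations
      aggressive = var_forecasts.max()      # smallest capital charge, most violation risk
      median_model = np.median(var_forecasts)

      print(f"conservative {conservative:.3f}, aggressive {aggressive:.3f}, "
            f"median {median_model:.3f}")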
  22. By: Chia-Lin Chang (National Chung Hsing University); Michael McAleer (Erasmus University Rotterdam, Complutense University of Madrid, Kyoto University)
    Abstract: Experts possess knowledge and information that are not publicly available. The paper is concerned with forecasting academic journal quality and research impact using a survey of international experts from a national project on ranking academic finance journals in Taiwan. A comparison is made with publicly available bibliometric data, namely the Thomson Reuters ISI Web of Science citations database (hereafter ISI) for the Business - Finance (hereafter Finance) category. The paper analyses the leading international journals in Finance using expert scores and quantifiable Research Assessment Measures (RAMs), and highlights the similarities and differences in the expert scores and alternative RAMs, where the RAMs are based on alternative transformations of citations taken from the ISI database. Alternative RAMs may be calculated annually or updated daily to answer the perennial questions as to When, Where and How (frequently) published papers are cited (see Chang et al. (2011a, b, c)). The RAMs include the most widely used RAM, namely the classic 2-year impact factor including journal self citations (2YIF), 2-year impact factor excluding journal self citations (2YIF*), 5-year impact factor including journal self citations (5YIF), Immediacy (or zero-year impact factor (0YIF)), Eigenfactor, Article Influence, C3PO (Citation Performance Per Paper Online), h-index, PI-BETA (Papers Ignored - By Even The Authors), 2-year Self-citation Threshold Approval Ratings (2Y-STAR), Historical Self-citation Threshold Approval Ratings (H-STAR), Impact Factor Inflation (IFI), and Cited Article Influence (CAI). As data are not available for 5YIF, Article Influence and CAI for 13 of the leading 34 journals considered, 10 RAMs are analysed for 21 highly-cited journals in Finance. The harmonic mean of the ranks of the 10 RAMs for the 34 highly-cited journals is also presented. It is shown that emphasizing the 2-year impact factor of a journal, which partly answers the question as to When published papers are cited, to the exclusion of other informative RAMs, which answer Where and How (frequently) published papers are cited, can lead to a distorted evaluation of journal impact and influence relative to the Harmonic Mean rankings. A linear regression model is used to forecast expert scores on the basis of RAMs that capture journal impact, journal policy, the number of high quality papers, and quantitative information about a journal. The robustness of the rankings is also analysed.
    Keywords: Expert scores, Journal quality, RAMs, Impact factor, IFI, C3PO, PI-BETA, STAR, Eigenfactor, Article Influence, h-index, harmonic mean, robustness
    JEL: C18 C81 C83
    Date: 2013–02–18
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:2013029&r=for
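    Illustration: the harmonic mean of ranks used to aggregate the RAMs: for a journal with ranks r_1, ..., r_n across n measures, HM = n / sum(1/r_i); the rank matrix below is invented.
      import numpy as np

      ranks = np.array([[1, 2, 1, 3],      # journal A's rank on 4 RAMs
                        [2, 1, 4, 1],      # journal B
                        [3, 3, 2, 2]])     # journal C
      hm = ranks.shape[1] / (1.0 / ranks).sum(axis=1)
      print("harmonic-mean ranks:", np.round(hm, 2))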
  23. By: Adriana Cornea (University of Exeter); Cars Hommes (University of Amsterdam); Domenico Massaro (University of Amsterdam)
    Abstract: In this paper we develop and estimate a behavioral model of inflation dynamics with monopolistic competition, staggered price setting and heterogeneous firms. In our stylized framework there are two groups of price setters: fundamentalists and naive agents. Fundamentalists are forward-looking in the sense that they believe in a present-value relationship between inflation and real marginal costs, while naive agents are backward-looking, using the simplest rule of thumb, naive expectations, to forecast future inflation. Agents are allowed to switch between these forecasting strategies conditional on their recent relative forecasting performance. The estimation results support behavioral heterogeneity and the evolutionary switching mechanism. We show that there is substantial time variation in the weights of forward-looking and backward-looking behavior. Although on average the majority of firms use the simple backward-looking rule, the market has phases in which it is dominated by either the fundamentalists or the naive agents.
    Keywords: Inflation, Phillips Curve, Heterogeneous Expectations, Evolutionary Selection
    JEL: E31 E52 C22
    Date: 2013–01–14
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:2013015&r=for
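    Illustration: a toy evolutionary switching mechanism in the spirit of the model: a mean-reverting fundamentalist rule and a naive rule receive weights from a logit (discrete choice) function of their recent squared forecast errors; the inflation process and all parameters are illustrative.
      import numpy as np

      rng = np.random.default_rng(10)
      beta, pi_star = 2.0, 2.0                # intensity of choice, fundamental rate
      pi = [2.0, 2.1]
      w_path = []
      for t in range(2, 300):
          fund_fc = pi_star + 0.5 * (pi[t - 1] - pi_star)   # mean-reverting forecast
          naive_fc = pi[t - 1]                              # naive forecast
          # fitness: negative squared error of last period's forecasts
          e_fund = -(pi[t - 1] - (pi_star + 0.5 * (pi[t - 2] - pi_star))) ** 2
          e_naive = -(pi[t - 1] - pi[t - 2]) ** 2
          w = 1 / (1 + np.exp(-beta * (e_fund - e_naive)))  # weight on fundamentalists
          w_path.append(w)
          expected = w * fund_fc + (1 - w) * naive_fc
          pi.append(0.9 * expected + 0.1 * pi_star + rng.normal(0, 0.1))

      print("mean fundamentalist weight:", np.mean(w_path))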
  24. By: Siem Jan Koopman (VU University Amsterdam); Rutger Lit (VU University Amsterdam)
    Abstract: Attack and defense strengths of football teams vary over time due to changes in the teams of players or their managers. We develop a statistical model for the analysis and forecasting of football match results which are assumed to come from a bivariate Poisson distribution with intensity coefficients that change stochastically over time. This development presents a novelty in the statistical time series analysis of match results from football or other team sports. Our treatment is based on state space and importance sampling methods which are computationally efficient. The out-of-sample performance of our methodology is verified in a betting strategy that is applied to the match outcomes from the 2010/11 and 2011/12 seasons of the English Premier League. We show that our statistical modeling framework can produce a significant positive return over the bookmaker's odds.
    Keywords: Betting, Importance sampling, Kalman filter smoother, Non-Gaussian multivariate time series models, Sport statistics
    JEL: C32 C35
    Date: 2012–09–27
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:2012099&r=for
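    Illustration: the static bivariate Poisson distribution underlying the model, built from independent Poisson components X = U1 + W and Y = U2 + W so that the common shock W induces positive correlation between home and away goals; the fixed intensities below are illustrative, whereas the paper lets them vary stochastically over time.
      from math import exp, factorial

      def biv_poisson_pmf(x, y, l1, l2, l3):
          # P(X=x, Y=y) via the trivariate-reduction formula
          s = sum(l1 ** (x - k) * l2 ** (y - k) * l3 ** k /
                  (factorial(x - k) * factorial(y - k) * factorial(k))
                  for k in range(min(x, y) + 1))
          return exp(-(l1 + l2 + l3)) * s

      l_home, l_away, l_common = 1.4, 0.9, 0.15
      grid = [(x, y) for x in range(11) for y in range(11)]
      p_home_win = sum(biv_poisson_pmf(x, y, l_home, l_away, l_common)
                       for x, y in grid if x > y)
      p_draw = sum(biv_poisson_pmf(x, y, l_home, l_away, l_common)
                   for x, y in grid if x == y)
      print(f"P(home win) = {p_home_win:.3f}, P(draw) = {p_draw:.3f}")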
  25. By: Nalan Basturk (Erasmus University Rotterdam); Cem Cakmakli (University of Amsterdam); Pinar Ceyhan (Erasmus University Rotterdam); Herman K. van Dijk (Erasmus University Rotterdam, and VU University Amsterdam)
    Abstract: Changing time series properties of US inflation and economic activity are analyzed within a class of extended Phillips Curve (PC) models. First, the misspecification effects of mechanical removal of low-frequency movements of these series on posterior inference of a basic PC model are analyzed using a Bayesian simulation-based approach. Next, structural time series models that describe changing patterns in low and high frequencies, and backward- as well as forward-looking inflation expectation mechanisms, are incorporated in the class of extended PC models. Empirical results indicate that the proposed models compare favorably with existing Bayesian Vector Autoregressive and Stochastic Volatility models in terms of fit and predictive performance. Weak identification and dynamic persistence appear less important when time-varying dynamics of high and low frequencies are carefully modeled. Modeling inflation expectations using survey data and adding level shifts and stochastic volatility substantially improves in-sample fit and out-of-sample predictions. No evidence is found of a long-run stable cointegration relation between US inflation and marginal costs. Tails of the complete predictive distributions indicate an increase in the probability of disinflation in recent years.
    Keywords: New Keynesian Phillips curve, unobserved components, level shifts, inflation expectations
    JEL: C11 C32 E31 E37
    Date: 2013–01–10
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:2013011&r=for
  26. By: Julia Muller (Erasmus University Rotterdam); Christiane Schwieren (University of Heidelberg)
    Abstract: Growing interest in using personality variables in economic research raises the question of whether personality as measured by psychology is useful for predicting economic behavior. Is it reasonable to expect values on personality scales to be predictive of behavior in economic games? It is undoubted that personality can influence large-scale economic outcomes. Whether personality variables can also be used to understand micro-behavior in economic games is, however, less clear. We discuss reasons for and against this assumption and test in our own experiment whether, and which, personality factors are useful in predicting behavior in the trust (investment) game. We also use the trust game to examine how personality measures fare in predicting behavior when situational constraints vary in strength. This approach can help economists to better understand what to expect from the inclusion of personality variables in their models and experiments, and where further research might be useful and needed.
    Keywords: Personality, Big Five, Five Factor Model, Incentives, Experiment, Trust Game
    JEL: C72 C91 D03
    Date: 2012–03–26
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:2012028&r=for

This nep-for issue is ©2013 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.