NEP: New Economics Papers on Forecasting
By: | Marcellino, Massimiliano |
Abstract: | A theoretical model for growth or inflation should be able to reproduce the empirical features of these variables better than competing alternatives. Therefore, it is common practice in the literature, whenever a new model is suggested, to compare its performance with that of a benchmark model. However, while the theoretical models become more and more sophisticated, the benchmark typically remains a simple linear time series model. Recent examples are provided, e.g., by articles in the real business cycle literature or by New Keynesian studies on inflation persistence. While a time series model can provide a reasonable benchmark to evaluate the value added of economic theory relative to the pure explanatory power of the past behavior of the variable, recent developments in time series analysis suggest that more sophisticated time series models could provide more serious benchmarks for economic models. In this paper we evaluate whether these complicated time series models can really outperform standard linear models for GDP growth and inflation, and should therefore replace them as benchmarks for economic theory based models. Since a complicated model specification can over-fit in sample, i.e., the model can spuriously perform very well compared with simpler alternatives, we conduct the model comparison based on out-of-sample forecasting performance. We consider a large variety of models and evaluation criteria, using real-time data and a sophisticated bootstrap algorithm to evaluate the statistical significance of our results. Our main conclusion is that, in general, linear time series models can hardly be beaten if they are carefully specified, and therefore still provide a good benchmark for theoretical models of growth and inflation. However, we also identify some important cases where the adoption of a more complicated benchmark can alter the conclusions of economic analyses about the driving forces of GDP growth and inflation. Therefore, comparing theoretical models with more sophisticated time series benchmarks as well can guarantee more robust conclusions. |
Keywords: | growth; inflation; non-linear models; time-varying models |
JEL: | C2 C53 E30 |
Date: | 2006–12 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:6012&r=for |
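The comparison the abstract describes — pitting a carefully specified linear model against a nonlinear alternative on out-of-sample forecasts, then bootstrapping the significance of the loss differential — can be sketched in a few lines. The Python fragment below is a minimal illustration of that general procedure on simulated data; the AR(1) benchmark, the threshold alternative, and the moving-block bootstrap are stand-ins chosen for brevity, not Marcellino's actual specifications or bootstrap algorithm.

    # Out-of-sample comparison of a linear AR(1) benchmark against a
    # two-regime (threshold) alternative, with a moving-block bootstrap
    # test on the squared-error loss differential. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)

    def ar1_forecast(history):
        """One-step forecast from an OLS AR(1) fit on the history."""
        y = np.asarray(history)
        x, z = y[:-1], y[1:]
        X = np.column_stack([np.ones_like(x), x])
        beta = np.linalg.lstsq(X, z, rcond=None)[0]
        return beta[0] + beta[1] * y[-1]

    def threshold_forecast(history):
        """One-step forecast from a threshold AR(1): separate fits below
        and above the sample median of the lagged variable."""
        y = np.asarray(history)
        x, z = y[:-1], y[1:]
        med = np.median(x)
        preds = {}
        for name, mask in [("lo", x <= med), ("hi", x > med)]:
            X = np.column_stack([np.ones(mask.sum()), x[mask]])
            beta = np.linalg.lstsq(X, z[mask], rcond=None)[0]
            preds[name] = beta[0] + beta[1] * y[-1]
        return preds["lo"] if y[-1] <= med else preds["hi"]

    # Simulated "growth" series: a persistent AR(1).
    T = 200
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = 0.5 + 0.4 * y[t - 1] + rng.normal(scale=0.7)

    # Recursive one-step forecasts over the last 50 observations.
    e_lin, e_nl = [], []
    for t in range(T - 50, T):
        e_lin.append(y[t] - ar1_forecast(y[:t]))
        e_nl.append(y[t] - threshold_forecast(y[:t]))
    d = np.square(e_lin) - np.square(e_nl)  # loss differential

    # Moving-block bootstrap of the mean loss differential, centered
    # under the null of equal forecast accuracy.
    def block_bootstrap_means(d, block=5, reps=999):
        n = len(d)
        out = np.empty(reps)
        for r in range(reps):
            starts = rng.integers(0, n - block + 1, size=n // block + 1)
            out[r] = np.concatenate([d[s:s + block] for s in starts])[:n].mean()
        return out

    boot = block_bootstrap_means(d - d.mean())
    p_value = np.mean(np.abs(boot) >= np.abs(d.mean()))
    print(f"mean loss differential {d.mean():.4f}, bootstrap p-value {p_value:.3f}")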
By: | Spencer D. Krane |
Abstract: | Economic activity depends on agents' real-time beliefs regarding the persistence of the shocks they currently perceive to be hitting the economy. This paper uses an unobserved components model of forecast revisions to examine how the professional forecasters comprising the Blue Chip Economic Consensus have viewed such shocks to GDP over the past twenty years. The model estimates that these forecasters attribute more of the variance in shocks to GDP to permanent factors than to transitory developments. Both shocks are significantly correlated with incoming high-frequency indicators of economic activity; but for the permanent component, the correlation is driven by recessions or other periods when activity was weak. The forecasters' shocks also differ noticeably from those generated by some simple econometric models. Taken together, the results suggest that agents' expectations likely are based on broader information sets than those used to specify most empirical models and that the mechanisms generating expectations may differ with the perceived state of the business cycle. |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedhwp:wp-06-19&r=for |
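An unobserved components model of the kind the abstract refers to decomposes an observed series into a permanent (random-walk) level and transitory noise, which a Kalman filter can separate period by period. The sketch below shows the mechanics for a local-level model on simulated data; it is not Krane's specification of forecast revisions, and the variances are set by assumption rather than estimated.

    # Permanent/transitory decomposition with a local-level model:
    #   y_t = mu_t + e_t (transitory),  mu_t = mu_{t-1} + u_t (permanent),
    # filtered with a scalar Kalman filter. Illustrative only.
    import numpy as np

    def local_level_filter(y, var_perm, var_trans):
        mu, P = y[0], var_trans          # simple initialization
        level = np.empty(len(y))
        for t, obs in enumerate(y):
            P = P + var_perm             # predict: level is a random walk
            K = P / (P + var_trans)      # Kalman gain
            mu = mu + K * (obs - mu)     # update on the observation
            P = (1 - K) * P
            level[t] = mu
        return level

    rng = np.random.default_rng(1)
    T = 120
    perm = np.cumsum(rng.normal(scale=0.6, size=T))   # permanent component
    y = perm + rng.normal(scale=0.4, size=T)          # plus transitory noise

    level = local_level_filter(y, var_perm=0.36, var_trans=0.16)
    transitory = y - level                            # imputed transitory shocks
    print("std of imputed transitory component:", round(transitory.std(), 2))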
By: | David Jamieson Bolder |
Abstract: | Modelling term-structure dynamics is an important component in measuring and managing the exposure of portfolios to adverse movements in interest rates. Model selection from the enormous term-structure literature is far from obvious and, to make matters worse, a number of recent papers have called into question the ability of some of the more popular models to adequately describe interest rate dynamics. The author, in attempting to find a relatively simple term-structure model that does a reasonable job of describing interest rate dynamics for risk-management purposes, examines two sets of models. The first set involves variations of the Gaussian affine term-structure model by modestly building on the recent work of Dai and Singleton (2000) and Duffee (2002). The second set includes and extends Diebold and Li (2003). After working through the mathematical derivation and estimation of these models, the author compares and contrasts their performance on a number of in- and out-of-sample forecasting metrics, their ability to capture deviations from the expectations hypothesis, and their predictions in a simple portfolio-optimization setting. He finds that the extended Nelson-Siegel model and an associated generalization, what he terms the "exponential-spline model," provide the most appealing modelling alternatives when considering the various model criteria. |
Keywords: | Interest rates; Econometric and statistical methods; Financial markets |
JEL: | C0 C6 E4 G1 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:bca:bocawp:06-48&r=for |
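The Diebold-Li treatment of the Nelson-Siegel curve that the second set of models builds on is easy to sketch: fix the decay parameter, recover level, slope, and curvature factors by OLS each period, and forecast the factors with simple autoregressions. The Python fragment below illustrates this on a toy panel of yields; it shows the baseline model class only, not Bolder's extended Nelson-Siegel or "exponential-spline" variants.

    # Diebold-Li: Nelson-Siegel factor loadings with a fixed decay, OLS
    # factor extraction per period, AR(1) forecasts of the factors.
    import numpy as np

    LAM = 0.0609                      # Diebold-Li decay (maturities in months)

    def ns_loadings(tau):
        """Nelson-Siegel loadings [level, slope, curvature] at maturities tau."""
        x = LAM * tau
        slope = (1 - np.exp(-x)) / x
        return np.column_stack([np.ones_like(tau), slope, slope - np.exp(-x)])

    def fit_factors(yields, tau):
        """OLS estimate of the three factors for one observed curve."""
        return np.linalg.lstsq(ns_loadings(tau), yields, rcond=None)[0]

    # Toy panel of curves generated from slowly mean-reverting factors.
    rng = np.random.default_rng(2)
    tau = np.array([3.0, 12.0, 24.0, 60.0, 120.0])
    T, mean = 100, np.array([5.0, -1.0, 0.5])
    factors = np.tile(mean, (T, 1))
    for t in range(1, T):
        factors[t] = mean + 0.95 * (factors[t - 1] - mean) \
                     + rng.normal(scale=0.1, size=3)
    curves = factors @ ns_loadings(tau).T + rng.normal(scale=0.05, size=(T, 5))

    # Re-estimate the factors curve by curve, forecast each with an AR(1).
    est = np.array([fit_factors(c, tau) for c in curves])
    fcast = np.empty(3)
    for i in range(3):
        x, z = est[:-1, i], est[1:, i]
        X = np.column_stack([np.ones_like(x), x])
        a, b = np.linalg.lstsq(X, z, rcond=None)[0]
        fcast[i] = a + b * est[-1, i]
    print("one-step yield-curve forecast:", ns_loadings(tau) @ fcast)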
By: | Gregory H. Bauer; Clara Vega |
Abstract: | Existing studies using low-frequency data have found that macroeconomic shocks contribute little to international stock market covariation. However, these papers have not accounted for the presence of asymmetric information where sophisticated investors generate private information about the fundamentals that drive returns in many countries. In this paper, we use a new microstructure data set to better identify the effects of private and public information shocks about U.S. interest rates and equity returns. High-frequency private and public information shocks help forecast domestic money and equity returns over daily and weekly intervals. In addition, these shocks are components of factors that are priced in a model of the cross section of international returns. Linking private information to U.S. macroeconomic factors is useful for many domestic and international asset pricing tests. |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedgif:872&r=for |
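A standard way to check whether a candidate factor is "priced in a model of the cross section", as the abstract puts it, is a two-pass regression: time-series regressions give each portfolio's exposure to the factor, and a cross-sectional regression of average returns on those exposures estimates the risk premium. The Python sketch below illustrates that generic two-pass logic on simulated data; it is a textbook stand-in, not Bauer and Vega's model, data set, or identification of private-information shocks.

    # Two-pass cross-sectional pricing test on simulated portfolio returns,
    # with a single candidate factor standing in for an information shock.
    import numpy as np

    rng = np.random.default_rng(3)
    T, N = 250, 10                                  # periods, test portfolios
    factor = rng.normal(size=T)                     # candidate priced factor
    true_beta = np.linspace(0.2, 1.4, N)
    premium = 0.3                                   # true price of factor risk
    returns = true_beta * premium + np.outer(factor, true_beta) \
              + rng.normal(size=(T, N))

    # Pass 1: time-series regressions give each portfolio's factor beta.
    X = np.column_stack([np.ones(T), factor])
    betas = np.linalg.lstsq(X, returns, rcond=None)[0][1]

    # Pass 2: cross-sectional regression of mean returns on the betas.
    Z = np.column_stack([np.ones(N), betas])
    lam = np.linalg.lstsq(Z, returns.mean(axis=0), rcond=None)[0]
    print(f"estimated factor risk premium: {lam[1]:.3f} (true {premium})")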
By: | John Geweke; Joel Horowitz; M. Hashem Pesaran |
Abstract: | As a unified discipline, econometrics is still relatively young and has been transforming and expanding very rapidly over the past few decades. Major advances have taken place in the analysis of cross-sectional data by means of semi-parametric and non-parametric techniques. Heterogeneity of economic relations across individuals, firms and industries is increasingly acknowledged, and attempts have been made to take it into account either by integrating out its effects or by modeling the sources of heterogeneity when suitable panel data exist. The counterfactual considerations that underlie policy analysis and treatment evaluation have been given a more satisfactory foundation. New time series econometric techniques have been developed and employed extensively in the areas of macroeconometrics and finance. Non-linear econometric techniques are used increasingly in the analysis of cross section and time series observations. Applications of Bayesian techniques to econometric problems have been given new impetus largely thanks to advances in computer power and computational techniques. The use of Bayesian techniques has in turn provided investigators with a unifying framework where the tasks of forecasting, decision making, model evaluation and learning can be considered as parts of the same interactive and iterative process, thus paving the way for establishing the foundation of "real-time econometrics". This paper attempts to provide an overview of some of these developments. |
Keywords: | history of econometrics, microeconometrics, macroeconometrics, Bayesian econometrics, nonparametric and semi-parametric analysis |
JEL: | C10 C20 C30 C40 C50 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_1870&r=for |