nep-ets New Economics Papers
on Econometric Time Series
Issue of 2006‒04‒01
nine papers chosen by
Yong Yin
SUNY at Buffalo

  1. Combining forecasts from nested models By Todd E. Clark; Michael W. McCracken
  2. Impulse Response Functions from Structural Dynamic Factor Models: A Monte Carlo Evaluation By George Kapetanios and Massimiliano Marcellino
  3. Reconciling the Return Predictability Evidence By Martin Lettau; Stijn Van Nieuwerburgh
  4. Variation, jumps, market frictions and high frequency data in financial econometrics By Ole E. Barndorff-Nielsen; Neil Shephard
  5. Stochastic Volatility By Neil Shephard
  6. On a multi-timescale statistical feedback model for volatility fluctuations By Lisa Borland; Jean-Philippe Bouchaud
  7. Experts' earning forecasts: bias, herding and gossamer information By Olivier Guedj; Jean-Philippe Bouchaud
  8. Large dimension forecasting models and random singular value spectra By Jean-Philippe Bouchaud; Laurent Laloux; M. Augusta Miceli; Marc Potters
  9. Testing for nonlinearity in mean in the presence of heteroskedasticity By Stan Hurn; Ralf Becker

  1. By: Todd E. Clark; Michael W. McCracken
    Abstract: Motivated by the common finding that linear autoregressive models forecast better than models that incorporate additional information, this paper presents analytical, Monte Carlo, and empirical evidence on the effectiveness of combining forecasts from nested models. In our analytical framework, the unrestricted model is true, but as the sample size grows, the DGP converges to the restricted model. This approach captures the practical reality that the predictive content of variables of interest is often low. We derive MSE-minimizing weights for combining the restricted and unrestricted forecasts. In the Monte Carlo and empirical analysis, we compare the effectiveness of our combination approach against related alternatives, such as Bayesian estimation.
    Date: 2006
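    A minimal numerical sketch of the combination idea in this abstract (a toy simulated DGP with low predictive content and a weight estimated from past forecast errors — our own illustrative estimator, not the authors' analytical weights):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a target with weak predictive content in x (mirrors the
# "DGP close to the restricted model" setting described in the abstract).
T = 400
x = rng.standard_normal(T)
y = 0.1 * x + rng.standard_normal(T)  # small true coefficient

# Restricted forecast: unconditional mean; unrestricted: OLS on x.
# Both estimated recursively, forecasts made out of sample.
f_r, f_u, actual = [], [], []
for t in range(100, T - 1):
    f_r.append(y[:t].mean())
    slope, intercept = np.polyfit(x[:t], y[:t], 1)
    f_u.append(slope * x[t] + intercept)
    actual.append(y[t])
f_r, f_u, actual = map(np.array, (f_r, f_u, actual))

# MSE-minimizing weight on the unrestricted forecast, estimated from the
# realized forecast errors of the two models.
e_r, e_u = actual - f_r, actual - f_u
alpha = np.dot(e_r, e_r - e_u) / np.dot(e_r - e_u, e_r - e_u)
alpha = min(max(alpha, 0.0), 1.0)  # clip to [0, 1]
f_c = (1 - alpha) * f_r + alpha * f_u

mse = lambda e: np.mean(e ** 2)
print(mse(e_r), mse(e_u), mse(actual - f_c))
```

By construction the combined forecast cannot do worse in-sample than the better of the two component forecasts; the interesting question the paper studies is how such weights behave out of sample.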
  2. By: George Kapetanios and Massimiliano Marcellino
    Abstract: The estimation of structural dynamic factor models (DFMs) for large sets of variables is attracting considerable attention. In this paper we briefly review the underlying theory and then compare the impulse response functions resulting from two alternative estimation methods for the DFM. Finally, as an example, we reconsider the issue of the identification of the driving forces of the US economy, using data for about 150 macroeconomic variables.
  3. By: Martin Lettau; Stijn Van Nieuwerburgh
    Abstract: Evidence of stock return predictability by financial ratios is still controversial, as documented by inconsistent results for in-sample and out-of-sample regressions and by substantial parameter instability. This paper shows that these seemingly incompatible results can be reconciled if the assumption of a fixed steady-state mean of the economy is relaxed. We find strong empirical evidence in support of shifts in the steady-state and propose simple methods to adjust financial ratios for such shifts. The forecasting relationship of adjusted price ratios and future returns is statistically significant and stable over time. We also show that shifts in the steady-state are responsible for the parameter instability and poor out-of-sample performance of unadjusted price ratios that are found in the data. Our conclusions hold for a variety of financial ratios and are robust to changes in the econometric technique used to estimate shifts in the steady-state.
    JEL: G1 G12 G11 C53
    Date: 2006–03
  4. By: Ole E. Barndorff-Nielsen (University of Aarhus); Neil Shephard (Nuffield College, Oxford University)
    Abstract: We will review the econometrics of non-parametric estimation of the components of the variation of asset prices. This very active literature has been stimulated by the recent advent of complete records of transaction prices, quote data and order books. In our view the interaction of the new data sources with new econometric methodology is leading to a paradigm shift in one of the most important areas in econometrics: volatility measurement, modelling and forecasting. We will describe this new paradigm which draws together econometrics with arbitrage free financial economics theory. Perhaps the two most influential papers in this area have been Andersen, Bollerslev, Diebold and Labys (2001) and Barndorff-Nielsen and Shephard (2002), but many other papers have made important contributions. This work is likely to have deep impacts on the econometrics of asset allocation and risk management. One of our observations will be that inferences based on these methods, computed from observed market prices and so under the physical measure, are also valid as inferences under all equivalent measures. This puts this subject also at the heart of the econometrics of derivative pricing. One of the most challenging problems in this context is dealing with various forms of market frictions, which obscure the efficient price from the econometrician. Here we will characterise four types of statistical models of frictions and discuss how econometricians have been attempting to overcome them.
    Date: 2005–07–14
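    Two standard estimators from this literature are realized variance (which captures total quadratic variation, jumps included) and bipower variation (which is robust to jumps and so isolates the diffusion component). A small simulated illustration (our own toy setup, not drawn from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate one "day" of high-frequency log returns: a Brownian component
# with volatility sigma plus a single jump (illustrative parameters).
n = 780                      # e.g. 30-second returns over 6.5 hours
sigma = 0.01                 # daily volatility of the continuous part
dt = 1.0 / n
r = sigma * np.sqrt(dt) * rng.standard_normal(n)
r[n // 2] += 0.01            # add a jump

# Realized variance estimates total quadratic variation (diffusion + jumps).
rv = np.sum(r ** 2)

# Bipower variation, BV = (pi/2) * sum |r_i| |r_{i-1}|, is robust to jumps
# and estimates the diffusion part only.
bv = (np.pi / 2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))

# The difference RV - BV is a non-parametric estimate of the jump
# contribution to total variation.
print(rv, bv, rv - bv)
```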
  5. By: Neil Shephard (Nuffield College, Oxford University)
    Date: 2005–07–01
  6. By: Lisa Borland (Evnine-Vaughan Associates, Inc.); Jean-Philippe Bouchaud (Science & Finance, Capital Fund Management; CEA Saclay;)
    Abstract: We study, both analytically and numerically, an ARCH-like, multiscale model of volatility, which assumes that the volatility is governed by the observed past price changes on different time scales. With a power-law distribution of time horizons, we obtain a model that captures most stylized facts of financial time series: Student-like distribution of returns with a power-law tail, long-memory of the volatility, slow convergence of the distribution of returns towards the Gaussian distribution, multifractality and anomalous volatility relaxation after shocks. At variance with recent multifractal models that are strictly time reversal invariant, the model also reproduces the time asymmetry of financial time series: past large scale volatility influences future small scale volatility. In order to quantitatively reproduce all empirical observations, the parameters must be chosen such that our model is close to an instability, meaning that (a) the feedback effect is important and substantially increases the volatility, and (b) the model is intrinsically difficult to calibrate because of the very long range nature of the correlations. By imposing the consistency of the model predictions with a large set of different empirical observations, a reasonable range of the parameter values can be determined. The model can easily be generalized to account for jumps, skewness and multiasset correlations.
    JEL: G10
    Date: 2005–07
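    The feedback mechanism described in this abstract can be sketched numerically: today's variance responds to squared past price changes over many horizons, with power-law weights. This is a simplified toy parameterization of the mechanism, not the paper's exact model or calibration:

```python
import numpy as np

rng = np.random.default_rng(2)

T = 5000
taus = np.array([1, 2, 4, 8, 16, 32, 64, 128])   # time horizons
weights = 0.2 * taus ** -1.0                      # power-law weights
sigma0 = 0.01                                     # baseline volatility

x = np.zeros(T)   # log price
r = np.zeros(T)   # returns
for t in range(1, T):
    var = sigma0 ** 2
    for tau, w in zip(taus, weights):
        if t > tau:
            # squared past price change over horizon tau, normalized by tau
            var += w * (x[t - 1] - x[t - 1 - tau]) ** 2 / tau
    r[t] = np.sqrt(var) * rng.standard_normal()
    x[t] = x[t - 1] + r[t]

# The feedback fattens the tails: excess kurtosis of returns exceeds the
# Gaussian value of zero.
k = np.mean(r ** 4) / np.mean(r ** 2) ** 2 - 3
print(k)
```

The sum of the feedback weights (about 0.4 here) controls the distance to the instability the abstract refers to; pushing it toward 1 makes the tails heavier and the volatility correlations longer-ranged.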
  7. By: Olivier Guedj (Capital Fund Management); Jean-Philippe Bouchaud (Science & Finance, Capital Fund Management; CEA Saclay;)
    Abstract: We study the statistics of earnings forecasts of US, EU, UK and JP stocks during the period 1987-2004. We confirm, on this large data set, that financial analysts are on average over-optimistic and show a pronounced herding behavior. These effects are time dependent, and were particularly strong in the early nineties and during the Internet bubble. We furthermore find that their forecast ability is, in relative terms, quite poor and comparable in quality, a year ahead, to the simplest 'no change' forecast. As a result of herding, analysts agree with each other five to ten times more than with the actual result. We show that significant differences exist between US stocks and EU stocks, which may partly be explained as a company size effect. Interestingly, herding effects appear to be stronger in the US than in the Eurozone. Finally, we study the correlation of errors across stocks and show that significant sectorization occurs, some sectors being easier to predict than others. These results add to the list of arguments suggesting that the tenets of Efficient Market Theory are untenable.
    JEL: G10
    Date: 2004–10
  8. By: Jean-Philippe Bouchaud (Science & Finance, Capital Fund Management; CEA Saclay;); Laurent Laloux (Science & Finance, Capital Fund Management); M. Augusta Miceli; Marc Potters (Science & Finance, Capital Fund Management)
    Abstract: We present a general method to detect and extract from a finite time sample statistically meaningful correlations between input and output variables of large dimensionality. Our central result is derived from the theory of free random matrices, and gives an explicit expression for the interval where singular values are expected in the absence of any true correlations between the variables under study. Our result can be seen as the natural generalization of the Marčenko-Pastur distribution for the case of rectangular correlation matrices. We illustrate the interest of our method on a set of macroeconomic time series.
    Date: 2005–12
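    The idea can be illustrated numerically: under the null of no true correlations, the singular values of the empirical cross-correlation matrix fall in a bounded "noise" interval, and a genuine common factor produces a singular value outside it. The sketch below estimates the noise band by simulation rather than using the paper's closed-form expression:

```python
import numpy as np

rng = np.random.default_rng(3)

T, n_in, n_out = 500, 20, 10   # sample length, input dim, output dim

def top_singular_value(X, Y):
    """Largest singular value of the empirical cross-correlation matrix."""
    Xs = (X - X.mean(0)) / X.std(0)
    Ys = (Y - Y.mean(0)) / Y.std(0)
    C = Xs.T @ Ys / len(X)
    return np.linalg.svd(C, compute_uv=False)[0]

# Null benchmark: independent inputs and outputs. Repeated draws trace out
# the bulk of "noise" singular values.
null_top = [top_singular_value(rng.standard_normal((T, n_in)),
                               rng.standard_normal((T, n_out)))
            for _ in range(200)]
threshold = np.quantile(null_top, 0.99)

# Data with one genuine common factor linking inputs and outputs: its
# singular value escapes the noise band.
f = rng.standard_normal(T)
X = 0.5 * f[:, None] + rng.standard_normal((T, n_in))
Y = 0.5 * f[:, None] + rng.standard_normal((T, n_out))
s = top_singular_value(X, Y)

print(threshold, s)
```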
  9. By: Stan Hurn; Ralf Becker (School of Economics and Finance, Queensland University of Technology)
    Date: 2006

This nep-ets issue is ©2006 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.