nep-ets New Economics Papers
on Econometric Time Series
Issue of 2007‒04‒21
five papers chosen by
Yong Yin
SUNY at Buffalo

  1. The relationship between ARIMA-GARCH and unobserved component models with GARCH disturbances By Santiago Pellegrini; Esther Ruiz; Antoni Espasa
  2. An Embarrassment of Riches: Forecasting Using Large Panels By Eklund, Jana; Karlsson, Sune
  4. The nearest correlation matrix problem: Solution by differential evolution method of global optimization By Mishra, SK
  5. Evidence Of Chaotic Behavior In American Stock Markets By Espinosa Méndez, Christian

  1. By: Santiago Pellegrini; Esther Ruiz; Antoni Espasa
    Abstract: The objective of this paper is to analyze the consequences of fitting ARIMA-GARCH models to series generated by conditionally heteroscedastic unobserved component models. Focusing on the local level model, we show that the heteroscedasticity is weaker in the ARIMA than in the local level disturbances. In certain cases, the IMA(1,1) model could even be wrongly seen as homoscedastic. Next, with regard to forecasting performance, we show that the prediction intervals based on the ARIMA model can be inappropriate as they incorporate the unit root while the intervals of the local level model can converge to the homoscedastic intervals when the heteroscedasticity appears only in the transitory noise. All the analytical results are illustrated with simulated and real time series.
    Date: 2007–04
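The reduced-form link this abstract builds on — a local level model has an IMA(1,1) reduced form — can be checked numerically. The sketch below is not the authors' code: it simulates the standard homoscedastic local level model with an illustrative signal-to-noise ratio q, and verifies that the first difference behaves as an MA(1) with lag-1 autocorrelation -1/(q+2).

```python
import numpy as np

# Local level model: y_t = mu_t + eps_t, mu_t = mu_{t-1} + eta_t.
# Reduced form: the first difference dy_t = eta_t + eps_t - eps_{t-1} is MA(1)
# with lag-1 autocorrelation rho_1 = -1/(q+2), q = var(eta)/var(eps).
rng = np.random.default_rng(1)
n, q = 200_000, 0.5                       # q is an illustrative choice
eps = rng.normal(0.0, 1.0, n)             # transitory noise
eta = rng.normal(0.0, np.sqrt(q), n)      # level noise
y = np.cumsum(eta) + eps                  # simulated local level series

dy = np.diff(y)
r1 = np.corrcoef(dy[:-1], dy[1:])[0, 1]   # sample lag-1 autocorrelation
r2 = np.corrcoef(dy[:-2], dy[2:])[0, 1]   # sample lag-2 autocorrelation (should be ~0)

rho1 = -1.0 / (q + 2.0)                            # theory: -0.4 for q = 0.5
theta = (np.sqrt(q * q + 4 * q) - q - 2.0) / 2.0   # implied MA(1) coefficient: -0.5
```

For q = 0.5 the implied MA coefficient is exactly -0.5, consistent with rho_1 = theta/(1+theta^2) = -0.4; as q goes to 0 the MA coefficient goes to -1, the near-cancellation of the unit root mentioned in the forecasting discussion.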
  2. By: Eklund, Jana (Bank of England); Karlsson, Sune (Department of Business, Economics, Statistics and Informatics)
    Abstract: The increasing availability of data and potential predictor variables poses new challenges to forecasters. The task of formulating a single forecasting model that can extract all the relevant information is becoming increasingly difficult in the face of this abundance of data. The two leading approaches to addressing this "embarrassment of riches" are philosophically distinct. One approach builds forecast models based on summaries of the predictor variables, such as principal components, and the second approach is analogous to forecast combination, where the forecasts from a multitude of possible models are averaged. Using several data sets we compare the performance of the two approaches in the guise of the diffusion index or factor models popularized by Stock and Watson and forecast combination as an application of Bayesian model averaging. We find that none of the methods is uniformly superior and that no method performs better than, or is outperformed by, a simple AR(p) process.
    Keywords: Bayesian model averaging; Diffusion indexes; GDP growth rate; Inflation rate
    JEL: C11 C51 C52 C53
    Date: 2007–03–31
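The diffusion index approach compared in this paper summarizes a large panel by its principal components and forecasts with those. A minimal one-factor sketch (dimensions, loadings, and noise level are illustrative assumptions, not the paper's data):

```python
import numpy as np

# Simulate a panel driven by one common factor, extract it by SVD, and run a
# one-step-ahead forecast regression on the estimated factor.
rng = np.random.default_rng(2)
T, N = 300, 60
f = rng.normal(size=T)                                 # common factor
lam = rng.normal(size=N)                               # loadings
X = np.outer(f, lam) + 0.5 * rng.normal(size=(T, N))   # observed panel

Xs = (X - X.mean(axis=0)) / X.std(axis=0)              # standardize each series
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
fhat = U[:, 0] * s[0]                                  # first PC (sign/scale arbitrary)

# Target series y_{t+1} = 0.8 f_t + noise; regress y_{t+1} on the estimated factor.
y = 0.8 * np.roll(f, 1) + 0.3 * rng.normal(size=T)
y[0] = 0.0                                             # drop the wrap-around value
beta = np.linalg.lstsq(fhat[:-1, None], y[1:], rcond=None)[0][0]
corr = np.corrcoef(fhat, f)[0, 1]
```

With a large cross-section the estimated factor tracks the true one closely (up to sign), which is why the summary-then-forecast strategy is feasible at all.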
  3. By: John W. Galbraith; Greg Tkacz
    Abstract: For quantities that are approximately stationary, the information content of statistical forecasts tends to decline as the forecast horizon increases, and there exists a maximum horizon beyond which forecasts cannot provide discernibly more information about the variable than is present in the unconditional mean (the content horizon). The pattern of decay of forecast content (or skill) with increasing horizon is well known for many types of meteorological forecasts; by contrast, little generally-accepted information about these patterns or content horizons is available for economic variables. In this paper we attempt to develop more information of this type by estimating content horizons for a variety of macroeconomic quantities; more generally, we characterize the pattern of decay of forecast content as we project farther into the future. We find a wide variety of results across the different macroeconomic quantities: models for some quantities provide useful content several years into the future, while for others content becomes negligible beyond one or two months or quarters.
    JEL: C53 E17
    Date: 2007
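The content horizon can be made concrete in the textbook AR(1) case, where the h-step-ahead predictive R-squared is phi**(2h). The threshold below is an assumption for illustration; the paper estimates these horizons empirically rather than from a known model.

```python
# Forecast content of a stationary AR(1) y_t = phi*y_{t-1} + e_t:
# MSE_h = sigma^2 * (1 - phi^(2h)) / (1 - phi^2) and Var(y) = sigma^2 / (1 - phi^2),
# so content(h) = 1 - MSE_h / Var(y) = phi^(2h), decaying geometrically in h.
def content(phi: float, h: int) -> float:
    return phi ** (2 * h)

def content_horizon(phi: float, threshold: float) -> int:
    # first horizon at which forecast content falls below the chosen threshold
    h = 1
    while content(phi, h) >= threshold:
        h += 1
    return h
```

More persistent series have longer content horizons (e.g. phi = 0.95 versus phi = 0.9), which matches the paper's finding that the useful horizon differs sharply across macroeconomic quantities.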
  4. By: Mishra, SK
    Abstract: Correlation matrices have many applications, particularly in marketing and financial economics - such as in risk management, option pricing, and forecasting demand for a group of products in order to realize savings by properly managing inventories. Various methods have been proposed to solve the nearest correlation matrix problem: majorization, hypersphere decomposition, semi-definite programming, and geometric programming, among others. In this paper we propose to obtain the nearest valid correlation matrix by the differential evolution method of global optimization. Several conclusions emerge from the exercise. First, the nearest correlation matrix problem may be solved satisfactorily by evolutionary algorithms such as differential evolution or the Particle Swarm Optimizer. Secondly, these methods easily accommodate the choice of the norm to minimize: the absolute, Frobenius, or Chebyshev norm may all be used. Thirdly, the 'complete the correlation matrix' problem can be solved (in a limited sense) by these methods. Fourthly, one may easily opt for weighted or un-weighted norm minimization. Fifthly, minimizing the absolute norm appears to give better results. In solving the nearest correlation matrix problem, the resulting valid correlation matrices are often near-singular and thus on the borderline of losing positive semi-definiteness. Their elements can hardly be rounded off even at the sixth or seventh decimal place without the risk of making the rounded matrix negative definite, so such matrices are difficult to handle. More robust positive definite valid correlation matrices can be obtained by constraining the determinant (the product of the eigenvalues) of the resulting matrix to take a value significantly larger than zero, though only at the cost of some compromise on the criterion of 'nearness.' The proposed method handles this very well.
    Keywords: Correlation matrix; product moment; nearest; complete; positive semi-definite; majorization; hypersphere decomposition; semi-definite programming; geometric programming; Particle Swarm; Differential Evolution; Particle Swarm Optimization; Global Optimization; risk management; option pricing; financial economics; marketing; computer program; Fortran; norm; absolute; maximum; Frobenius; Chebyshev; Euclidean.
    JEL: C63 G00 C88 C61 G19
    Date: 2007–04–14
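The abstract's approach - treat the nearest correlation matrix problem as unconstrained global optimization with a penalty for indefiniteness - can be sketched with a plain rand/1/bin differential evolution loop. This is not the author's Fortran program; the 3x3 target matrix, penalty weight, and DE parameters are illustrative choices, and the Frobenius norm stands in for the norms discussed.

```python
import numpy as np

def build_corr(x):
    # assemble a 3x3 symmetric matrix with unit diagonal from its 3 off-diagonals
    C = np.eye(3)
    C[0, 1] = C[1, 0] = x[0]
    C[0, 2] = C[2, 0] = x[1]
    C[1, 2] = C[2, 1] = x[2]
    return C

# An invalid "pseudo-correlation" target: its smallest eigenvalue is -0.8.
G = np.array([[ 1.0, 0.9, -0.9],
              [ 0.9, 1.0,  0.9],
              [-0.9, 0.9,  1.0]])

def objective(x):
    # Frobenius distance to G plus a heavy penalty when the candidate is indefinite
    C = build_corr(x)
    lam_min = np.linalg.eigvalsh(C)[0]
    return np.linalg.norm(C - G, "fro") + 1e3 * max(0.0, -lam_min)

# Differential evolution (rand/1/bin): mutation, binomial crossover, greedy selection.
rng = np.random.default_rng(0)
NP, D, F, CR, GENS = 40, 3, 0.7, 0.9, 300
pop = rng.uniform(-1.0, 1.0, (NP, D))
fit = np.array([objective(p) for p in pop])
for _ in range(GENS):
    for i in range(NP):
        others = [j for j in range(NP) if j != i]
        a, b, c = pop[rng.choice(others, 3, replace=False)]
        mutant = np.clip(a + F * (b - c), -1.0, 1.0)
        cross = rng.random(D) < CR
        cross[rng.integers(D)] = True          # ensure at least one mutated coordinate
        trial = np.where(cross, mutant, pop[i])
        f_trial = objective(trial)
        if f_trial <= fit[i]:                  # keep the trial if it is no worse
            pop[i], fit[i] = trial, f_trial

C_best = build_corr(pop[np.argmin(fit)])
```

The best candidate ends up essentially on the positive semi-definite boundary, illustrating the near-singularity the abstract warns about; the determinant constraint it proposes would push the solution strictly inside that boundary.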
  5. By: Espinosa Méndez, Christian
    Abstract: This article finds evidence of chaotic behavior in the Argentinean, Brazilian, Canadian, Chilean, American, Peruvian and Mexican stock markets, using the MERVAL, BOVESPA, S&P TSX COMPOSITE, IPSA, IGPA, S&P 500, DOW JONES INDUSTRIALS, NASDAQ, IGBVL and IPC stock indexes respectively. The results of several techniques and methods - graphic analysis, recurrence analysis, temporal space entropy, the Hurst coefficient, the Lyapunov exponent and the correlation dimension - support the hypothesis that these stock markets behave chaotically and reject the hypothesis of randomness. This conclusion validates the use of prediction techniques in those stock markets. The result of the Hurst coefficient technique is particularly notable: it averaged 0.75 across the indexes in this study, which would justify the use of ARFIMA models, among others, for predicting such series.
    Keywords: Chaos Theory; Recurrence Analysis; Temporal Space Entropy; Hurst Coefficient; Lyapunov Exponent; Correlation Dimension; BDS Test.
    JEL: G10 C14 G14 G15 C12
    Date: 2005–10–20
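The Hurst coefficient cited above is conventionally estimated by rescaled-range (R/S) analysis: compute the range of the cumulated, demeaned series over windows of increasing size, rescale by the window standard deviation, and read H off the log-log slope. A minimal numpy sketch (window sizes are an assumption; checked here only on white noise, where H should sit near 0.5 apart from a known small-sample upward bias):

```python
import numpy as np

def hurst_rs(x, sizes):
    # Rescaled-range estimate of the Hurst exponent: slope of log E[R/S] vs log n.
    logs, logrs = [], []
    for s in sizes:
        rs = []
        for i in range(len(x) // s):
            w = x[i * s:(i + 1) * s]
            z = np.cumsum(w - w.mean())   # cumulated deviations within the window
            R = z.max() - z.min()         # range of the cumulated series
            S = w.std()                   # window standard deviation
            if S > 0:
                rs.append(R / S)
        logs.append(np.log(s))
        logrs.append(np.log(np.mean(rs)))
    slope, _ = np.polyfit(logs, logrs, 1)
    return slope

rng = np.random.default_rng(3)
white = rng.normal(size=4096)
H = hurst_rs(white, [16, 32, 64, 128, 256, 512])
```

A value of H near 0.5 indicates no long memory; the study's average of 0.75 would indicate persistence, the property that motivates fitting ARFIMA models to these index series.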

This nep-ets issue is ©2007 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.