
on Econometric Time Series 
By:  Christian M. Dahl; Emma M. Iglesias (School of Economics and Management, University of Aarhus, Denmark) 
Abstract:  In this paper we analyze the limiting properties of the estimated parameters in a general class of asymmetric volatility models closely related to the traditional exponential GARCH (EGARCH) model. The new representation has three main advantages over the traditional EGARCH: (1) It allows a much more flexible representation of the conditional variance function. (2) It permits a complete characterization of the asymptotic distribution of the QML estimator based on the new class of nonlinear volatility models, something that has proven very difficult even for the traditional EGARCH. (3) It can produce asymmetric news impact curves in which, contrary to the traditional EGARCH, the resulting variances do not excessively exceed those associated with the standard GARCH model, irrespective of the sign of a moderately sized shock. Furthermore, the new class of models can generate a wide array of news impact curves, giving the researcher a richer choice set than the traditional EGARCH. A Monte Carlo experiment demonstrates the good finite-sample performance of our asymptotic theoretical results, which we compare with those obtained from a parametric and a residual-based bootstrap. Finally, we provide an empirical illustration.
Keywords:  Asymmetric volatility models; Asymmetric news impact curves; Quasi maximum likelihood estimation; Asymptotic Theory; Bootstrap 
JEL:  C12 C13 C15 C22 C51 C52 E43 
Date:  2008–07–04 
URL:  http://d.repec.org/n?u=RePEc:aah:create:200838&r=ets 
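The news impact curve comparison in point (3) can be illustrated with a small sketch. The recursions below are the textbook GARCH(1,1) and Nelson's (1991) EGARCH, not the paper's new representation, and all parameter values are illustrative assumptions rather than estimates:

```python
import numpy as np

def garch_nic(eps, omega=0.05, alpha=0.10, beta=0.85, s2bar=1.0):
    """Standard GARCH(1,1) news impact curve: sigma_t^2 as a function of the
    lagged shock eps, holding the lagged variance fixed at s2bar."""
    return omega + alpha * eps**2 + beta * s2bar

def egarch_nic(eps, omega=-0.10, alpha=0.10, gamma=-0.08, beta=0.95, s2bar=1.0):
    """Traditional EGARCH news impact curve (Nelson, 1991):
    ln sigma_t^2 = omega + alpha*(|z| - E|z|) + gamma*z + beta*ln sigma_{t-1}^2,
    with z = eps / sqrt(s2bar) and E|z| = sqrt(2/pi) for Gaussian z."""
    z = eps / np.sqrt(s2bar)
    log_s2 = (omega + alpha * (np.abs(z) - np.sqrt(2.0 / np.pi))
              + gamma * z + beta * np.log(s2bar))
    return np.exp(log_s2)

eps = np.linspace(-3.0, 3.0, 121)
nic_garch = garch_nic(eps)    # symmetric in eps
nic_egarch = egarch_nic(eps)  # with gamma < 0, steeper for negative shocks
```

With a negative leverage parameter gamma the EGARCH curve responds more strongly to bad news than to good news of the same size, which is the asymmetry the GARCH curve cannot produce.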
By:  Jean-Marie Dufour; Abderrahim Taamouti
Abstract:  The concept of causality introduced by Wiener (1956) and Granger (1969) is defined in terms of predictability one period ahead. This concept can be generalized by considering causality at a given horizon h, and causality up to any given horizon h [Dufour and Renault (1998)]. This generalization is motivated by the fact that, in the presence of an auxiliary variable vector Z, it is possible that a variable Y does not cause variable X at horizon 1, but causes it at horizon h > 1. In this case, there is an indirect causality transmitted by Z. Another related problem consists in measuring the importance of causality between two variables. Existing causality measures have been defined only for the horizon 1 and fail to capture indirect causal effects. This paper proposes a generalization of such measures for any horizon h. We propose nonparametric and parametric measures of unidirectional and instantaneous causality at any horizon h. Parametric measures are defined in the context of autoregressive processes of unknown order and expressed in terms of impulse response coefficients. On noting that causality measures typically involve complex functions of model parameters in VAR and VARMA models, we propose a simple method to evaluate these measures which is based on the simulation of a large sample from the process of interest. We also describe asymptotically valid nonparametric confidence intervals, using a bootstrap technique. Finally, the proposed measures are applied to study causality relations at different horizons between macroeconomic, monetary and financial variables in the U.S. 
These results show a strong effect of nonborrowed reserves on the federal funds rate one month ahead; the effect of real gross domestic product on the federal funds rate is economically important for the first three months; the effect of the federal funds rate on the gross domestic product deflator is economically weak one month ahead; and, finally, the federal funds rate causes real gross domestic product up to 16 months ahead.
Keywords:  Time series, Granger causality, Indirect causality, Multiple horizon causality, Causality measure, Predictability, Autoregressive model, Vector autoregression, VAR, Bootstrap, Monte Carlo, Macroeconomics, Money, Interest rates, Output, Inflation 
JEL:  C1 C12 C15 C32 C51 C53 E3 E4 E52 
Date:  2008–07 
URL:  http://d.repec.org/n?u=RePEc:cte:werepe:we083720&r=ets 
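The motivating case in the abstract, where Y does not cause X at horizon 1 but causes it at h > 1 through an auxiliary variable Z, can be reproduced with a simulation sketch. The trivariate VAR(1), its coefficients, and the one-lag direct-projection estimator below are illustrative assumptions; the paper's measures are defined far more generally:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed causal chain Y -> Z -> X: Y never enters the X equation directly.
# State order: (X, Z, Y).
A = np.array([[0.5, 0.4, 0.0],
              [0.0, 0.5, 0.4],
              [0.0, 0.0, 0.5]])
T = 100_000
W = np.zeros((T, 3))
shocks = rng.standard_normal((T, 3))
for t in range(1, T):
    W[t] = A @ W[t - 1] + shocks[t]
x = W[:, 0]

def h_step_fev(target, regressors, h):
    """Forecast error variance of target_{t+h} from a direct least-squares
    projection on time-t regressors (one lag, enough for this VAR(1) sketch)."""
    y = target[h:]
    Z = np.column_stack([np.ones(len(regressors) - h), regressors[:-h]])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return np.var(y - Z @ beta)

def causality_measure(h):
    """ln( FEV of X without past Y / FEV of X with past Y ): a simulation-based
    analogue of a unidirectional Y -> X causality measure at horizon h."""
    fev_no_y = h_step_fev(x, W[:, :2], h)  # condition on X and Z only
    fev_full = h_step_fev(x, W, h)         # condition on X, Z and Y
    return np.log(fev_no_y / fev_full)

m1, m2 = causality_measure(1), causality_measure(2)
# m1 is essentially zero; m2 is positive: Y helps predict X only at h >= 2
```

Because Y enters the X equation only through Z, omitting Y costs nothing one step ahead but raises the two-step forecast error variance, so the measure is (approximately) zero at h = 1 and strictly positive at h = 2.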
By:  Strid, Ingvar (Stockholm School of Economics); Walentin, Karl (Research Department, Central Bank of Sweden) 
Abstract:  In this paper block Kalman filters for Dynamic Stochastic General Equilibrium (DSGE) models are presented and evaluated. Our approach is based on the simple idea of writing the Kalman filter recursions in block form and appropriately sequencing the operations of the prediction step of the algorithm. It is argued that block filtering is the only viable serial algorithmic approach to significantly reduce Kalman filtering time in the context of large DSGE models. For the largest model we evaluate, the block filter reduces the computation time by roughly a factor of 2. Block filtering compares favourably with the more general method for faster Kalman filtering outlined by Koopman and Durbin (2000), and, furthermore, the two approaches are largely complementary.
Keywords:  Kalman filter; DSGE model; Bayesian estimation; Computational speed; Algorithm; Fortran; Matlab 
JEL:  C10 C60 
Date:  2008–06–01 
URL:  http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0224&r=ets 
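The core idea of the block filter, writing the covariance prediction step in block form so the zero and diagonal blocks of the DSGE transition matrix are never multiplied out densely, can be sketched as follows. The particular partition (a dense endogenous block and a diagonal exogenous AR(1) block) is an assumed illustration, not the paper's exact sequencing:

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2 = 6, 4  # assumed sizes: endogenous states, exogenous AR(1) shocks

A = 0.3 * rng.standard_normal((n1, n1))
B = 0.3 * rng.standard_normal((n1, n2))
c = rng.uniform(0.2, 0.9, n2)            # AR(1) coefficients: diagonal block C
F = np.block([[A, B],
              [np.zeros((n2, n1)), np.diag(c)]])

M = rng.standard_normal((n1 + n2, n1 + n2))
P = M @ M.T                              # current state covariance (p.s.d.)
P11, P12, P22 = P[:n1, :n1], P[:n1, n1:], P[n1:, n1:]

# Prediction step F P F' in block form (noise covariance Q omitted): the
# lower-left zero block and the diagonality of C are exploited, so no dense
# (n1+n2) x (n1+n2) products are ever formed.
mid = A @ P12 + B @ P22                  # intermediate, reused twice
S11 = (A @ P11 + B @ P12.T) @ A.T + mid @ B.T
S12 = mid * c                            # right-multiplication by diagonal C
S22 = P22 * np.outer(c, c)               # C P22 C' for diagonal C
S_block = np.block([[S11, S12],
                    [S12.T, S22]])

S_full = F @ P @ F.T                     # dense reference computation
assert np.allclose(S_block, S_full)
```

The blockwise version replaces one large dense triple product with a handful of smaller products plus elementwise scalings for the diagonal block, which is where the serial speed-up for large models comes from.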
By:  Massimiliano Serati (Cattaneo University (LIUC)); Gianni Amisano (Brescia University) 
Abstract:  One of the most problematic aspects of the work of policy makers and practitioners is having efficient forecasting tools that combine two seemingly incompatible features: ease of use and completeness of the information set underlying the forecasts. The econometric literature provides different answers to these needs: Dynamic Factor Models (DFMs) optimally exploit the information coming from large datasets, while composite leading indexes represent an immediate and flexible tool for anticipating the future evolution of a phenomenon. Curiously, the recent DFM literature has either ignored the construction of leading indexes or has made unsatisfactory choices as regards the criteria for aggregating the index components and identifying the factors that feed the index. This paper fills the gap and proposes a multistep procedure for building composite leading indexes within a DFM framework. Once the target economic variable has been selected and a DFM estimated on a large target-oriented dataset, we identify the common factor shocks through sign restrictions on the impact multipliers and simulate the structural form of the model. The Forecast Error Variance Decompositions obtained over a k-step-ahead simulation horizon define k sets of weights for aggregating the factors (differently depending on the leading horizon) in order to obtain composite leading indexes. This procedure is used in a very preliminary empirical exercise aimed at forecasting nominal crude oil prices. The results seem encouraging and support the validity of the proposal: we generate a wide range of horizon-specific leading indexes with appreciable forecasting performance.
Date:  2008–03 
URL:  http://d.repec.org/n?u=RePEc:liu:liucec:212&r=ets 
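A stylized sketch of the aggregation step: horizon-specific weights from a forecast error variance decomposition applied to estimated factors. The impulse responses and factor values below are toy numbers, orthogonal unit-variance shocks are assumed, and the paper's sign-restriction identification is not reproduced:

```python
import numpy as np

# Assumed impulse responses of the target variable to r = 3 orthogonal
# factor shocks at lags 0..3 (toy numbers, not estimates from the paper).
Psi = np.array([[0.80, 0.10, 0.30],
                [0.50, 0.30, 0.20],
                [0.30, 0.40, 0.10],
                [0.10, 0.30, 0.05]])

def fevd_weights(k):
    """Share of the k-step-ahead forecast error variance of the target
    attributable to each factor shock (unit shock variances assumed)."""
    contrib = (Psi[:k] ** 2).sum(axis=0)
    return contrib / contrib.sum()

# Horizon-specific composite leading indexes: the same estimated factors,
# aggregated with weights that depend on the leading horizon k.
factors_t = np.array([0.2, -1.1, 0.7])   # hypothetical factor estimates at t
index_1 = fevd_weights(1) @ factors_t
index_4 = fevd_weights(4) @ factors_t
```

Because the variance shares shift as the horizon lengthens, each leading horizon k yields its own weight vector and hence its own composite index, which is the mechanism the abstract describes.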