
on Econometric Time Series 
By:  Hendry, David F; Hubrich, Kirstin 
Abstract:  We explore whether forecasting an aggregate variable using information on its disaggregate components can improve the prediction mean squared error over first forecasting the disaggregates and then aggregating those forecasts, or, alternatively, over using only lagged aggregate information in forecasting the aggregate. We show theoretically that the first method of forecasting the aggregate should outperform the alternative methods in population. We investigate whether this theoretical prediction can explain our empirical findings and analyse why forecasting the aggregate using information on its disaggregate components improves the accuracy of euro area and US inflation forecasts in some situations, but not in others. 
Keywords:  disaggregate information; factor models; forecast model selection; predictability; VAR 
JEL:  C51 C53 E31 
Date:  2006–01 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:5485&r=ets 
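The population ranking described in the abstract can be illustrated in a toy setting. The sketch below is my own construction, not the authors' setup: the aggregate is the sum of two AR(1) components, and an OLS forecast of the aggregate from the lagged disaggregates is compared with the sum of component-wise AR(1) forecasts. Because the second forecast lies in the span of the first regression, least-squares optimality guarantees the first method's in-sample MSE is no larger.

```python
import numpy as np

# Illustrative simulation (assumed DGP, not the authors'): two independent
# AR(1) components whose sum is the aggregate.
rng = np.random.default_rng(0)
T, a1, a2 = 5000, 0.8, 0.3
e = rng.standard_normal((T, 2))
x = np.zeros((T, 2))
for t in range(1, T):
    x[t, 0] = a1 * x[t - 1, 0] + e[t, 0]
    x[t, 1] = a2 * x[t - 1, 1] + e[t, 1]
agg = x.sum(axis=1)

# (a) forecast the aggregate directly from lagged disaggregate information
X = x[:-1]                                   # regressors: lagged components
beta, *_ = np.linalg.lstsq(X, agg[1:], rcond=None)
fa = X @ beta

# (b) forecast each component by its own AR(1) regression, then aggregate
fb = np.zeros(T - 1)
for j in range(2):
    phi = np.linalg.lstsq(x[:-1, j:j + 1], x[1:, j], rcond=None)[0]
    fb += phi[0] * x[:-1, j]

mse_a = np.mean((agg[1:] - fa) ** 2)
mse_b = np.mean((agg[1:] - fb) ** 2)
print(mse_a <= mse_b + 1e-9)
```

In this stylized case the two forecasts converge to the same population predictor, so the gap is small; the paper's point is precisely about when and why such gaps open up in practice.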
By:  Markku Lanne; Pentti Saikkonen 
Abstract:  In this paper we propose a new GARCH-in-Mean (GARCH-M) model allowing for conditional skewness. The model is based on the so-called z distribution capable of modeling moderate skewness and kurtosis typically encountered in stock return series. The need to allow for skewness can also be readily tested. Our empirical results indicate the presence of conditional skewness in the post-war U.S. stock returns. Small positive news is also found to have a smaller impact on conditional variance than no news at all. Moreover, the symmetric GARCH-M model not allowing for conditional skewness is found to systematically overpredict conditional variance and average excess returns. 
Keywords:  Conditional skewness, GARCH-in-Mean, Risk-return tradeoff 
JEL:  C16 C22 G12 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:eui:euiwps:eco2005/14&r=ets 
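The mechanics of a GARCH-in-Mean recursion can be sketched compactly. The code below is a simplified Gaussian GARCH(1,1)-M filter, a stand-in for the paper's z-distribution model (the parameter values and Gaussian innovations are assumptions for illustration): the conditional mean is mu + lam*h_t and the variance follows h_t = omega + alpha*eps_{t-1}^2 + beta*h_{t-1}.

```python
import numpy as np

def garch_m_filter(r, mu, lam, omega, alpha, beta):
    """Conditional variance path of a Gaussian GARCH(1,1)-in-Mean model."""
    h = np.empty_like(r)
    h[0] = omega / (1.0 - alpha - beta)        # start at unconditional variance
    for t in range(1, len(r)):
        eps = r[t - 1] - mu - lam * h[t - 1]   # lagged mean-equation residual
        h[t] = omega + alpha * eps ** 2 + beta * h[t - 1]
    return h

# simulate from the same model, keeping the true variance path for comparison
rng = np.random.default_rng(1)
mu, lam, omega, alpha, beta = 0.0, 0.05, 0.1, 0.1, 0.8
T = 2000
r = np.empty(T)
hs = np.empty(T)
h = omega / (1 - alpha - beta)
for t in range(T):
    hs[t] = h
    r[t] = mu + lam * h + np.sqrt(h) * rng.standard_normal()
    h = omega + alpha * (r[t] - mu - lam * hs[t]) ** 2 + beta * hs[t]

hfilt = garch_m_filter(r, mu, lam, omega, alpha, beta)
print(np.allclose(hfilt, hs))
```

With the true parameters and initial condition, the filtered variances reproduce the simulated path exactly; in estimation the same recursion is run inside the likelihood.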
By:  Stefanie A. Haller 
Abstract:  We model the impact of different modes of multinational entry on the choices of domestic firms. Focusing on the competitive effects of foreign presence in the host country, we demonstrate that greenfield investment will increase competition only if it is not countered by anticompetitive reactions on the part of the domestic firms. Considering also cross-border mergers and acquisitions, the model thus provides two alternative explanations for the increase in concentration ratios in industries with mostly horizontal foreign direct investment. Moreover, foreign presence is shown to raise total investment in the local industry at the cost of crowding out domestic R&D. 
Keywords:  greenfield investment, cross-border mergers and acquisitions, host-country effects, market structure, cost-reducing R&D investment 
JEL:  F23 L11 O31 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:eui:euiwps:eco2005/16&r=ets 
By:  Markku Lanne; Helmut Luetkepohl 
Abstract:  In structural vector autoregressive (SVAR) models identifying restrictions for shocks and impulse responses are usually derived from economic theory or institutional constraints. Sometimes the restrictions are insufficient for identifying all shocks and impulse responses. In this paper it is pointed out that specific distributional assumptions can also help in identifying the structural shocks. In particular, a mixture of normal distributions is considered as a plausible model that can be used in this context. Our model setup makes it possible to test restrictions which are just-identifying in a standard SVAR framework. In particular, we can test for the number of transitory and permanent shocks in a cointegrated SVAR model. The results are illustrated using a data set from King, Plosser, Stock and Watson (1991) and a system of US and European interest rates. 
Keywords:  Mixture normal distribution, cointegration, vector autoregressive process, vector error correction model, impulse responses 
JEL:  C32 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:eui:euiwps:eco2005/25&r=ets 
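The identification idea rests on the structural shocks carrying distributional information beyond their covariance. A quick way to see the extra information in a mixture of normals: a scale mixture of two normals is leptokurtic, so it is distinguishable from a single Gaussian. The mixture weight and regime variances below are assumed values for illustration, not from the paper.

```python
import numpy as np

# A scale mixture of two normals: variance-3 shocks with probability 0.2,
# variance-1 shocks otherwise. Gaussian kurtosis is 3; the mixture's is higher,
# which is the kind of non-Gaussian structure the identification exploits.
rng = np.random.default_rng(8)
n = 200_000
regime = rng.random(n) < 0.2                  # assumed mixture weight
e = np.where(regime, rng.normal(0, 3.0, n), rng.normal(0, 1.0, n))
kurtosis = np.mean(e ** 4) / np.mean(e ** 2) ** 2
print(kurtosis > 3.5)
```

(The theoretical value here is 51/2.6² ≈ 7.5, well above the Gaussian benchmark of 3.)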
By:  Timothy Halliday (Department of Economics, University of Hawaii at Manoa; John A. Burns School of Medicine, University of Hawaii at Manoa) 
Abstract:  We consider the identification of state dependence in a nonstationary process of binary outcomes within the context of the dynamic logit model with time-variant transition probabilities and an arbitrary distribution for the unobserved heterogeneity. We derive a simple identification result that allows us to calculate a test for state dependence in this model. We also consider alternative tests for state dependence that will have desirable properties only in stationary processes and derive their asymptotic properties when the true underlying process is nonstationary. Finally, we provide Monte Carlo evidence that shows a range of nonstationarity in which the effects of misspecifying the binary process as stationary are not too large. 
Keywords:  Dynamic Panel Data Models, State Dependence, Non-Stationary Processes 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:hai:wpaper:200601&r=ets 
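A toy simulation shows the raw pattern such a test must disentangle (this is my own illustration, not the paper's identification argument; the parameter values are assumptions): a dynamic logit with individual effects produces a persistence gap in raw transition frequencies, part of which reflects heterogeneity rather than true state dependence.

```python
import numpy as np

# Simulate y_it ~ Bernoulli(logistic(gamma * y_{i,t-1} + c_i)) with
# individual effects c_i, then compare the raw transition frequencies
# P(y_t = 1 | y_{t-1} = 1) and P(y_t = 1 | y_{t-1} = 0).
rng = np.random.default_rng(2)
N, T, gamma = 2000, 10, 1.0
c = rng.normal(0.0, 0.5, size=N)              # unobserved heterogeneity
y = np.zeros((N, T), dtype=int)
y[:, 0] = rng.random(N) < 0.5
for t in range(1, T):
    p = 1.0 / (1.0 + np.exp(-(gamma * y[:, t - 1] + c)))
    y[:, t] = rng.random(N) < p

prev, curr = y[:, :-1].ravel(), y[:, 1:].ravel()
p11 = curr[prev == 1].mean()
p01 = curr[prev == 0].mean()
# with gamma > 0 the gap is clearly positive, but in general part of it
# reflects heterogeneity in c_i -- which is why a formal test is needed
print(p11 > p01)
```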
By:  Kazuhiko Hayakawa 
Abstract:  This paper complements Alvarez and Arellano (2003) by showing the asymptotic properties of the system GMM estimator for AR(1) panel data models when both N and T tend to infinity. We show that the system GMM estimator with the instruments which Blundell and Bond (1998) used will be inconsistent when both N and T are large. We also show that the system GMM estimator with all available instruments, including redundant ones, will be consistent if σ_η²/σ_v² = 1 − α holds. 
Date:  2006–01 
URL:  http://d.repec.org/n?u=RePEc:hst:hstdps:d05129&r=ets 
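For readers unfamiliar with the estimator under study, here is a stylized one-step system GMM for a panel AR(1), y_it = a·y_{i,t-1} + η_i + v_it. It uses the simplest Blundell–Bond style moment sets (one instrument per equation, identity weighting, stationary initial conditions); this is an illustrative sketch of the estimator, not the paper's asymptotic analysis.

```python
import numpy as np

# Simulate a stationary AR(1) panel with fixed effects (sigma_eta = sigma_v = 1).
rng = np.random.default_rng(3)
N, T, a_true = 5000, 8, 0.8
eta = rng.standard_normal(N)
y = np.empty((N, T))
# mean/covariance-stationary start, so the levels moment condition is valid
y[:, 0] = eta / (1 - a_true) + rng.standard_normal(N) / np.sqrt(1 - a_true ** 2)
for t in range(1, T):
    y[:, t] = a_true * y[:, t - 1] + eta + rng.standard_normal(N)

dy = np.diff(y, axis=1)                       # dy[:, k] = y_{k+1} - y_k
# moment 1 (differenced equation): E[ y_{t-2} * (dy_t - a * dy_{t-1}) ] = 0
z1 = y[:, :-2].ravel()
b1, x1 = dy[:, 1:].ravel(), dy[:, :-1].ravel()
# moment 2 (levels equation): E[ dy_{t-1} * (y_t - a * y_{t-1}) ] = 0
z2 = dy[:, :-1].ravel()
b2, x2 = y[:, 2:].ravel(), y[:, 1:-1].ravel()

# one-step GMM with identity weighting; the moments are linear in a,
# g(a) = gb - ga * a, so minimizing ||g||^2 gives a closed form
gb = np.array([np.mean(z1 * b1), np.mean(z2 * b2)])
ga = np.array([np.mean(z1 * x1), np.mean(z2 * x2)])
a_hat = (ga @ gb) / (ga @ ga)
print(a_hat)
```

With N large and T small, this recovers a_true closely; the paper's question is what happens to such estimators when T grows with N and the instrument set expands.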
By:  Kazuhiko Hayakawa 
Abstract:  This paper addresses the many instruments problem, i.e. (1) the tradeoff between the bias and the efficiency of the GMM estimator, and (2) inaccuracy of inference, in dynamic panel data models where unobservable heterogeneity may be large. We find that if we use all the instruments in levels, although the GMM estimator is robust to large heterogeneity, inference is inaccurate. In contrast, if we use the minimum number of instruments in levels in the sense that we use only one instrument for each period, the performance of the GMM estimator is heavily affected by the degree of heterogeneity, that is, both the asymptotic bias and the variance are proportional to the magnitude of heterogeneity. To address this problem, we propose a new form of instruments that are obtained from the so-called backward orthogonal deviation transformation. The asymptotic analysis shows that the GMM estimator with the minimum number of new instruments has smaller asymptotic bias than typically used estimators such as the GMM estimator with all instruments in levels, the LIML estimators and the within-groups estimators, while the asymptotic variance of the proposed estimator is equal to the lower bound. Thus both the asymptotic bias and the variance of the proposed estimator become small simultaneously. Simulation results show that our new GMM estimator outperforms the conventional GMM estimator with all instruments in levels in terms of the RMSE and the accuracy of inference. An empirical application with Spanish firm data is also provided. 
Keywords:  Dynamic panel data, many instruments, generalized method of moments estimator, unobservable large heterogeneity 
JEL:  C23 
Date:  2006–01 
URL:  http://d.repec.org/n?u=RePEc:hst:hstdps:d05130&r=ets 
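The transformation at the heart of the proposal can be sketched. The formula below is one common reading of orthogonal deviations applied backward in time, x*_it = sqrt((t-1)/t)·(x_it − mean(x_i1, …, x_{i,t-1})); the paper's exact construction may differ, so treat this as an assumed illustration of the general device: it wipes out the individual effect while keeping i.i.d. errors homoskedastic and serially uncorrelated.

```python
import numpy as np

def backward_orthogonal_deviations(x):
    """Subtract the mean of all past observations, scaled to preserve variance."""
    N, T = x.shape
    out = np.empty((N, T - 1))
    for t in range(1, T):                      # t past observations available
        c = np.sqrt(t / (t + 1.0))             # scaling keeps error variance at 1
        out[:, t - 1] = c * (x[:, t] - x[:, :t].mean(axis=1))
    return out

rng = np.random.default_rng(9)
N, T = 100_000, 5
eta = rng.standard_normal((N, 1))              # individual effects
v = rng.standard_normal((N, T))                # idiosyncratic errors
xs = backward_orthogonal_deviations(eta + v)
# transforming the (time-constant) effect alone gives exactly zero
print(np.allclose(backward_orthogonal_deviations(np.tile(eta, (1, T))), 0.0))
```

Because the transformed value at t uses only past observations, suitably lagged variables remain valid instruments, which is what makes the device attractive in the GMM context.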
By:  Simone Manganelli (European Central Bank, Kaiserstrasse 29, Postfach 16 03 19, 60066 Frankfurt am Main, Germany.) 
Abstract:  This paper argues that forecast estimators should minimise the loss function in a statistical, rather than deterministic, way. We introduce two new elements into the classical econometric analysis: a subjective guess on the variable to be forecasted and a probability reflecting the confidence associated with it. We then propose a new forecast estimator based on a test of whether the first derivatives of the loss function evaluated at the subjective guess are statistically different from zero. We show that the classical estimator is a special case of this new estimator, and that in general the two estimators are asymptotically equivalent. We illustrate the implications of this new theory with a simple simulation, an application to GDP forecasting and an example of mean-variance portfolio selection. 
Keywords:  Decision under uncertainty; estimation; overfitting; asset allocation 
JEL:  C13 C53 G11 
Date:  2006–01 
URL:  http://d.repec.org/n?u=RePEc:ecb:ecbwps:20060584&r=ets 
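The estimator's logic can be sketched in the simplest case, a mean under squared loss (my own stylized rendering of the idea, not the paper's general procedure): keep the subjective guess unless the sample gradient of the loss evaluated at the guess is statistically different from zero.

```python
import numpy as np

def guess_based_estimator(x, guess, z_crit=1.96):
    """Retain the guess unless the loss gradient at the guess rejects it."""
    n = x.size
    grad = guess - x                 # per-observation derivative of (guess - x_i)^2 / 2
    tstat = np.sqrt(n) * grad.mean() / grad.std(ddof=1)
    # not rejected: keep the subjective guess; rejected: classical estimator
    return guess if abs(tstat) < z_crit else x.mean()

rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, size=200)
close_guess = guess_based_estimator(x, x.mean() + 0.01)  # plausible guess survives
far_guess = guess_based_estimator(x, 5.0)                # implausible guess is rejected
print(close_guess != x.mean(), far_guess == x.mean())
```

The confidence level (here the conventional 1.96) plays the role of the paper's subjective probability: the higher it is, the more the data must speak against the guess before the classical estimator takes over.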
By:  Ralf Brüggemann; Wolfgang Härdle; Julius Mungo; Carsten Trenkler 
Abstract:  The implied volatility of a European option as a function of strike price and time to maturity forms a volatility surface. Traders price according to the dynamics of this high dimensional surface. Recent developments that employ semiparametric models approximate the implied volatility surface (IVS) in a finite dimensional function space, allowing for a low dimensional factor representation of these dynamics. This paper investigates the stochastic properties of the factor loading time series using the vector autoregressive (VAR) framework and relates movements in these factors to movements in some macroeconomic variables of the Euro economy. 
Keywords:  Implied volatility surface, dynamic semiparametric factor model, unit root tests, vector autoregression, impulse responses 
JEL:  C14 C32 
Date:  2006–02 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2006011&r=ets 
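The VAR step of such an analysis is straightforward to sketch. Below, a VAR(1) is fitted by equationwise OLS to a small simulated panel standing in for the factor-loading series (the coefficient matrix and noise scale are assumed values), and stability is checked via the eigenvalues of the estimated coefficient matrix.

```python
import numpy as np

rng = np.random.default_rng(5)
A = np.array([[0.5, 0.1],
              [0.0, 0.7]])                    # assumed stable VAR(1) matrix
T = 1000
z = np.zeros((T, 2))
for t in range(1, T):
    z[t] = A @ z[t - 1] + 0.1 * rng.standard_normal(2)

# equationwise OLS: regress z_t on z_{t-1}
Y, X = z[1:], z[:-1]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T   # rows are equations
stable = np.max(np.abs(np.linalg.eigvals(A_hat))) < 1
print(stable)
```

In the paper's application the same machinery supports unit root pretesting, impulse responses, and the link to macroeconomic variables.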
By:  James MacKinnon (Queen's University) 
Abstract:  The fast double bootstrap, or FDB, is a procedure for calculating bootstrap P values that is much more computationally efficient than the double bootstrap itself. In many cases, it can provide more accurate results than ordinary bootstrap tests. For the fast double bootstrap to be valid, the test statistic must be asymptotically independent of the random parts of the bootstrap data generating process. This paper presents simulation evidence on the performance of FDB tests in three cases of interest to econometricians. One of the cases involves both symmetric and equal-tail bootstrap tests, which, interestingly, can have quite different power properties. Another highlights the importance of imposing the null hypothesis on the bootstrap DGP. 
Keywords:  bootstrap test, serial correlation, ARCH errors, weak instruments, double bootstrap 
JEL:  C12 C15 
Date:  2006–02 
URL:  http://d.repec.org/n?u=RePEc:qed:wpaper:1023&r=ets 
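The FDB recipe is generic: draw one second-level bootstrap statistic per first-level bootstrap sample, use the second-level statistics to correct the quantile against which the first-level statistics are compared. The sketch below applies it to a simple t-test of a zero mean (the test problem and resampling scheme are my choices for illustration); note that the null is imposed on the bootstrap DGP by recentering, echoing the paper's point.

```python
import numpy as np

def tstat(x):
    return abs(np.sqrt(x.size) * x.mean() / x.std(ddof=1))

def fdb_pvalue(x, B=999, seed=0):
    """Fast double bootstrap p-value for H0: mean = 0."""
    rng = np.random.default_rng(seed)
    tau = tstat(x)
    xc = x - x.mean()                  # impose the null on the bootstrap DGP
    t1 = np.empty(B)                   # first-level statistics
    t2 = np.empty(B)                   # one second-level statistic per sample
    for j in range(B):
        xs = rng.choice(xc, size=x.size, replace=True)
        t1[j] = tstat(xs)
        xss = rng.choice(xs - xs.mean(), size=x.size, replace=True)
        t2[j] = tstat(xss)
    p1 = np.mean(t1 > tau)             # ordinary bootstrap p-value
    q = np.quantile(t2, 1.0 - p1)      # corrected critical value
    return np.mean(t1 > q)             # FDB p-value

rng = np.random.default_rng(6)
x = rng.exponential(1.0, size=50) - 1.0   # skewed data with true mean 0
p = fdb_pvalue(x)
print(0.0 <= p <= 1.0)
```

The cost is roughly 2B statistic evaluations instead of B(1+B) for the full double bootstrap, which is the efficiency gain the abstract refers to.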
By:  Gianni Amisano; Raffaella Giacomini 
Abstract:  We propose a test for comparing the out-of-sample accuracy of competing density forecasts of a variable. The test is valid under general conditions: the data can be heterogeneous and the forecasts can be based on (nested or non-nested) parametric models or produced by semiparametric, nonparametric or Bayesian estimation techniques. The evaluation is based on scoring rules, which are loss functions defined over the density forecast and the realizations of the variable. We restrict attention to the logarithmic scoring rule and propose an out-of-sample "weighted likelihood ratio" test that compares weighted averages of the scores for the competing forecasts. The user-defined weights are a way to focus attention on different regions of the distribution of the variable. For a uniform weight function, the test can be interpreted as an extension of Vuong (1989)'s likelihood ratio test to time series data and to an out-of-sample testing framework. We apply the tests to evaluate density forecasts of US inflation produced by linear and Markov Switching Phillips curve models estimated by either maximum likelihood or Bayesian methods. We conclude that a Markov Switching Phillips curve estimated by maximum likelihood produces the best density forecasts of inflation. 
URL:  http://d.repec.org/n?u=RePEc:ubs:wpaper:ubs0504&r=ets 
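The unweighted version of the comparison is easy to sketch: compute the out-of-sample log scores of two density forecasts and form a t-statistic on the average score difference. In the sketch below (my own illustration: Gaussian forecast densities, a plain rather than HAC standard error), forecast 1 is the true density and forecast 2 has a badly wrong variance, so the test favours forecast 1.

```python
import numpy as np

def log_score_test(y, mean1, sd1, mean2, sd2):
    """t-statistic for H0: equal expected log scores of forecasts 1 and 2."""
    def logpdf(y, m, s):
        return -0.5 * np.log(2 * np.pi * s ** 2) - (y - m) ** 2 / (2 * s ** 2)
    d = logpdf(y, mean1, sd1) - logpdf(y, mean2, sd2)   # score differences
    return np.sqrt(d.size) * d.mean() / d.std(ddof=1)

rng = np.random.default_rng(7)
y = rng.normal(0.0, 1.0, size=500)
t = log_score_test(y, np.zeros(500), np.ones(500),
                   np.zeros(500), 3.0 * np.ones(500))
print(t > 2.0)
```

The paper's weighted version multiplies the score differences d by a user-chosen weight function of y before averaging, focusing the comparison on, say, the tails of the distribution.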