
New Economics Papers on Econometrics 
By:  Silvennoinen, Annastiina (School of Finance and Economics, University of Technology, Sydney); Teräsvirta, Timo (CREATES, University of Aarhus and Department of Economic Statistics, Stockholm School of Economics) 
Abstract:  This article reviews multivariate GARCH models. The most common models are presented and their properties considered, including semiparametric and nonparametric GARCH models. Existing specification and misspecification tests are discussed. Finally, an empirical example fits several multivariate GARCH models to the same data set and compares the results. 
Keywords:  autoregressive conditional heteroskedasticity; modelling volatility; nonlinear GARCH; nonparametric GARCH; semiparametric GARCH 
JEL:  C32 C52 
Date:  2007–06–15 
URL:  http://d.repec.org/n?u=RePEc:hhs:hastef:0669&r=ecm 
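As a rough illustration of the model class this review covers, the sketch below filters conditional covariance matrices under a constant-conditional-correlation (CCC) GARCH(1,1) model, one of the common multivariate specifications. All parameter values are illustrative and not taken from the article.

```python
import numpy as np

def ccc_garch_filter(returns, omega, alpha, beta, R):
    """Filter conditional covariance matrices under a CCC-GARCH(1,1) model.

    Each series follows a univariate GARCH(1,1) variance recursion
        h[t] = omega + alpha * r[t-1]**2 + beta * h[t-1],
    and the conditional covariance is H[t] = D[t] @ R @ D[t],
    with D[t] = diag(sqrt(h[t])) and R a constant correlation matrix.
    """
    T, n = returns.shape
    h = np.empty((T, n))
    h[0] = omega / (1.0 - alpha - beta)          # start at the unconditional variance
    for t in range(1, T):
        h[t] = omega + alpha * returns[t - 1] ** 2 + beta * h[t - 1]
    H = np.array([np.diag(np.sqrt(ht)) @ R @ np.diag(np.sqrt(ht)) for ht in h])
    return H

# Illustrative run on simulated returns (parameters are made up)
rng = np.random.default_rng(0)
r = rng.standard_normal((500, 2)) * 0.01
omega = np.array([1e-6, 2e-6])
alpha = np.array([0.05, 0.08])
beta = np.array([0.90, 0.88])
R = np.array([[1.0, 0.3], [0.3, 1.0]])
H = ccc_garch_filter(r, omega, alpha, beta, R)
```

Richer specifications surveyed in such reviews (BEKK, DCC, semiparametric variants) replace the constant R with a time-varying or nonparametric correlation component.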
By:  Xibin Zhang; Robert D Brooks; Maxwell L King 
Abstract:  Multivariate kernel regression is an important tool for investigating the relationship between a response and a set of explanatory variables. It is generally accepted that the performance of a kernel regression estimator depends largely on the choice of bandwidth rather than on the kernel function. This nonparametric technique has been employed in a number of empirical studies, including the state-price density estimation pioneered by Aït-Sahalia and Lo (1998). However, the widespread usefulness of multivariate kernel regression has been limited by the difficulty of computing a data-driven bandwidth. In this paper, we present a Bayesian approach to bandwidth selection for multivariate kernel regression. A Markov chain Monte Carlo algorithm is presented to sample the bandwidth vector and other parameters in a multivariate kernel regression model. A Monte Carlo study shows that the proposed bandwidth selector is more accurate than the rule-of-thumb bandwidth selector known as the normal reference rule of Scott (1992) and Bowman and Azzalini (1997). The proposed bandwidth selection algorithm is applied to a multivariate kernel regression model that is often used to estimate the state-price density of Arrow-Debreu securities. When applying the proposed method to S&P 500 index options and DAX index options, we find that for short-maturity options the proposed Bayesian bandwidth selector produces a markedly different state-price density from the one produced by the subjective bandwidth selector discussed in Aït-Sahalia and Lo (1998). 
Keywords:  Black-Scholes formula, likelihood, Markov chain Monte Carlo, posterior density 
JEL:  C11 C14 G12 
Date:  2007–08 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:200711&r=ecm 
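The paper's MCMC bandwidth sampler is not reproduced here, but the following sketch shows the two ingredients it builds on: the multivariate Nadaraya-Watson kernel regression estimator and the normal-reference-rule bandwidth it is benchmarked against. The data-generating function and all values are illustrative.

```python
import numpy as np

def scott_bandwidth(X):
    """Normal reference rule (Scott, 1992): h_j = sigma_j * n**(-1/(d+4))."""
    n, d = X.shape
    return X.std(axis=0, ddof=1) * n ** (-1.0 / (d + 4))

def nw_regression(X, y, x0, h):
    """Nadaraya-Watson estimator at point x0 with a product Gaussian kernel."""
    u = (X - x0) / h                                 # (n, d) scaled distances
    w = np.exp(-0.5 * np.sum(u ** 2, axis=1))        # product-kernel weights
    return np.sum(w * y) / np.sum(w)

# Illustrative data: a smooth bivariate regression function plus noise
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(400, 2))
y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(400)
h = scott_bandwidth(X)
fit = nw_regression(X, y, np.array([0.0, 0.0]), h)   # true value at origin is 0
```

The Bayesian approach in the paper treats the vector h as a parameter and samples it from its posterior rather than plugging in the rule-of-thumb value.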
By:  Todd E. Clark; Michael W. McCracken 
Abstract:  This paper examines the asymptotic and finite-sample properties of tests of equal forecast accuracy applied to direct, multi-step predictions from both nonnested and nested linear regression models. In contrast to earlier work, including West (1996), Clark and McCracken (2001, 2005), and McCracken (2006), our asymptotics take account of the real-time, revised nature of the data. Monte Carlo simulations indicate that our asymptotic approximations yield reasonable size and power properties in most circumstances. The paper concludes with an examination of the real-time predictive content of various measures of economic activity for inflation. 
Keywords:  Forecasting 
Date:  2007 
URL:  http://d.repec.org/n?u=RePEc:fip:fedkrw:rwp0706&r=ecm 
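For orientation, the sketch below computes the standard Diebold-Mariano statistic for equal squared-error forecast accuracy between two nonnested models. This is the textbook one-step version without a HAC correction, not the authors' real-time, multi-step, or nested-model variants.

```python
import numpy as np

def dm_statistic(e1, e2):
    """Diebold-Mariano statistic for equal squared-error forecast accuracy.

    The loss differential is d[t] = e1[t]**2 - e2[t]**2; under the null
    E[d] = 0 and the studentized mean is asymptotically standard normal
    (one-step forecasts, so no HAC variance estimator is used here).
    """
    d = e1 ** 2 - e2 ** 2
    T = d.size
    return d.mean() / np.sqrt(d.var(ddof=1) / T)

# Illustrative: two forecasters with equally accurate (simulated) errors
rng = np.random.default_rng(2)
e1 = rng.standard_normal(1000)
e2 = rng.standard_normal(1000)
stat = dm_statistic(e1, e2)   # should be small in absolute value under the null
```

For nested models and multi-step horizons the statistic's null distribution is nonstandard, which is precisely the complication the paper's asymptotics address.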
By:  Jeremy Large 
Abstract:  For financial assets whose best quotes almost always change by jumping by the market's price tick size (one cent, five cents, etc.), this paper proposes an estimator of quadratic variation which controls for microstructure effects. It measures the prevalence of alternations, where quotes jump back to their just-previous price. It defines a simple property called "uncorrelated alternation", which under stated conditions implies that the estimator is consistent in an asymptotic limit theory where jumps become very frequent and small. A feasible limit theory is developed and performs well in simulations. 
Keywords:  Realized Volatility, Realized Variance, Quadratic Variation, Market Microstructure, High-Frequency Data, Pure Jump Process 
JEL:  C10 C22 C80 
Date:  2007 
URL:  http://d.repec.org/n?u=RePEc:oxf:wpaper:340&r=ecm 
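The core summary statistic the abstract describes, the prevalence of alternations among one-tick quote jumps, can be sketched as below. This only counts alternations versus continuations; the paper's actual estimator and its consistency construction are not reproduced.

```python
import numpy as np

def alternation_counts(prices, tick):
    """Count alternations and continuations among one-tick price jumps.

    A jump is an 'alternation' if it reverses the previous jump (the quote
    returns to its just-previous level) and a 'continuation' if it moves
    in the same direction as the previous jump.
    """
    steps = np.diff(prices)
    jumps = steps[steps != 0]                  # ignore intervals with no quote change
    assert np.allclose(np.abs(jumps), tick), "quotes must move by one tick"
    same = np.sign(jumps[1:]) == np.sign(jumps[:-1])
    continuations = int(same.sum())
    alternations = int((~same).sum())
    return alternations, continuations

# A path that always jumps back to its just-previous price: all alternations
p = np.array([10.00, 10.05, 10.00, 10.05, 10.00])
a, c = alternation_counts(p, tick=0.05)        # a = 3, c = 0
```

Intuitively, a high share of alternations signals bid-ask bounce rather than movement in the efficient price, which is why the estimator scales down realized variance accordingly.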
By:  JinChuan Duan (Joseph L. Rotman School of Management, University of Toronto); András Fülöp (ESSEC Paris and CREST.) 
Abstract:  The stock price is assumed to follow a jump-diffusion process which may exhibit time-varying volatilities. An econometric technique is then developed for this model and applied to high-frequency time series of stock prices that are subject to microstructure noise. Our method is based on first devising a localized particle filter and then employing fixed-lag smoothing in the Monte Carlo EM algorithm to perform maximum likelihood estimation and inference. Using intraday IBM stock prices, we find that high-frequency data are crucial to disentangling frequent small jumps from infrequent large jumps. During trading sessions, jumps are found to be frequent but small in magnitude, in sharp contrast to the infrequent but large jumps that occur when the market is closed. We also find that at the 5- or 10-minute sampling frequency, the conclusion depends critically on whether heavy-tailed microstructure noise has been accounted for. Ignoring microstructure noise can, for example, lead to an overestimation of the jump intensity by 50% or more. 
Keywords:  Particle filtering, jump-diffusion, maximum likelihood, EM algorithm 
JEL:  C22 
Date:  2007 
URL:  http://d.repec.org/n?u=RePEc:mnb:wpaper:2007/4&r=ecm 
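As a minimal sketch of the filtering idea, here is a generic bootstrap particle filter for a latent log-price observed with noise. The paper's localized filter, the jump component, and the fixed-lag-smoothing Monte Carlo EM machinery are all omitted; model and parameter values are illustrative.

```python
import numpy as np

def bootstrap_particle_filter(y, n_particles, sigma_x, sigma_y, rng):
    """Bootstrap particle filter for a latent random walk observed in noise.

        x[t] = x[t-1] + sigma_x * eps_t   (latent efficient log-price)
        y[t] = x[t]   + sigma_y * eta_t   (observation with microstructure noise)

    Returns the filtered mean of x[t] and a log-likelihood estimate.
    """
    T = y.size
    x = y[0] + sigma_y * rng.standard_normal(n_particles)   # initialize near y[0]
    means = np.empty(T)
    loglik = 0.0
    for t in range(T):
        if t > 0:
            x = x + sigma_x * rng.standard_normal(n_particles)   # propagate
        w = np.exp(-0.5 * ((y[t] - x) / sigma_y) ** 2)           # likelihood weights
        loglik += np.log(w.mean() / (sigma_y * np.sqrt(2 * np.pi)))
        w /= w.sum()
        means[t] = np.sum(w * x)
        x = rng.choice(x, size=n_particles, p=w)                 # multinomial resampling
    return means, loglik

# Illustrative run: the filtered path should track the latent price
rng = np.random.default_rng(3)
true_x = np.cumsum(0.01 * rng.standard_normal(200))
y = true_x + 0.02 * rng.standard_normal(200)
means, loglik = bootstrap_particle_filter(y, 2000, 0.01, 0.02, rng)
```

In the Monte Carlo EM setting, many such filtered (and fixed-lag smoothed) paths stand in for the intractable E-step expectations when maximizing the likelihood.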
By:  Richard H. Cohen (College of Business and Public Policy, University of Alaska Anchorage); Carl Bonham (Department of Economics and University of Hawaii Economic Research Organization, University of Hawaii at Manoa) 
Abstract:  This paper contributes to the literature on the modeling of survey forecasts using learning variables. We use individual industry data on yen-dollar exchange rate predictions at the two-week, three-month, and six-month horizons supplied by the Japan Center for International Finance. Compared to earlier studies, our focus is not on testing a single type of learning model, whether univariate or mixed, but on searching over many types of learning models to determine whether any are congruent. In addition to including the standard expectational variables (adaptive, extrapolative, and regressive), we also include a set of interactive variables which allow for lagged dependence of one industry’s forecast on the others. Our search produces a remarkably small number of congruent specifications, even when we allow for 1) a flexible lag specification, 2) endogenous break points, and 3) an expansion of the initial list of regressors to include lagged dependent variables, using a General-to-Specific modeling strategy. We conclude that, regardless of forecasters’ ability to produce rational forecasts, they are not only “different,” but different in ways that cannot be adequately represented by learning models. 
Keywords:  Learning Models, Exchange Rate, Survey Forecasts 
Date:  2007–07–25 
URL:  http://d.repec.org/n?u=RePEc:hai:wpaper:200718&r=ecm 
By:  Michael Pfaffermayr 
Abstract:  Empirical work on regional growth under spatial spillovers uses two workhorse models: the spatial Solow model and Verdoorn's model. This paper contrasts these two views of regional growth processes and demonstrates that in a spatial setting the speed of convergence is heterogeneous in both models, depending on the remoteness and the income gap of all regions. Furthermore, the paper introduces Wald tests for conditional spatial sigma-convergence based on a spatial maximum likelihood approach. Empirical estimates for 212 European regions covering the period 1980–2002 reveal a slow speed of convergence of about 0.7 percent per year under both models. However, pronounced heterogeneity in the convergence speed is evident. The Wald tests indicate significant conditional spatial sigma-convergence of about 2 percent per year under the spatial Solow model. Verdoorn's specification points to a smaller and insignificant variance reduction over the period considered. 
Keywords:  Conditional spatial beta- and sigma-convergence; spatial Solow model; Verdoorn's model; spatial maximum likelihood estimates; European regions 
JEL:  R11 C21 O47 
Date:  2007–08 
URL:  http://d.repec.org/n?u=RePEc:inn:wpaper:200717&r=ecm 
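For the basic concept behind the paper's tests, the sketch below measures unconditional sigma-convergence: a decline in the cross-regional dispersion of log income over time. The paper's conditional, spatial-ML Wald tests are not reproduced, and the simulated regional incomes are purely illustrative.

```python
import numpy as np

def sigma_convergence_rate(log_income_start, log_income_end, years):
    """Annualized rate of decline in cross-regional income dispersion.

    Sigma-convergence holds when the cross-sectional variance of log income
    falls over time; the rate reported here is the average per-year decline
    of the cross-sectional standard deviation (in log terms).
    """
    v0 = np.var(log_income_start, ddof=1)
    v1 = np.var(log_income_end, ddof=1)
    return -np.log(v1 / v0) / (2.0 * years)

# Illustrative: 212 regions (as in the paper) whose dispersion shrinks
rng = np.random.default_rng(4)
y1980 = rng.normal(10.0, 0.30, size=212)
y2002 = 10.0 + 0.27 * (y1980 - 10.0) / 0.30     # std falls from 0.30 to 0.27
rate = sigma_convergence_rate(y1980, y2002, years=22)
```

A positive rate indicates sigma-convergence; the paper's contribution is to test this formally while conditioning on spatial dependence between regions.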
By:  Matthias Schonlau; Arthur Van Soest; Arie Kapteyn; Mick P. Couper 
Abstract:  Web surveys have several advantages compared to more traditional surveys with in-person interviews, telephone interviews, or mail surveys. Their most obvious potential drawback is that they may not be representative of the population of interest, because the subpopulation with access to the Internet is quite specific. This paper investigates propensity scores as a method for dealing with selection bias in web surveys. The authors' main example has an unusually rich sampling design, where the Internet sample is drawn from an existing, much larger probability sample that is representative of the US 50+ population and their spouses (the Health and Retirement Study). They use this to estimate propensity scores and to construct weights based on the propensity scores to correct for selectivity. They investigate whether propensity weights constructed from a relatively small set of variables are sufficient to correct the distributions of other variables so that they become representative of the population. If so, information about these other variables could be collected over the Internet alone. Using a backward stepwise regression, they find that at a minimum all demographic variables are needed to construct the weights. The propensity adjustment works well for many but not all variables investigated. For example, they find that correcting on the basis of socioeconomic status, using education level and personal income, is not enough to obtain a representative estimate of stock ownership. This casts some doubt on the common procedure of using a few basic variables to blindly correct for selectivity in convenience samples drawn over the Internet. Alternatives include providing non-Internet users with access to the Web or conducting web surveys in the context of mixed-mode surveys. 
Keywords:  surveys, methodology, computer programs 
JEL:  C42 C80 
Date:  2006–04 
URL:  http://d.repec.org/n?u=RePEc:ran:wpaper:279&r=ecm 
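A minimal sketch of the weighting idea: fit a logistic model for web-sample membership within the full probability sample, then weight web respondents by the inverse of their estimated selection probability. The logit fit uses plain gradient ascent, and the single covariate (age) and all parameters are illustrative, not from the paper.

```python
import numpy as np

def propensity_weights(X, in_web_sample, lr=0.1, n_iter=2000):
    """Inverse-propensity weights for a web subsample of a probability sample.

    Fits a logistic regression of web-sample membership on covariates X by
    gradient ascent, then weights each web respondent by the inverse of the
    estimated selection probability (non-respondents get weight zero).
    """
    n = X.shape[0]
    Z = np.column_stack([np.ones(n), X])              # add an intercept
    beta = np.zeros(Z.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Z @ beta))
        beta += lr * Z.T @ (in_web_sample - p) / n    # logistic log-likelihood gradient
    p = 1.0 / (1.0 + np.exp(-Z @ beta))
    w = np.where(in_web_sample, 1.0 / p, 0.0)
    return w / w.sum() * in_web_sample.sum()          # normalize to subsample size

# Illustrative 50+ population where younger members are more likely online
rng = np.random.default_rng(5)
age = rng.uniform(50, 90, size=3000)
p_true = 1.0 / (1.0 + np.exp(0.08 * (age - 60)))
web = rng.uniform(size=3000) < p_true
w = propensity_weights(((age - age.mean()) / age.std())[:, None], web)
```

Reweighting the web respondents by w should pull their age distribution back toward the full sample's; the paper's point is that such corrections can still fail for variables (like stock ownership) not well captured by the weighting covariates.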
By:  Julien Fouquau (LEO, Laboratoire d'Économie d'Orléans, CNRS UMR 6221, Université d'Orléans); Christophe Hurlin (LEO, Laboratoire d'Économie d'Orléans, CNRS UMR 6221, Université d'Orléans); Isabelle Rabaud (LEO, Laboratoire d'Économie d'Orléans, CNRS UMR 6221, Université d'Orléans) 
Abstract:  This paper proposes an original framework to determine the relative influence of five factors on the Feldstein and Horioka finding of a strong saving-investment association among OECD countries. Based on panel threshold regression models, we establish country-specific and time-specific saving retention coefficients for 24 OECD countries over the period 1960–2000. These coefficients are assumed to change smoothly, as a function of five threshold variables considered the most important in the literature devoted to the Feldstein and Horioka puzzle. The results show that the degree of openness, country size, and the current account to GDP ratio have the greatest influence on the investment-saving relationship. 
Keywords:  Feldstein-Horioka puzzle, Panel Smooth Threshold Regression models, saving-investment association, capital mobility 
Date:  2007–08–03 
URL:  http://d.repec.org/n?u=RePEc:hal:papers:halshs00156688_v2&r=ecm 
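The smooth-coefficient mechanism the abstract describes can be sketched with the logistic transition function typical of panel smooth threshold regression (PSTR) models: the saving retention coefficient moves continuously between two regimes as a threshold variable (here, openness) crosses a location parameter. All parameter values are illustrative, not estimates from the paper.

```python
import numpy as np

def logistic_transition(q, gamma, c):
    """Logistic transition function of a panel smooth threshold regression.

    g(q; gamma, c) = 1 / (1 + exp(-gamma * (q - c))) rises smoothly from
    0 to 1 as the threshold variable q crosses the location c; gamma
    controls how sharp the regime change is.
    """
    return 1.0 / (1.0 + np.exp(-gamma * (q - c)))

def retention_coefficient(q, beta0, beta1, gamma, c):
    """Regime-dependent saving retention coefficient beta0 + beta1 * g(q)."""
    return beta0 + beta1 * logistic_transition(q, gamma, c)

# Illustrative: retention falls from about 0.8 to about 0.3 as openness rises
openness = np.array([0.1, 0.5, 0.9])
b = retention_coefficient(openness, beta0=0.8, beta1=-0.5, gamma=20.0, c=0.5)
```

As gamma grows large, the logistic transition approaches a hard threshold and the model nests the discrete-regime panel threshold regression.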