nep-ecm New Economics Papers
on Econometrics
Issue of 2007‒08‒18
nine papers chosen by
Sune Karlsson
Örebro University

  1. Multivariate GARCH models By Silvennoinen, Annastiina; Teräsvirta, Timo
  2. A Bayesian approach to bandwidth selection for multivariate kernel regression with an application to state-price density estimation. By Xibin Zhang; Robert D Brooks; Maxwell L King
  3. Tests of equal predictive ability with real-time data By Todd E. Clark; Michael W. McCracken
  4. Estimating Quadratic Variation When Quoted Prices Change by a Constant Increment By Jeremy Large
  5. How Frequently Does the Stock Price Jump? – An Analysis of High-Frequency Data with Microstructure Noises By Jin-Chuan Duan; András Fülöp
  6. Specifying the Forecast Generating Process for Exchange Rate Survey Forecasts By Richard H. Cohen; Carl Bonham
  7. Conditional Beta- and Sigma-Convergence in Space: A Maximum Likelihood Approach By Michael Pfaffermayr
  8. Selection Bias in Web Surveys and the Use of Propensity Scores By Matthias Schonlau; Arthur Van Soest; Arie Kapteyn; Mick P. Couper
  9. The Feldstein-Horioka Puzzle: a Panel Smooth Transition Regression Approach By Julien Fouquau; Christophe Hurlin; Isabelle Rabaud

  1. By: Silvennoinen, Annastiina (School of Finance and Economics, University of Technology, Sydney); Teräsvirta, Timo (CREATES, University of Aarhus and Department of Economic Statistics, Stockholm School of Economics)
    Abstract: This article reviews multivariate GARCH models. The most common GARCH models are presented and their properties considered, including semiparametric and nonparametric GARCH models. Existing specification and misspecification tests are discussed. Finally, an empirical example fits several multivariate GARCH models to the same data set and compares the results.
    Keywords: autoregressive conditional heteroskedasticity; modelling volatility; nonlinear GARCH; nonparametric GARCH; semiparametric GARCH;
    JEL: C32 C52
    Date: 2007–06–15
    URL: http://d.repec.org/n?u=RePEc:hhs:hastef:0669&r=ecm
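    For readers who want a concrete entry point to the class of models the survey covers, here is a minimal Python sketch (not taken from the article) of one of the simplest specifications, a bivariate constant-conditional-correlation (CCC) GARCH(1,1); all parameter values are illustrative.
      # Bivariate CCC-GARCH(1,1): each series has a univariate GARCH(1,1)
      # conditional variance, and the conditional covariance is a fixed
      # correlation rho times the conditional standard deviations.
      import numpy as np

      rng = np.random.default_rng(0)
      T = 1000
      omega = np.array([0.05, 0.10])   # illustrative GARCH constants
      alpha = np.array([0.08, 0.10])   # ARCH coefficients
      beta = np.array([0.90, 0.85])    # GARCH coefficients
      rho = 0.4                        # constant conditional correlation

      h = omega / (1 - alpha - beta)   # start at the unconditional variances
      R = np.array([[1.0, rho], [rho, 1.0]])
      L = np.linalg.cholesky(R)

      eps = np.zeros((T, 2))
      for t in range(T):
          z = L @ rng.standard_normal(2)              # correlated shocks
          eps[t] = np.sqrt(h) * z                     # scale by cond. std devs
          h = omega + alpha * eps[t] ** 2 + beta * h  # GARCH(1,1) recursion

      # Conditional covariance at t is H_t = D_t R D_t, D_t = diag(sqrt(h_t)).
      print("sample correlation:", np.corrcoef(eps.T)[0, 1])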
  2. By: Xibin Zhang; Robert D Brooks; Maxwell L King
    Abstract: Multivariate kernel regression is an important tool for investigating the relationship between a response and a set of explanatory variables. It is generally accepted that the performance of a kernel regression estimator largely depends on the choice of bandwidth rather than the kernel function. This nonparametric technique has been employed in a number of empirical studies including the state-price density estimation pioneered by Aït-Sahalia and Lo (1998). However, the widespread usefulness of multivariate kernel regression has been limited by the difficulty of computing a data-driven bandwidth. In this paper, we present a Bayesian approach to bandwidth selection for multivariate kernel regression. A Markov chain Monte Carlo algorithm is presented to sample the bandwidth vector and other parameters in a multivariate kernel regression model. A Monte Carlo study shows that the proposed bandwidth selector is more accurate than the rule-of-thumb bandwidth selector known as the normal reference rule of Scott (1992) and Bowman and Azzalini (1997). The proposed bandwidth selection algorithm is applied to a multivariate kernel regression model that is often used to estimate the state-price density of Arrow-Debreu securities. When applying the proposed method to the S&P 500 index options and the DAX index options, we find that for short-maturity options, the proposed Bayesian bandwidth selector produces a state-price density that differs markedly from the one produced by the subjective bandwidth selector discussed in Aït-Sahalia and Lo (1998).
    Keywords: Black-Scholes formula, Likelihood, Markov chain Monte Carlo, Posterior density.
    JEL: C11 C14 G12
    Date: 2007–08
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2007-11&r=ecm
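    The idea of treating the bandwidth as a parameter to be sampled can be illustrated in a few lines. The sketch below is a toy version under strong simplifying assumptions (one regressor, a Gaussian leave-one-out likelihood, a flat prior on the log bandwidth), not the authors' algorithm.
      # Random-walk Metropolis over a single Nadaraya-Watson bandwidth.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 200
      x = rng.uniform(-2, 2, n)
      y = np.sin(2 * x) + 0.3 * rng.standard_normal(n)

      def loo_log_lik(h):
          # Gaussian log-likelihood of leave-one-out Nadaraya-Watson residuals
          w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
          np.fill_diagonal(w, 0.0)
          resid = y - w @ y / w.sum(axis=1)
          sig2 = np.mean(resid ** 2)
          return -0.5 * n * (np.log(2 * np.pi * sig2) + 1)

      log_h = np.log(0.3)
      cur = loo_log_lik(np.exp(log_h))
      draws = []
      for _ in range(2000):                        # random-walk Metropolis
          prop = log_h + 0.2 * rng.standard_normal()
          cand = loo_log_lik(np.exp(prop))
          if np.log(rng.uniform()) < cand - cur:   # flat prior on log h
              log_h, cur = prop, cand
          draws.append(np.exp(log_h))

      print("posterior mean bandwidth:", np.mean(draws[500:]))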
  3. By: Todd E. Clark; Michael W. McCracken
    Abstract: This paper examines the asymptotic and finite-sample properties of tests of equal forecast accuracy applied to direct, multi-step predictions from both non-nested and nested linear regression models. In contrast to earlier work -- including West (1996), Clark and McCracken (2001, 2005), and McCracken (2006) -- our asymptotics take account of the real-time, revised nature of the data. Monte Carlo simulations indicate that our asymptotic approximations yield reasonable size and power properties in most circumstances. The paper concludes with an examination of the real-time predictive content of various measures of economic activity for inflation.
    Keywords: Forecasting
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:fip:fedkrw:rwp07-06&r=ecm
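    As background, the kind of test the paper studies can be illustrated with the textbook Diebold-Mariano statistic for non-nested models; the paper's contribution, the treatment of real-time data revisions and nested models, is not captured by this toy version.
      import numpy as np

      def dm_stat(e1, e2, h=1):
          # t-statistic for H0: equal MSE; rectangular HAC variance with
          # h - 1 autocovariance lags (h = forecast horizon)
          d = e1 ** 2 - e2 ** 2          # squared-error loss differential
          n = d.size
          dc = d - d.mean()
          lrv = dc @ dc / n              # lag-0 term
          for k in range(1, h):
              lrv += 2 * (dc[k:] @ dc[:-k]) / n
          return d.mean() / np.sqrt(lrv / n)

      rng = np.random.default_rng(2)
      e1 = rng.standard_normal(200)            # forecast errors, model 1
      e2 = 1.1 * rng.standard_normal(200)      # model 2: noisier errors
      print("DM statistic:", dm_stat(e1, e2))  # compare to N(0,1) critical values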
  4. By: Jeremy Large
    Abstract: For financial assets whose best quotes almost always change by jumping the market's price tick size (one cent, five cents, etc.), this paper proposes an estimator of Quadratic Variation which controls for microstructure effects. It measures the prevalence of alternations, where quotes jump back to their just-previous price. It defines a simple property called "uncorrelated alternation", which under certain conditions implies that the estimator is consistent in an asymptotic limit theory where jumps become very frequent and small. A feasible limit theory is developed and performs well in simulations.
    Keywords: Realized Volatility, Realized Variance, Quadratic Variation, Market Microstructure, High-Frequency Data, Pure Jump Process
    JEL: C10 C22 C80
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:oxf:wpaper:340&r=ecm
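    The raw ingredients of the estimator, counts of alternations and continuations in a tick-by-tick quote series, are easy to compute. The sketch below simulates a one-tick pure-jump path and tallies both; it illustrates the inputs only, not the paper's estimator or its scaling.
      import numpy as np

      rng = np.random.default_rng(3)
      tick = 0.01                              # one-cent price tick
      steps = rng.choice([-1, 1], size=5000)   # every move is exactly one tick
      price = 100 + tick * np.cumsum(steps)

      jumps = np.diff(price)
      # an alternation reverses the previous jump (back to the prior price);
      # a continuation repeats its direction
      signs = np.sign(jumps)
      alternations = int(np.sum(signs[1:] * signs[:-1] < 0))
      continuations = int(np.sum(signs[1:] * signs[:-1] > 0))
      print("alternations:", alternations, "continuations:", continuations)
      print("naive realized variance:", float(np.sum(jumps ** 2)))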
  5. By: Jin-Chuan Duan (Joseph L. Rotman School of Management, University of Toronto); András Fülöp (ESSEC Paris and CREST.)
    Abstract: The stock price is assumed to follow a jump-diffusion process which may exhibit time-varying volatilities. An econometric technique is then developed for this model and applied to high-frequency time series of stock prices that are subject to microstructure noises. Our method is based on first devising a localized particle filter and then employing fixed-lag smoothing in the Monte Carlo EM algorithm to perform maximum likelihood estimation and inference. Using intra-day IBM stock prices, we find that high-frequency data are crucial to disentangling frequent small jumps from infrequent large jumps. During trading sessions, jumps are found to be frequent but small in magnitude, in sharp contrast to the infrequent but large jumps that occur when the market is closed. We also find that at the 5- or 10-minute sampling frequency, the conclusions depend critically on whether heavy-tailed microstructure noises have been accounted for. Ignoring microstructure noises can, for example, lead to an overestimation of the jump intensity by 50% or more.
    Keywords: Particle filtering, jump-diffusion, maximum likelihood, EM-algorithm.
    JEL: C22
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:mnb:wpaper:2007/4&r=ecm
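    A stripped-down version of the filtering problem can convey the mechanics. The sketch below runs a plain bootstrap particle filter on a toy jump-diffusion observed with Gaussian microstructure noise; the authors' localized filter, fixed-lag smoothing, and Monte Carlo EM steps are all omitted, and every parameter value is made up.
      import numpy as np

      rng = np.random.default_rng(4)
      T, N = 200, 1000                       # observations, particles
      sigma, lam = 0.01, 0.05                # diffusion vol, jump probability
      jump_sd, noise_sd = 0.05, 0.005        # jump size, microstructure noise

      x = np.zeros(T)                        # simulate the latent log-price
      for t in range(1, T):
          jump = (rng.uniform() < lam) * rng.normal(0, jump_sd)
          x[t] = x[t - 1] + rng.normal(0, sigma) + jump
      y = x + rng.normal(0, noise_sd, T)     # noisy observed prices

      parts = np.zeros(N)                    # bootstrap particle filter
      loglik = 0.0
      for t in range(1, T):
          jumps = (rng.uniform(size=N) < lam) * rng.normal(0, jump_sd, N)
          parts = parts + rng.normal(0, sigma, N) + jumps       # propagate
          w = np.exp(-0.5 * ((y[t] - parts) / noise_sd) ** 2) + 1e-300
          loglik += np.log(w.mean())         # up to an additive constant
          parts = parts[rng.choice(N, N, p=w / w.sum())]        # resample
      print("particle-filter log-likelihood (up to a constant):", loglik)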
  6. By: Richard H. Cohen (College of Business and Public Policy, University of Alaska Anchorage); Carl Bonham (Department of Economics and University of Hawaii Economic Research Organization, University of Hawaii at Manoa)
    Abstract: This paper contributes to the literature on modeling survey forecasts with learning variables. We use individual industry data on yen-dollar exchange rate predictions at the two-week, three-month, and six-month horizons supplied by the Japan Center for International Finance. Compared to earlier studies, our focus is not on testing a single type of learning model, whether univariate or mixed, but on searching over many types of learning models to determine if any are congruent. In addition to the standard expectational variables (adaptive, extrapolative, and regressive), we also include a set of interactive variables which allow for lagged dependence of one industry's forecast on the others. Our search produces a remarkably small number of congruent specifications, even when we allow for 1) a flexible lag specification, 2) endogenous break points, and 3) an expansion of the initial list of regressors to include lagged dependent variables together with a general-to-specific modeling strategy. We conclude that, regardless of forecasters' ability to produce rational forecasts, they are not only “different,” but different in ways that cannot be adequately represented by learning models.
    Keywords: Learning Models, Exchange Rate, Survey Forecasts
    Date: 2007–07–25
    URL: http://d.repec.org/n?u=RePEc:hai:wpaper:200718&r=ecm
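    The three standard expectational variables named in the abstract are simple transformations of the spot-rate and forecast series. The sketch below constructs them on simulated data and runs a single OLS regression; it illustrates the regressors only, not the authors' search over congruent specifications.
      import numpy as np

      rng = np.random.default_rng(5)
      T = 120
      s = 100 + np.cumsum(rng.normal(0, 1, T))   # spot rate (simulated)
      f = s + rng.normal(0, 0.5, T)              # survey forecasts (simulated)
      sbar = s.mean()                            # crude long-run level

      dep = f[1:] - s[1:]                        # expected change in the rate
      X = np.column_stack([
          np.ones(T - 1),
          s[1:] - f[:-1],                        # adaptive: last forecast error
          s[1:] - s[:-1],                        # extrapolative: last change
          sbar - s[1:],                          # regressive: gap to long run
      ])
      beta, *_ = np.linalg.lstsq(X, dep, rcond=None)
      print("OLS coefficients [const, adaptive, extrapolative, regressive]:")
      print(beta)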
  7. By: Michael Pfaffermayr
    Abstract: Empirical work on regional growth under spatial spillovers uses two workhorse models: the spatial Solow model and Verdoorn's model. This paper contrasts these two views of regional growth processes and demonstrates that in a spatial setting the speed of convergence is heterogeneous in both models, depending on the remoteness and the income gap of all regions. Furthermore, the paper introduces Wald tests for conditional spatial sigma-convergence based on a spatial maximum likelihood approach. Empirical estimates for 212 European regions covering the period 1980-2002 reveal a slow speed of convergence of about 0.7 percent per year under both models. However, pronounced heterogeneity in the convergence speed is evident. The Wald tests indicate significant conditional spatial sigma-convergence of about 2 percent per year under the spatial Solow model. Verdoorn's specification points to a smaller and insignificant variance reduction over the period considered.
    Keywords: Conditional spatial Beta- and Sigma-convergence; Spatial Solow model; Verdoorn's model; Spatial maximum likelihood estimates; European regions
    JEL: R11 C21 O47
    Date: 2007–08
    URL: http://d.repec.org/n?u=RePEc:inn:wpaper:2007-17&r=ecm
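    For orientation, the non-spatial building block is the textbook beta-convergence regression and its implied speed of convergence. The sketch below runs it on simulated regional data, with the coefficient chosen so the implied speed lands near the 0.7 percent per year the paper reports; the spatial lag and the heterogeneous, ML-based machinery of the paper are not attempted here.
      import numpy as np

      rng = np.random.default_rng(6)
      n, T = 212, 22                       # 212 regions, 1980-2002
      log_y0 = rng.normal(9.5, 0.5, n)     # initial log income (simulated)
      b_true = -0.0065                     # chosen so the implied speed is
                                           # near the paper's 0.7% per year
      growth = 0.08 + b_true * log_y0 + rng.normal(0, 0.005, n)

      X = np.column_stack([np.ones(n), log_y0])
      a_hat, b_hat = np.linalg.lstsq(X, growth, rcond=None)[0]
      lam = -np.log(1 + b_hat * T) / T     # implied annual convergence speed
      print(f"beta-hat = {b_hat:.4f}, implied speed = {100 * lam:.2f}% per year")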
  8. By: Matthias Schonlau; Arthur Van Soest; Arie Kapteyn; Mick P. Couper
    Abstract: Web surveys have several advantages over more traditional surveys with in-person interviews, telephone interviews, or mail surveys. Their most obvious potential drawback is that they may not be representative of the population of interest, because the sub-population with access to the Internet is quite specific. This paper investigates propensity scores as a method for dealing with selection bias in web surveys. The authors' main example has an unusually rich sampling design, where the Internet sample is drawn from an existing, much larger probability sample that is representative of the US 50+ population and their spouses (the Health and Retirement Study). They use this to estimate propensity scores and to construct weights based on the propensity scores to correct for selectivity. They investigate whether propensity weights constructed on the basis of a relatively small set of variables are sufficient to correct the distributions of other variables so that these become representative of the population. If this is the case, information about these other variables could be collected over the Internet only. Using a backward stepwise regression they find that, at a minimum, all demographic variables are needed to construct the weights. The propensity adjustment works well for many but not all variables investigated. For example, they find that correcting on the basis of socio-economic status, using education level and personal income, is not enough to get a representative estimate of stock ownership. This casts some doubt on the common practice of blindly correcting for selectivity in convenience samples drawn over the Internet using only a few basic variables. Alternatives include providing non-Internet users with access to the Web or conducting web surveys in the context of mixed-mode surveys.
    Keywords: surveys, methodology, computer programs
    JEL: C42 C80
    Date: 2006–04
    URL: http://d.repec.org/n?u=RePEc:ran:wpaper:279&r=ecm
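    The core adjustment can be sketched compactly: fit a logistic model for web participation on demographics, then reweight web respondents by inverse propensity scores. The data and covariates below are simulated placeholders, not the HRS-based design the paper exploits.
      import numpy as np

      rng = np.random.default_rng(7)
      n = 5000
      age = rng.uniform(50, 90, n)                 # simulated demographics
      educ = rng.integers(8, 21, n).astype(float)
      # web participation is more likely for younger, better-educated people
      true_logit = -1.0 - 0.06 * (age - 50) + 0.15 * (educ - 12)
      web = rng.uniform(size=n) < 1 / (1 + np.exp(-true_logit))

      X = np.column_stack([np.ones(n), age, educ])
      beta = np.zeros(3)
      for _ in range(25):                          # Newton-Raphson (IRLS)
          p = 1 / (1 + np.exp(-X @ beta))
          W = p * (1 - p)
          beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (web - p))

      p_hat = 1 / (1 + np.exp(-X @ beta))
      ipw = 1 / p_hat[web]                         # inverse-propensity weights
      print("mean age, web sample (unweighted):", age[web].mean())
      print("mean age, web sample (IPW):", np.average(age[web], weights=ipw))
      print("mean age, full population:", age.mean())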
  9. By: Julien Fouquau (LEO - Laboratoire d'économie d'Orleans - [CNRS : UMR6221] - [Université d'Orléans]); Christophe Hurlin (LEO - Laboratoire d'économie d'Orleans - [CNRS : UMR6221] - [Université d'Orléans]); Isabelle Rabaud (LEO - Laboratoire d'économie d'Orleans - [CNRS : UMR6221] - [Université d'Orléans])
    Abstract: This paper proposes an original framework to determine the relative influence of five factors on the Feldstein and Horioka result of OECD countries with a strong saving-investment association. Based on panel threshold regression models, we establish country-specific and time-specific saving retention coefficients for 24 OECD countries over the period 1960-2000. These coefficients are assumed to change smoothly, as a function of five threshold variables, considered as the most important in the literature devoted to the Feldstein and Horioka puzzle. The results show that degree of openness, country size and current account to GDP ratios have the greatest influence on the investment-saving relationship.
    Keywords: Feldstein Horioka puzzle, Panel Smooth Threshold Regression models, saving-investment association, capital mobility.
    Date: 2007–08–03
    URL: http://d.repec.org/n?u=RePEc:hal:papers:halshs-00156688_v2&r=ecm
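    The mechanics of a smooth transition coefficient are easy to show. In the sketch below, a saving retention coefficient moves between two regimes as a logistic function of a threshold variable such as openness; all parameter values are hypothetical, not the paper's estimates.
      import numpy as np

      def g(q, gamma, c):
          # logistic transition function, bounded between 0 and 1
          return 1 / (1 + np.exp(-gamma * (q - c)))

      # I/Y = a_i + [b0 + b1 * g(q; gamma, c)] * S/Y + e: the saving
      # retention coefficient varies smoothly with the threshold variable q
      b0, b1, gamma, c = 0.8, -0.5, 4.0, 0.5   # hypothetical values
      for q in np.linspace(0, 1, 5):           # q = openness, say
          coef = b0 + b1 * g(q, gamma, c)
          print(f"openness = {q:.2f}  saving retention coefficient = {coef:.3f}")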

This nep-ecm issue is ©2007 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.