nep-ecm New Economics Papers
on Econometrics
Issue of 2006‒07‒15
twenty papers chosen by
Sune Karlsson
Orebro University

  1. Efficient estimation of the semiparametric spatial autoregressive model By Peter Robinson
  2. Nonparametric instrumental variables estimation of a quantile regression model By Joel Horowitz; Simon Lee
  3. Forecasting in large cointegrated processes By Hiroaki Chigira; Taku Yamamoto
  4. Bootstrapping Neural tests for conditional heteroskedasticity By Carole Siani
  5. Graphical Methods for Investigating the Finite-sample Properties of Confidence Regions: an application to long memory By Christian de Peretti
  6. Robust Econometrics By Pavel Cizek; Wolfgang Härdle
  7. Inference in GARCH when some coefficients are equal to zero By Christian Francq
  8. Forecasting Irregularly Spaced UHF Financial Data: Realized Volatility vs UHF-GARCH Models By François-Éric Racicot; Raymond Théoret; Alain Coen
  9. Breaking trend panel unit root tests By Pui Sun Tam
  10. Back to square one: identification issues in DSGE models By Luca Sala; Fabio Canova
  11. Testing for Structural Breaks and other forms of Non-stationarity: a Misspecification Perspective By Maria Heracleous
  12. Identification and estimation of latent attitudes and their behavioral implications By Richard Spady
  13. Forecasting Financial Crises and Contagion in Asia using Dynamic Factor Analysis By Andrea Cipollini
  14. Forecasting the Term Structure of Variance Swaps By Kai Detlefsen; Wolfgang Härdle
  15. Beveridge-Nelson Decomposition with Markov Switching By Chin Nam Low; Heather Anderson; Ralph Snyder
  16. The Relation of Different Concepts of Causality in Econometrics By Michael Lechner
  17. A Dynamic Tobit Model for the Open Market Desk's Daily Reaction Function By George Monokroussos
  18. Testing Financial Integration: Finite Sample Motivated Methods By Marie-Claude Beaulieu; Marie-Hélène Gagnon
  19. Forecasting Inflation: the Relevance of Higher Moments By Jane M. Binner
  20. The predictive power of the present value model of stock prices By Geraldine Ryan

  1. By: Peter Robinson (Institute for Fiscal Studies and London School of Economics)
    Abstract: Efficient semiparametric and parametric estimates are developed for a spatial autoregressive model containing nonstochastic explanatory variables and innovations suspected to be non-normal. The main stress is on the case of distribution of unknown, nonparametric, form, where series nonparametric estimates of the score function are employed in adaptive estimates of parameters of interest. These estimates are as efficient as ones based on a correct form; in particular, they are more efficient than pseudo-Gaussian maximum likelihood estimates at non-Gaussian distributions. Two different adaptive estimates are considered. One entails a stringent condition on the spatial weight matrix and is suitable only when observations have substantially many "neighbours". The other adaptive estimate relaxes this requirement, at the expense of alternative conditions and possible computational expense. A Monte Carlo study of finite-sample performance is included.
    Keywords: Spatial autoregression; Efficient estimation; Adaptive estimation; Simultaneity bias.
    JEL: C13 C14 C21
    Date: 2006–05
  2. By: Joel Horowitz (Institute for Fiscal Studies and Northwestern University); Simon Lee (Institute for Fiscal Studies and University College London)
    Abstract: We consider nonparametric estimation of a regression function that is identified by requiring a specified quantile of the regression "error" conditional on an instrumental variable to be zero. The resulting estimating equation is a nonlinear integral equation of the first kind, which generates an ill-posed inverse problem. The integral operator and distribution of the instrumental variable are unknown and must be estimated nonparametrically. We show that the estimator is mean-square consistent, derive its rate of convergence in probability, and give conditions under which this rate is optimal in a minimax sense. The results of Monte Carlo experiments show that the estimator behaves well in finite samples.
    Keywords: Statistical inverse, endogenous variable, instrumental variable, optimal rate, nonlinear integral equation, nonparametric regression
    JEL: C13 C31
    Date: 2006–06
  3. By: Hiroaki Chigira; Taku Yamamoto
    Abstract: It is widely recognized that taking cointegration relationships into consideration is useful in forecasting cointegrated processes. However, there are a few practical problems when forecasting large cointegrated processes using the well-known vector error correction model. First, it is hard to identify the cointegration rank in large models. Second, since the number of parameters to be estimated tends to be large relative to the sample size in large models, estimators will have large standard errors, and so will forecasts. The purpose of the present paper is to propose a new procedure for forecasting large cointegrated processes, which is free from the above problems. In our Monte Carlo experiment, we find that our forecast gains accuracy when we work with a larger model as long as the ratio of the cointegration rank to the number of variables in the process is high.
    Keywords: Forecasting, Cointegration, Large Models
    JEL: C12 C32
    Date: 2006–06
  4. By: Carole Siani (University of Lyon 1)
    Keywords: Bootstrap, Artificial Neural Networks, ARCH models, inference tests
    JEL: C14 C15 C45
    Date: 2006–07–04
  5. By: Christian de Peretti (Department of Economics University of Evry-Val-d'Essonne)
    Keywords: Graphical method, confidence region, long memory, double bootstrap, inverting tests
    JEL: C14 C15 C63
    Date: 2006–07–04
  6. By: Pavel Cizek; Wolfgang Härdle
    Abstract: Econometrics often deals with data under conditions that are, from the statistical point of view, non-standard, such as heteroscedasticity or measurement errors, and estimation methods thus need either to be adapted to such conditions or at least to be insensitive to them. Methods insensitive to violations of certain assumptions, for example to the presence of heteroscedasticity, are in a broad sense referred to as robust (e.g., to heteroscedasticity). On the other hand, there is also a more specific meaning of the word 'robust', which stems from the field of robust statistics. This latter notion defines robustness rigorously in terms of the behavior of an estimator both at the assumed (parametric) model and in its neighborhood in the space of probability distributions. Even though the methods of robust statistics were long used only in the simplest settings, such as estimation of location, scale, or linear regression, they have recently motivated a range of new econometric methods, which we focus on in this chapter.
    Date: 2006–06
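  [Editor's illustration, not from the chapter] The distributional notion of robustness described above can be made concrete with a textbook example: a single gross outlier can move the sample mean arbitrarily far, while the sample median barely moves. A minimal numpy sketch (all values assumed for illustration):

```python
import numpy as np

# One gross outlier contaminates an otherwise well-behaved sample.
rng = np.random.default_rng(0)
clean = rng.normal(loc=5.0, scale=1.0, size=99)
contaminated = np.append(clean, 1e6)  # single contaminating observation

# The mean shifts by roughly (outlier - mean)/n; the median is nearly unmoved.
mean_shift = abs(contaminated.mean() - clean.mean())
median_shift = abs(np.median(contaminated) - np.median(clean))
```

This is the "behavior in a neighborhood of the assumed model" idea in miniature: the median has a bounded influence function, the mean does not.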
  7. By: Christian Francq
    Keywords: C12, C13, C22
    JEL: C13 C12 C22
    Date: 2006–07–04
  8. By: François-Éric Racicot (Département des sciences administratives, Université du Québec (Outaouais) et LRSP); Raymond Théoret (Département de stratégie des affaires, Université du Québec (Montréal)); Alain Coen (Département de stratégie des affaires, Université du Québec (Montréal))
    Abstract: A very promising literature has recently been devoted to the modeling of ultra-high-frequency (UHF) data. Our first aim is to develop an empirical application of Autoregressive Conditional Duration GARCH (ACD-GARCH) models and realized volatility to forecast future volatilities on irregularly spaced data. We also compare the out-of-sample performance of ACD-GARCH models with that of the realized volatility method. We propose a procedure to take time deformation into account and show how to use these models for computing daily VaR.
    Keywords: Realized volatility, Ultra High Frequency GARCH, time deformation, financial markets, Daily VaR.
    JEL: C22 C53 G14
    Date: 2006–07–06
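  [Editor's illustration, not the paper's code] The realized volatility benchmark mentioned above is simple to state: daily realized variance is the sum of squared intraday returns, and realized volatility is its square root. A minimal sketch with simulated 1-minute returns (sample size and scale are assumed):

```python
import numpy as np

# Simulated intraday returns standing in for, e.g., 390 one-minute returns.
rng = np.random.default_rng(42)
intraday_returns = rng.normal(0.0, 0.001, size=390)

# Daily realized variance: sum of squared intraday returns.
realized_variance = np.sum(intraday_returns ** 2)
# Daily realized volatility: its square root.
realized_volatility = np.sqrt(realized_variance)
```

With irregularly spaced UHF data, as in the paper, the returns would first have to be aligned or duration-adjusted; that step is what the ACD-GARCH machinery addresses.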
  9. By: Pui Sun Tam (University of Macau)
    Keywords: Panel unit root, structural breaks, response surface, bootstrap
    JEL: C12 C15 C23
    Date: 2006–07–04
  10. By: Luca Sala (Università Bocconi, IGIER); Fabio Canova; UPF
    Keywords: identification, dsge models
    JEL: C13 C51 C52
    Date: 2006–07–04
  11. By: Maria Heracleous (American University)
    Keywords: Maximum Entropy Bootstrap, Non-Stationarity
    Date: 2006–07–04
  12. By: Richard Spady (Institute for Fiscal Studies and European University Institute)
    Abstract: This paper (i) formalizes conditions under which a population distribution of categorical responses to attitudinal questions (‘items’) has a scale representation; (ii) develops tests for whether a particular sample of item responses is consistent with a scale representation; (iii) develops methods for nonparametrically estimating the relation between an outcome and a scale value; and (iv) generalizes the foregoing to the multi-scale case. An implication of these results is that the effect of multiple latent attitudes on behaviour can be identified, even though the attitudes of an individual can never be precisely observed. We illustrate our methods using survey data from the 1992 U.S. Presidential election, where the ‘outcome’ is an individual’s vote and the ‘items’ are expressions of social and policy preferences.
    Date: 2006–06
  13. By: Andrea Cipollini (University of Essex)
    Keywords: Financial Contagion, Dynamic Factor Model, Stochastic Simulation
    JEL: C32 C51 F34
    Date: 2006–07–04
  14. By: Kai Detlefsen; Wolfgang Härdle
    Abstract: Recently, Diebold and Li (2003) obtained good forecasting results for yield curves in a reparametrized Nelson-Siegel framework. We analyze similar modeling approaches for price curves of variance swaps, which nowadays serve as hedging instruments for options on realized variance. We consider the popular Heston model, reparametrize its variance swap price formula and model the entire variance swap curves by two exponential factors whose loadings evolve dynamically on a weekly basis. Generalizing this approach, we consider a reparametrization of the three-dimensional Nelson-Siegel factor model. We show that these factors can be interpreted as level, slope and curvature and how they can be estimated directly from characteristic points of the curves. Moreover, we analyze a semiparametric factor model. Estimating autoregressive models for the factor loadings, we obtain term structure forecasts that we compare, in addition, to the random walk and to the static Heston model that is often used in industry. In contrast to the results of Diebold and Li (2003) on yield curves, no model produces better forecasts of variance swap curves than the random walk, but forecasting the Heston model's loadings improves on the popular static Heston model. Moreover, the Heston model is better than the flexible semiparametric approach, which in turn outperforms the Nelson-Siegel model.
    Keywords: Term structure, Variance swap curve, Heston model, Nelson-Siegel curve, Semiparametric factor model
    JEL: G1 D4 C5
    Date: 2006–07
  15. By: Chin Nam Low (Melbourne Institute of Applied Economic and Social Research, The University of Melbourne); Heather Anderson (Australian National University); Ralph Snyder (Monash University)
    Abstract: In this paper, we consider the introduction of Markov-switching (MS) processes to both the permanent and transitory components of the Beveridge-Nelson (BN) decomposition. This new class of MS models within the context of BN decomposition provides an alternative framework in the study of business cycle asymmetry. Our approach incorporates Markov switching into a BN decomposition formulated in a single source of error state-space form, allowing regime switches in the long-run multiplier as well as in the short-run parameters.
    JEL: C22 C51 E32
    Date: 2006–07
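  [Editor's illustration, not the paper's code] The single-regime benchmark that the paper extends is the classical Beveridge-Nelson decomposition. For an AR(1) in first differences, dy_t = mu + phi*(dy_{t-1} - mu) + e_t, the BN permanent component has the closed form tau_t = y_t + (phi/(1-phi))*(dy_t - mu), with the cycle being the remainder. A minimal sketch, with phi and mu taken as known for clarity:

```python
import numpy as np

def bn_decompose_ar1(y, phi, mu):
    """Classical (single-regime) BN decomposition for an AR(1) in differences.

    Model: dy_t = mu + phi*(dy_{t-1} - mu) + e_t, |phi| < 1.
    BN trend: tau_t = y_t + (phi / (1 - phi)) * (dy_t - mu).
    Returns (trend, cycle) for t = 1, ..., T-1; trend + cycle == y[1:].
    """
    dy = np.diff(y)
    trend = y[1:] + (phi / (1.0 - phi)) * (dy - mu)
    cycle = y[1:] - trend
    return trend, cycle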
  16. By: Michael Lechner
    Abstract: Granger and Sims non-causality (GSNC) is compared to non-causality based on concepts popular in the microeconometrics and programme evaluation literature (potential outcome non-causality, PONC). GSNC is defined as a set of restrictions on joint distributions of random variables with observable sample counterparts, whereas PONC combines restrictions on partially unobservable variables (potential outcomes) with different identifying assumptions that relate potential to observable outcomes. Based on a dynamic model of potential outcomes, we find that in general neither concept implies the other without further assumptions. However, identifying assumptions of the sequential selection-on-observables type provide the link between the concepts, such that GSNC implies PONC, and vice versa.
    Keywords: Granger causality, Sims causality, Rubin causality, potential outcome model, dynamic treatments
    JEL: C21 C22 C23
    Date: 2006–06
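  [Editor's illustration, not from the paper] The observable-distribution side of the comparison (GSNC) is the familiar regression-based Granger test: x fails to Granger-cause y if lags of x add no explanatory power to a regression of y on its own lags. A minimal numpy sketch of the F statistic (lag length and data are assumed):

```python
import numpy as np

def granger_f_stat(y, x, lags=2):
    """F statistic for the null that lags of x do not help predict y.

    Restricted model: y_t on its own lags; unrestricted adds lags of x.
    """
    T = len(y)
    rows = T - lags
    Y = y[lags:]
    # Restricted design: constant plus own lags of y.
    Xr = np.column_stack([np.ones(rows)] +
                         [y[lags - j:T - j] for j in range(1, lags + 1)])
    # Unrestricted design: additionally the lags of x.
    Xu = np.column_stack([Xr] +
                         [x[lags - j:T - j] for j in range(1, lags + 1)])
    ssr = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    ssr_r, ssr_u = ssr(Xr), ssr(Xu)
    df = rows - Xu.shape[1]
    return ((ssr_r - ssr_u) / lags) / (ssr_u / df)
```

The paper's point is that a small (or large) value of this statistic restricts only observable joint distributions; linking it to potential-outcome causality requires the sequential selection-on-observables assumptions discussed in the abstract.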
  17. By: George Monokroussos (Economics University at Albany, SUNY)
    Keywords: Reserves, Federal Funds Rate, Open Market Operations, Open Market Desk, Censored Models, Data Augmentation, Markov Chain Monte Carlo, Gibbs Sampling, Time-Varying Parameter Models
    JEL: C15 C22 C24 E4
    Date: 2006–07–04
  18. By: Marie-Claude Beaulieu (Université Laval); Marie-Hélène Gagnon (Université Laval)
    Keywords: market integration, finite sample methods
    Date: 2006–07–04
  19. By: Jane M. Binner (Aston University)
    Keywords: relative price distribution, higher moments, out-of-sample inflation forecasting
    JEL: C22 C43 E27
    Date: 2006–07–04
  20. By: Geraldine Ryan (Economics University College Cork)
    Keywords: Present Value Model of Stock Prices; Nonlinear Unit Root Tests; Nonlinear Cointegration Tests; ESTAR-EGARCH model; Long Horizon Predictability Tests.
    JEL: G12 G14 C53
    Date: 2006–07–04

This nep-ecm issue is ©2006 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.