nep-ecm New Economics Papers
on Econometrics
Issue of 2005‒10‒29
twelve papers chosen by
Sune Karlsson
Örebro University

  1. Mean-square-error Calculations for Average Treatment Effects By Guido W. Imbens; Whitney Newey; Geert Ridder
  2. Density Forecast Combination By Stephen Hall; James Mitchell
  3. Optimal combination of density forecasts By Stephen Hall; James Mitchell
  4. Panel Smooth Transition Regression Models By Andres Gonzalez; Timo Terasvirta; Dick van Dijk
  5. Unit Roots and Cointegration in Panels By Jörg Breitung; M. Hashem Pesaran
  6. The Limiting Power of Autocorrelation Tests in Regression Models with Linear Restrictions By Wan, Alan T.K.; Zou, Guohua; Banerjee, Anurag
  7. Model Selection Uncertainty and Detection of Threshold Effects By Pitarakis, Jean-Yves
  8. On Deconvolution as a First Stage Nonparametric Estimator By Yingyao Hu; Geert Ridder
  9. Why Panel Data? By Cheng Hsiao
  10. Modeling the FIBOR/EURIBOR Swap Term Structure: An Empirical Approach By Oliver Blaskowitz; Helmut Herwartz; Gonzalo de Cadenas Santiago
  11. A Dynamic Semiparametric Factor Model for Implied Volatility String Dynamics By Matthias Fengler; Wolfgang Härdle; Enno Mammen
  12. Yxilon – a Modular Open-Source Statistical Programming Language By Sigbert Klinke; Uwe Ziegenhagen; Yuval Guri

  1. By: Guido W. Imbens; Whitney Newey; Geert Ridder
    Abstract: This paper develops a new efficient estimator for the average treatment effect when selection for treatment is on observables. The new estimator is linear in the first-stage nonparametric estimator. This simplifies the derivation of the mean squared error (MSE) of the estimator as a function of the number of basis functions used in the first-stage nonparametric regression. We propose an estimator for the MSE and show that in large samples minimization of this estimator is equivalent to minimization of the population MSE.
    Keywords: Nonparametric Estimation, Imputation, Mean Squared Error, Order Selection
    JEL: C14 C20
    Date: 2005–09
    URL: http://d.repec.org/n?u=RePEc:scp:wpaper:05-34&r=ecm
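The imputation idea behind such an estimator can be sketched with a series (basis-function) first stage. The sketch below is illustrative, not the paper's implementation: the data-generating process, the polynomial basis, and the function names are all assumptions, with the number of basis terms K playing the role of the smoothing parameter that an MSE criterion would select.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: selection into treatment depends only on the observable x
n = 2000
x = rng.uniform(-1, 1, n)
p = 1 / (1 + np.exp(-x))                 # propensity score
w = rng.uniform(size=n) < p              # treatment indicator
y = 1.0 + np.sin(x) + 0.5 * w + rng.normal(0, 0.3, n)   # true ATE = 0.5

def series_fit(xs, ys, K):
    """OLS on a polynomial basis with K terms; returns the coefficient vector."""
    B = np.vander(xs, K, increasing=True)    # columns [1, x, x^2, ...]
    coef, *_ = np.linalg.lstsq(B, ys, rcond=None)
    return coef

def ate_imputation(K):
    """Impute both potential outcomes for every unit, then average the difference."""
    c1 = series_fit(x[w], y[w], K)           # first-stage fit of E[Y | X, W=1]
    c0 = series_fit(x[~w], y[~w], K)         # first-stage fit of E[Y | X, W=0]
    B = np.vander(x, K, increasing=True)
    return np.mean(B @ c1 - B @ c0)

for K in (2, 4, 8):
    print(K, round(ate_imputation(K), 3))
```

The MSE trade-off the paper studies shows up in K: too few basis terms biases the imputed regressions, too many inflates their variance.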
  2. By: Stephen Hall; James Mitchell
    Abstract: In this paper we investigate whether, and how far, density forecasts can sensibly be combined to produce a "better" pooled density forecast. In so doing we bring together two important but hitherto largely unrelated areas of the forecasting literature in economics: density forecasting and forecast combination. We provide simple Bayesian methods of pooling information across alternative density forecasts. We illustrate the proposed techniques in an application to two widely used published density forecasts for U.K. inflation, examining whether improved density forecasts for inflation, one year ahead, might in practice have been obtained by combining the Bank of England and NIESR density forecasts or "fan charts".
    Date: 2004–11
    URL: http://d.repec.org/n?u=RePEc:nsr:niesrd:249&r=ecm
  3. By: Stephen Hall; James Mitchell
    Abstract: This paper brings together two important but hitherto largely unrelated areas of the forecasting literature, density forecasting and forecast combination. It proposes a simple data-driven approach to direct combination of density forecasts using optimal weights.
    Date: 2004–11
    URL: http://d.repec.org/n?u=RePEc:nsr:niesrd:248&r=ecm
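The optimal-weighting idea can be illustrated with a linear opinion pool whose weight is chosen to maximize the average log score over past observations. This is a minimal sketch under assumed ingredients, not the paper's method: the two normal density forecasts, the simulated outcomes, and the grid search are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Truth: an inflation-like series; two competing normal density forecasts
T = 500
y = rng.normal(2.0, 1.0, T)
# Forecast A is well-centred but overdispersed; forecast B is biased but tight
dens_a = lambda v: np.exp(-0.5 * ((v - 2.0) / 2.0) ** 2) / (2.0 * np.sqrt(2 * np.pi))
dens_b = lambda v: np.exp(-0.5 * ((v - 2.5) / 1.0) ** 2) / (1.0 * np.sqrt(2 * np.pi))

def avg_log_score(w):
    """Average log predictive density of the linear pool w*A + (1-w)*B."""
    pool = w * dens_a(y) + (1 - w) * dens_b(y)
    return np.mean(np.log(pool))

# One-dimensional grid search for the optimal pooling weight
grid = np.linspace(0, 1, 101)
w_star = grid[np.argmax([avg_log_score(w) for w in grid])]
print(w_star, avg_log_score(w_star))
```

By construction the pooled density at the optimal weight scores at least as well, on the evaluation sample, as either forecast alone.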
  4. By: Andres Gonzalez (Banco de la Republica de Colombia, Stockholm School of Economics); Timo Terasvirta (Department of Economic Statistics, Stockholm School of Economics); Dick van Dijk (Econometric Institute, Erasmus University Rotterdam)
    Abstract: We develop a non-dynamic panel smooth transition regression model with fixed individual effects. The model is useful for describing heterogeneous panels with regression coefficients that vary across individuals and over time. Heterogeneity is allowed for by assuming that these coefficients are continuous functions of an observable variable, through a bounded transition function of this variable, and fluctuate between a limited number (often two) of "extreme regimes". The model can be viewed as a generalization of the threshold panel model of Hansen (1999). We extend the modelling strategy for univariate smooth transition regression models to the panel context. This comprises model specification based on homogeneity tests, parameter estimation, and diagnostic checking, including tests for parameter constancy and no remaining nonlinearity. The new model is applied to describe firms' investment decisions in the presence of capital market imperfections.
    Keywords: financial constraints; heterogeneous panel; investment; misspecification test; nonlinear modelling; panel data; smooth transition models
    JEL: C12 C23 C52 G31 G32
    Date: 2005–08–01
    URL: http://d.repec.org/n?u=RePEc:uts:rpaper:165&r=ecm
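The smooth transition mechanism can be sketched with the logistic transition function commonly used in this literature: a coefficient moves continuously between two extreme-regime values as an observable transition variable q crosses a location c, at a speed governed by gamma. The parameter values below are illustrative assumptions.

```python
import numpy as np

def transition(q, gamma, c):
    """Logistic transition function, bounded between 0 and 1."""
    return 1.0 / (1.0 + np.exp(-gamma * (q - c)))

# The regression coefficient drifts smoothly between two extreme regimes
beta_low, beta_high = 0.2, 0.9
q = np.linspace(-3, 3, 7)                  # values of the transition variable
g = transition(q, gamma=2.0, c=0.0)
beta_q = beta_low + (beta_high - beta_low) * g
print(np.round(beta_q, 3))
```

As gamma grows large the logistic function approaches a step, recovering the sharp-threshold panel model of Hansen (1999) as a limiting case.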
  5. By: Jörg Breitung; M. Hashem Pesaran
    Abstract: This paper provides a review of the literature on unit roots and cointegration in panels where the time dimension (T) and the cross section dimension (N) are relatively large. It distinguishes between the first generation tests, developed on the assumption of cross section independence, and the second generation tests, which allow for, in a variety of forms and degrees, the dependence that might prevail across the different units in the panel. In the analysis of cointegration, the hypothesis testing and estimation problems are further complicated by the possibility of cross section cointegration, which could arise if the unit roots in the different cross section units are due to common random walk components.
    Keywords: Panel Unit Roots, Panel Cointegration, Cross Section Dependence, Common Effects
    JEL: C12 C15 C22 C23
    Date: 2005–08
    URL: http://d.repec.org/n?u=RePEc:scp:wpaper:05-32&r=ecm
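A first-generation test of the kind reviewed here can be sketched by averaging Dickey-Fuller t-statistics over independent cross section units, the idea behind the Im-Pesaran-Shin test. The simulation below is an illustration under assumed parameter values, not code from the paper, and omits the standardization against tabulated critical values.

```python
import numpy as np

rng = np.random.default_rng(2)

def dickey_fuller_t(y):
    """t-statistic on rho in: Delta y_t = rho * y_{t-1} + e_t (no constant)."""
    dy, ylag = np.diff(y), y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)
    resid = dy - rho * ylag
    s2 = resid @ resid / (len(dy) - 1)
    return rho / np.sqrt(s2 / (ylag @ ylag))

N, T = 50, 200

# Panel of independent random walks: the unit root null holds for every unit
t_bar_rw = np.mean([dickey_fuller_t(np.cumsum(rng.normal(size=T)))
                    for _ in range(N)])

# Panel of stationary AR(1) series: the null is false for every unit
def ar1(phi):
    e, y = rng.normal(size=T), np.zeros(T)
    for t in range(1, T):
        y[t] = phi * y[t - 1] + e[t]
    return y

t_bar_st = np.mean([dickey_fuller_t(ar1(0.5)) for _ in range(N)])
print(round(t_bar_rw, 2), round(t_bar_st, 2))
```

Averaging over units sharpens the test, but only under the cross section independence that second generation tests relax.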
  6. By: Wan, Alan T.K.; Zou, Guohua; Banerjee, Anurag
    Abstract: It is well known that the Durbin-Watson and several other tests for first-order autocorrelation have limiting power of either zero or one in a linear regression model without an intercept, and tend to a constant lying strictly between these values when an intercept term is present. This paper considers the limiting power of these tests in models with restricted coefficients. Surprisingly, it is found that with linear restrictions on the coefficients, the limiting power can still drop to zero even with the inclusion of an intercept in the regression. It is also shown that for regressions with valid restrictions, these test statistics have algebraic forms equivalent to the corresponding statistics in the unrestricted model.
    Date: 2004–03–01
    URL: http://d.repec.org/n?u=RePEc:stn:sotoec:0405&r=ecm
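For reference, the Durbin-Watson statistic discussed here is simple to compute from OLS residuals. The simulation below uses an illustrative data-generating process (not from the paper) to show the familiar benchmark: values near 2 under independent errors, well below 2 under positive first-order autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(3)

def durbin_watson(resid):
    """DW = sum of squared first differences of residuals / residual sum of squares."""
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

# Regression with an intercept and one regressor
n = 300
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([1.0, 2.0])

def ols_resid(y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Case 1: i.i.d. errors
e_iid = rng.normal(size=n)
# Case 2: positively autocorrelated AR(1) errors with phi = 0.8
e_ar = np.empty(n)
e_ar[0] = rng.normal()
for t in range(1, n):
    e_ar[t] = 0.8 * e_ar[t - 1] + rng.normal()

dw_iid = durbin_watson(ols_resid(X @ beta_true + e_iid))
dw_ar = durbin_watson(ols_resid(X @ beta_true + e_ar))
print(round(dw_iid, 2), round(dw_ar, 2))
```

The paper's point concerns the asymptotic behavior of such statistics under linear restrictions on the coefficients, which this finite-sample illustration does not capture.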
  7. By: Pitarakis, Jean-Yves
    Abstract: Inferences about the presence or absence of threshold type nonlinearities in TAR models are conducted within models whose lag length has been estimated in a preliminary stage. Typically the null hypothesis of linearity is then tested against a threshold alternative in which the estimated lag length is imposed on each regime. In this paper we evaluate the properties of test statistics for detecting the presence of threshold effects in autoregressive models when this model uncertainty is taken into account. We show that this approach may lead to important distortions when the underlying model truly has threshold effects, by establishing the limiting properties of the estimated lag length in the misspecified linear autoregressive fit and assessing the impact of this model uncertainty on the power of the tests. We subsequently propose a full model selection based approach designed to jointly detect the presence of threshold effects and optimally specify their dynamics, and compare its performance with the traditional test based approach.
    Date: 2004–07–01
    URL: http://d.repec.org/n?u=RePEc:stn:sotoec:0409&r=ecm
  8. By: Yingyao Hu; Geert Ridder
    Abstract: We reconsider Taupin’s (2001) Integrated Nonlinear Regression (INLR) estimator for a nonlinear regression with a mismeasured covariate. We find that if we restrict the distribution of the measurement error to the class of range-restricted distributions, then weak smoothness assumptions suffice to ensure sqrt(n) consistency of the estimator. The restriction to such distributions is innocuous, because it does not affect the fit to the data. Our results show that deconvolution can be used in a nonparametric first step without imposing restrictive smoothness assumptions on the parametric model.
    Date: 2005–08
    URL: http://d.repec.org/n?u=RePEc:scp:wpaper:05-29&r=ecm
  9. By: Cheng Hsiao
    Abstract: We explain the proliferation of panel data studies in terms of (i) data availability, (ii) the greater capacity of panel data for modeling the complexity of human behavior than a single cross-section or time series can possibly allow, and (iii) challenging methodology. Advantages and issues of panel data modeling are also discussed.
    Keywords: Panel data; Longitudinal data; Unobserved heterogeneity; Random effects; Fixed effects
    Date: 2005–09
    URL: http://d.repec.org/n?u=RePEc:scp:wpaper:05-33&r=ecm
  10. By: Oliver Blaskowitz; Helmut Herwartz; Gonzalo de Cadenas Santiago
    Abstract: In this study we forecast the term structure of FIBOR/EURIBOR swap rates by means of recursive vector autoregressive (VAR) models. In advance, a principal components analysis (PCA) is adopted to reduce the dimensionality of the term structure. To evaluate ex-ante forecasting performance for particular short, medium and long term rates and for the level, slope and curvature of the swap term structure, we rely on measures of both statistical and economic performance. Whereas the statistical performance is investigated by means of the Henriksson-Merton statistic, the economic performance is assessed in terms of cash flows implied by alternative trading strategies. Arguing in favor of local homogeneity of term structure dynamics, we propose a data driven, adaptive model selection strategy to "predict the best forecasting model" out of a set of 100 alternative implementations of the PCA/VAR model. This approach is shown to outperform forecasting schemes relying on global homogeneity of the term structure.
    Keywords: Principal components, Factor analysis, Ex-ante forecasting, EURIBOR swap rates, Term structure, Trading strategies
    JEL: C32 C53 E43 G29
    Date: 2005–04
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2005-024&r=ecm
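The PCA/VAR pipeline can be sketched in three steps: compress the curve into a few principal component scores, fit a VAR on the scores, and map the forecast scores back to rates. The simulated rate panel, its two-factor structure, and the choice of two components below are all illustrative assumptions, not the paper's data or specification.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated panel of swap rates at 8 maturities driven by level and slope factors
T, M = 300, 8
mats = np.linspace(1, 10, M)
level = np.cumsum(rng.normal(0, 0.05, T)) + 4.0
slope = np.cumsum(rng.normal(0, 0.02, T))
rates = level[:, None] + slope[:, None] * (mats / 10) + rng.normal(0, 0.01, (T, M))

# Step 1: PCA to compress the curve into k factor scores
mean = rates.mean(axis=0)
U, S, Vt = np.linalg.svd(rates - mean, full_matrices=False)
k = 2
scores = (rates - mean) @ Vt[:k].T           # shape (T, k)

# Step 2: VAR(1) on the scores, fitted by least squares
Y, X = scores[1:], scores[:-1]
A, *_ = np.linalg.lstsq(X, Y, rcond=None)    # scores_t ~ scores_{t-1} @ A

# Step 3: one-step-ahead score forecast, mapped back to the full curve
f_next = scores[-1] @ A
curve_fc = mean + f_next @ Vt[:k]
print(np.round(curve_fc, 3))
```

The paper's adaptive scheme would additionally select the estimation window and model variant; here a single global fit stands in for that step.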
  11. By: Matthias Fengler; Wolfgang Härdle; Enno Mammen
    Abstract: A primary goal in modelling the implied volatility surface (IVS) for pricing and hedging is to reduce complexity. For this purpose one fits the IVS each day and applies a principal component analysis using a functional norm. This approach, however, neglects the degenerated string structure of the implied volatility data and may result in a modelling bias. We propose a dynamic semiparametric factor model (DSFM), which approximates the IVS in a finite dimensional function space. The key feature is that we fit only in the local neighborhood of the design points. Our approach is a combination of methods from functional principal component analysis and backfitting techniques for additive models. The model is found to perform approximately 10% better than a sticky moneyness model. Finally, based on the DSFM, we devise a generalized vega-hedging strategy for exotic options that are priced in the local volatility framework. The generalized vega-hedging extends the usual approaches employed in the local volatility framework.
    Keywords: Smile, local volatility, generalized additive model, backfitting, functional principal component analysis
    JEL: C14 G12
    Date: 2005–03
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2005-020&r=ecm
  12. By: Sigbert Klinke; Uwe Ziegenhagen; Yuval Guri
    Abstract: Statistical research has always been at the edge of available computing power. Huge datasets, e.g. in data mining or quantitative finance, and computationally intensive techniques, e.g. bootstrap methods, always require a little more computing power than is currently available. But the most popular statistical programming language, R, as well as statistical programming languages like S or XploRe, is interpreted, which makes these languages slow in computationally intensive areas. The common solution is to implement these routines in low-level programming languages like C/C++ or Fortran and subsequently integrate them as dynamic linked libraries (DLL) or shared object libraries (SO) in the statistical programming language.
    Keywords: statistical programming language, XploRe, Yxilon, Java, dynamic linked libraries, shared object libraries
    JEL: C80
    Date: 2005–03
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2005-018&r=ecm
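The integration pattern described in the abstract (an interpreted front end calling compiled routines through a shared library) can be sketched with Python's standard ctypes module loading the C math library. This is an illustration of the general DLL/SO mechanism, not of Yxilon or XploRe; the library lookup is platform-dependent and a Unix-like system is assumed.

```python
import ctypes
import ctypes.util

# Locate the C math library; find_library("m") resolves to libm on Unix.
# Falling back to CDLL(None) searches symbols already loaded into the process.
name = ctypes.util.find_library("m")
libm = ctypes.CDLL(name) if name else ctypes.CDLL(None)

# Declare the C signature so ctypes marshals doubles correctly
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

result = libm.cos(0.0)   # compiled C routine called from the interpreter
print(result)
```

In practice one compiles the performance-critical routine (in C/C++ or Fortran) into such a shared library and declares its signatures once, exactly as R's foreign function interface or XploRe's DLL mechanism does.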

This nep-ecm issue is ©2005 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.