nep-ecm New Economics Papers
on Econometrics
Issue of 2007‒03‒31
fourteen papers chosen by
Sune Karlsson
Örebro University

  1. The Weak Instrument Problem of the System GMM Estimator in Dynamic Panel Data Models By Maurice J.G. Bun; Frank Windmeijer
  2. A robust partial least squares method with applications By Javier Gonzalez; Daniel Pena; Rosario Romera
  3. Realized Correlation Tick-by-Tick By Fulvio Corsi; Francesco Audrino
  4. The Unobserved Heterogeneity Distribution in Duration Analysis By Abbring, Jaap H; van den Berg, Gerard J
  5. Model comparison using the Hansen-Jagannathan distance By Raymond Kan; Cesare Robotti
  6. Linear cointegration of nonlinear time series with an application to interest rate dynamics By Barry E. Jones; Travis D. Nesmith
  7. Term Structure Forecasting: No-Arbitrage Restrictions vs Large Information Set By Favero, Carlo A; Niu, Linlin; Sala, Luca
  8. Multivariate Realized Stock Market Volatility By Gregory H. Bauer; Keith Vorkink
  9. Estimation, Learning and Parameters of Interest in a Multiple Outcome Selection Model By Tobias, Justin
  10. Income Convergence: The Dickey-Fuller Test under the Simultaneous Presence of Stochastic and Deterministic Trends By Manuel Gomez; Daniel Ventosa-Santaularia
  11. Efficient Likelihood Analysis and Filtering for State-Space Representations By David N. DeJong; Hariharan Dharmarajan; Roman Liesenfeld; Jean-Francois Richard
  12. Identification of Peer Effects through Social Networks By Yann Bramoullé; Habiba Djebbari; Bernard Fortin
  13. Adjusting Imperfect Data: Overview and Case Studies By Lars Vilhuber
  14. Rational seasonality By Travis D. Nesmith

  1. By: Maurice J.G. Bun; Frank Windmeijer
    Abstract: The system GMM estimator for dynamic panel data models combines moment conditions for the model in first differences with moment conditions for the model in levels. It has been shown to improve on the GMM estimator in the first-differenced model in terms of bias and root mean squared error. However, we show in this paper that in the covariance stationary panel data AR(1) model the expected values of the concentration parameters in the differenced and levels equations for the cross-section at time t are the same when the variances of the individual heterogeneity and idiosyncratic errors are the same. This indicates a weak instrument problem also for the equation in levels. We show that the biases of 2SLS relative to those of OLS are then similar for the equations in differences and levels, as are the size distortions of the Wald tests. A Monte Carlo study shows that these results extend to the panel data system GMM estimator.
    Keywords: Dynamic Panel Data, System GMM, Weak Instruments
    JEL: C12 C13 C23
    URL: http://d.repec.org/n?u=RePEc:bri:uobdis:06/595&r=ecm
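    For readers wanting the setup in symbols, a minimal sketch of the moment conditions combined by the system GMM estimator in the panel AR(1) model discussed above, in standard Arellano-Bond/Blundell-Bond notation (illustrative, not taken from the paper):
      Model:                 y_{it} = \alpha y_{i,t-1} + \eta_i + v_{it}
      Differenced equation:  E[ y_{i,t-s} \, \Delta v_{it} ] = 0,  for s \geq 2
      Levels equation:       E[ \Delta y_{i,t-1} \, (\eta_i + v_{it}) ] = 0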
  2. By: Javier Gonzalez; Daniel Pena; Rosario Romera
    Abstract: Partial least squares regression (PLS) is a linear regression technique developed to relate many regressors to one or several response variables. Robust methods are introduced to reduce or remove the effect of outlying data points. In this paper we show that if the sample covariance matrix is properly robustified, further robustification of the linear regression steps of the PLS algorithm becomes unnecessary. The robust estimate of the covariance matrix is computed by searching for outliers in univariate projections of the data on a combination of random directions (Stahel-Donoho) and specific directions obtained by maximizing and minimizing the kurtosis coefficient of the projected data, as proposed by Peña and Prieto (2006). It is shown that this procedure is fast to apply and provides better results than other procedures proposed in the literature. Its performance is illustrated by Monte Carlo simulations and by an example in which the algorithm reveals features of the data that were undetected by previous methods.
    Date: 2007–03
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws071304&r=ecm
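    A minimal Python sketch of the idea of robustifying the covariance matrix through univariate projections before running PLS. It uses random directions only (a rough Stahel-Donoho-style screen); the kurtosis-based directions of Peña and Prieto (2006) used in the paper are not implemented, and all names and thresholds are illustrative:
      import numpy as np

      def robustified_covariance(X, n_directions=200, cutoff=3.5, seed=0):
          """Flag outliers by their worst standardized univariate projection
          on random directions, then compute the covariance of the rest."""
          rng = np.random.default_rng(seed)
          n, p = X.shape
          directions = rng.standard_normal((n_directions, p))
          directions /= np.linalg.norm(directions, axis=1, keepdims=True)
          outlyingness = np.zeros(n)
          for d in directions:
              proj = X @ d
              med = np.median(proj)
              mad = 1.4826 * np.median(np.abs(proj - med))  # robust scale estimate
              if mad > 0:
                  outlyingness = np.maximum(outlyingness, np.abs(proj - med) / mad)
          keep = outlyingness < cutoff
          return np.cov(X[keep], rowvar=False), keep
    The cleaned covariance matrix can then be passed to the usual PLS steps without further robustification, which is the point made in the abstract.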
  3. By: Fulvio Corsi; Francesco Audrino
    Abstract: We propose the Heterogeneous Autoregressive (HAR) model for the estimation and prediction of realized correlations. We construct a realized correlation measure where both the volatilities and the covariances are computed from tick-by-tick data. As with realized volatility, the presence of market microstructure noise can induce a significant bias in standard realized covariance measures computed from artificially regularly spaced returns. In contrast to these standard approaches, we analyse a simple and unbiased realized covariance estimator that does not resort to the construction of a regular grid, but directly and efficiently employs the raw tick-by-tick returns of the two series. Monte Carlo simulations calibrated on realistic market microstructure conditions show that this simple tick-by-tick covariance possesses no bias and the smallest dispersion among the covariance estimators considered in the study. In an empirical analysis of S&P 500 and US bond data we find that realized correlations show significant regime changes in reaction to financial crises. Such regimes must be taken into account to obtain reliable estimates and forecasts.
    Keywords: High frequency data, Realized Correlation, Market Microstructure, Bias correction, HAR, Regimes
    JEL: C13 C22 C51 C53
    Date: 2007–01
    URL: http://d.repec.org/n?u=RePEc:usg:dp2007:2007-02&r=ecm
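    For readers unfamiliar with the HAR structure, a minimal sketch of the heterogeneous autoregressive regression applied to a daily realized correlation series RC_t (the standard Corsi-style cascade; the tick-by-tick construction of RC_t itself is the contribution of the paper):
      RC_{t+1} = c + \beta^{(d)} RC_t + \beta^{(w)} RC_t^{(w)} + \beta^{(m)} RC_t^{(m)} + \epsilon_{t+1},
      where RC_t^{(w)} = (1/5) \sum_{j=0}^{4} RC_{t-j} and RC_t^{(m)} = (1/22) \sum_{j=0}^{21} RC_{t-j}.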
  4. By: Abbring, Jaap H; van den Berg, Gerard J
    Abstract: In a large class of hazard models with proportional unobserved heterogeneity, the distribution of the heterogeneity among survivors converges to a gamma distribution. This convergence is often rapid. We derive this result as a general result for exponential mixtures and explore its implications for the specification and empirical analysis of univariate and multivariate duration models.
    Keywords: duration analysis; exponential mixture; gamma distribution; limit distribution; mixed proportional hazard
    Date: 2007–03
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:6219&r=ecm
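    For reference, a sketch of the mixed proportional hazard setting to which the result applies, in standard notation (illustrative, not taken from the paper):
      \theta(t \mid x, V) = V \, \lambda(t) \, \exp(x'\beta),
    where V is the unobserved heterogeneity term. The result summarized above is that the distribution of V among the survivors at duration t converges, often rapidly, to a gamma distribution.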
  5. By: Raymond Kan; Cesare Robotti
    Abstract: Although it is of interest to empirical researchers to test whether or not a particular asset-pricing model is true, a more useful task is to determine how wrong a model is and to compare the performance of competing asset-pricing models. In this paper, we propose a new methodology to test whether two competing linear asset-pricing models have the same Hansen-Jagannathan distance. We show that the asymptotic distribution of the test statistic depends on whether the competing models are correctly specified or misspecified and on whether they are nested or non-nested. In addition, given the increasing interest in misspecified models, we propose a simple methodology for computing the standard errors of the estimated stochastic discount factor parameters that are robust to model misspecification. Using the same data as in Hodrick and Zhang (2001), we show that the commonly used returns and factors are, for the most part, too noisy to conclude that one model is superior to the others in terms of Hansen-Jagannathan distance. In addition, we show that many of the macroeconomic factors commonly used in the literature are no longer priced once potential model misspecification is taken into account.
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:fip:fedawp:2007-04&r=ecm
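    As a reminder, the Hansen-Jagannathan distance of a candidate linear stochastic discount factor y_t(\lambda) with respect to a vector of N test-asset gross returns R_t is (Hansen and Jagannathan, 1997):
      \delta = \min_{\lambda} \sqrt{ E[y_t(\lambda) R_t - 1_N]' \, (E[R_t R_t'])^{-1} \, E[y_t(\lambda) R_t - 1_N] },
    i.e. the pricing errors weighted by the inverse second-moment matrix of returns; the test proposed in the paper asks whether two competing models produce the same \delta.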
  6. By: Barry E. Jones; Travis D. Nesmith
    Abstract: We derive a definition of linear cointegration for nonlinear stochastic processes using a martingale representation theorem. The result shows that stationary linear cointegrations can exhibit nonlinear dynamics, in contrast with the usual assumption of linearity. We propose a sequential nonparametric method to test first for cointegration and second for nonlinear dynamics in the cointegrated system. We apply this method to weekly US interest rates constructed using a multirate filter rather than averaging. The Treasury bill, commercial paper and federal funds rates are cointegrated, with two cointegrating vectors. Both cointegrating relations behave nonlinearly. Consequently, linear models will not fully replicate the dynamics of monetary policy transmission.
    Keywords: Time-series analysis ; Cointegration ; Interest rates
    Date: 2006
    URL: http://d.repec.org/n?u=RePEc:fip:fedgfe:2007-03&r=ecm
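    A one-line reminder of what linear cointegration means here: each component of the vector series x_t is integrated, yet there exists a non-zero vector \beta such that the linear combination \beta' x_t is stationary. The paper's point is that this stationary combination can still display nonlinear dynamics; with three interest rates and two cointegrating vectors, \beta is a 3 x 2 matrix.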
  7. By: Favero, Carlo A; Niu, Linlin; Sala, Luca
    Abstract: This paper addresses the issue of forecasting the term structure. We provide a unified state-space modelling framework that encompasses different existing discrete-time yield curve models. Within this framework we analyze the impact on forecasting performance of two crucial modelling choices: the imposition of no-arbitrage restrictions and the size of the information set used to extract factors. Using US yield curve data, we find that: a. macro factors are very useful for forecasting at medium/long forecasting horizons; b. financial factors are useful for short-run forecasting; c. no-arbitrage models are effective in shrinking the dimensionality of the parameter space and, when supplemented with additional macro information, are very effective in forecasting; d. within no-arbitrage models, assuming a time-varying price of risk is preferable to assuming a constant price of risk for medium horizon-maturity forecasts when yield factors dominate the information set, and for short-horizon, long-maturity forecasts when macro factors dominate the information set; e. however, given the complexity and the highly non-linear parameterization of no-arbitrage models, it is very difficult to exploit within this type of model the additional information offered by large macroeconomic datasets.
    Keywords: factor models; forecasting; large data set; term structure of interest rates; Yield curve
    JEL: C33 C53 E43 E44
    Date: 2007–03
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:6206&r=ecm
  8. By: Gregory H. Bauer; Keith Vorkink
    Abstract: We present a new matrix-logarithm model of the realized covariance matrix of stock returns. The model uses latent factors which are functions of both lagged volatility and returns. The model has several advantages: it is parsimonious; it does not require imposing parameter restrictions; and, it results in a positive-definite covariance matrix. We apply the model to the covariance matrix of size-sorted stock returns and find that two factors are sufficient to capture most of the dynamics. We also introduce a new method to track an index using our model of the realized volatility covariance matrix.
    Keywords: Econometric and statistical methods; Financial markets
    JEL: G14 C53 C32
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:bca:bocawp:07-20&r=ecm
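    A minimal Python sketch of the matrix-logarithm device this kind of model relies on: the realized covariance matrix is mapped to its matrix logarithm (an unconstrained symmetric matrix that can be modelled freely), and mapping any symmetric matrix back through the matrix exponential always yields a positive-definite covariance matrix. Names and numbers are illustrative only:
      import numpy as np

      def cov_to_logcov(S):
          """Matrix logarithm of a symmetric positive-definite matrix."""
          w, V = np.linalg.eigh(S)
          return V @ np.diag(np.log(w)) @ V.T

      def logcov_to_cov(A):
          """Matrix exponential of a symmetric matrix; always positive definite."""
          w, V = np.linalg.eigh(A)
          return V @ np.diag(np.exp(w)) @ V.T

      S = np.array([[1.0, 0.8],
                    [0.8, 2.0]])              # a realized covariance matrix
      A = cov_to_logcov(S)                    # unconstrained log-covariance
      A_forecast = A + 0.1 * np.eye(2)        # any symmetric modification is allowed
      S_forecast = logcov_to_cov(A_forecast)  # guaranteed positive definite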
  9. By: Tobias, Justin
    Abstract: We describe estimation, learning and prediction in a treatment-response model with two outcomes. The introduction of potential outcomes in this model introduces four cross-regime correlation parameters that are not contained in the likelihood for the observed data and thus are not identified. Despite this inescapable identification problem, we build upon the results of Koop and Poirier (1997) to describe how learning takes place about the four non-identified correlations through the imposed positive definiteness of the covariance matrix. We then derive bivariate distributions associated with commonly estimated "treatment parameters" (including the Average Treatment Effect and the effect of Treatment on the Treated), and use the learning that takes place about the non-identified correlations to calculate these densities. We illustrate our points in several generated data experiments and apply our methods to estimate the joint impact of child labor on achievement scores in language and mathematics.
    Keywords: Bayesian econometrics, Treatment effects
    Date: 2005–11–30
    URL: http://d.repec.org/n?u=RePEc:isu:genres:12480&r=ecm
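    A small Python illustration of the mechanism through which learning about non-identified correlations occurs: positive definiteness of the full correlation matrix bounds the values a non-identified correlation can take once the identified correlations are known. The example uses a simplified 3 x 3 matrix with one free entry and purely hypothetical numbers; the model in the paper has four non-identified cross-regime correlations:
      import numpy as np

      r_s1, r_s2 = 0.6, -0.4            # identified correlations (hypothetical values)
      candidates = np.linspace(-0.99, 0.99, 199)

      def is_pos_def(r12):
          R = np.array([[1.0,  r_s1, r_s2],
                        [r_s1, 1.0,  r12 ],
                        [r_s2, r12,  1.0 ]])
          return np.all(np.linalg.eigvalsh(R) > 0)

      admissible = candidates[[is_pos_def(r) for r in candidates]]
      print(admissible.min(), admissible.max())  # bounds implied by positive definiteness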
  10. By: Manuel Gomez (School of Economics, Universidad de Guanajuato); Daniel Ventosa-Santaularia (School of Economics, Universidad de Guanajuato)
    Abstract: We investigate the efficiency of the Dickey-Fuller (DF) test as a tool to examine the convergence hypothesis. In doing so, we first describe two possible outcomes, overlooked in previous studies, namely Loose Catching-up and Loose Lagging-behind. Results suggest that this test is useful when the intention is to discriminate between a unit root process and a trend stationary process, though unreliable when used to differentiate between a unit root process and a process with both deterministic and stochastic trends. This issue may explain the lack of support for the convergence hypothesis in the aforementioned literature.
    Keywords: Divergence, Loose Catching-up/Lagging-behind, Convergence, Deterministic and Stochastic trends
    JEL: C32 O40
    URL: http://d.repec.org/n?u=RePEc:gua:wpaper:em200703&r=ecm
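    A minimal Python sketch of the kind of test the paper studies: an augmented Dickey-Fuller regression applied to the gap between two (log) income series, using statsmodels. The paper warns that the test becomes unreliable when the series mixes deterministic and stochastic trends; the data below are simulated and purely illustrative:
      import numpy as np
      from statsmodels.tsa.stattools import adfuller

      rng = np.random.default_rng(1)
      y_a = np.cumsum(0.02 + 0.05 * rng.standard_normal(200))  # hypothetical log income, economy A
      y_b = np.cumsum(0.02 + 0.05 * rng.standard_normal(200))  # hypothetical log income, economy B

      gap = y_a - y_b
      # regression='ct' includes a constant and a deterministic trend in the test regression
      stat, pvalue, *rest = adfuller(gap, regression='ct', autolag='AIC')
      print(f"ADF statistic: {stat:.3f}, p-value: {pvalue:.3f}")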
  11. By: David N. DeJong; Hariharan Dharmarajan; Roman Liesenfeld; Jean-Francois Richard
    Abstract: . . .
    Date: 2007–03
    URL: http://d.repec.org/n?u=RePEc:pit:wpaper:300&r=ecm
  12. By: Yann Bramoullé (CIRPÉE, Université Laval); Habiba Djebbari (CIRPÉE, Université Laval and IZA); Bernard Fortin (CIRPÉE, Université Laval)
    Abstract: We provide new results regarding the identification of peer effects. We consider an extended version of the linear-in-means model where each individual has his own specific reference group. Interactions are thus structured through a social network. We assume that correlated unobservables are either absent, or treated as fixed effects at the component level. In both cases, we provide easy-to-check necessary and sufficient conditions for identification. We show that endogenous and exogenous effects are generally identified under network interaction, although identification may fail for some particular structures. Monte Carlo simulations provide an analysis of the effects of some crucial characteristics of a network (i.e., density, intransitivity) on the estimates of social effects. Our approach generalizes a number of previous results due to Manski (1993), Moffitt (2001), and Lee (2006).
    Keywords: peer effects, social networks, identification
    JEL: D85 L14 Z13 C3
    Date: 2007–03
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp2652&r=ecm
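    For reference, a sketch of the network version of the linear-in-means model studied in the paper, in standard notation (illustrative):
      y_i = \alpha + \beta (1/n_i) \sum_{j \in N_i} y_j + \gamma x_i + \delta (1/n_i) \sum_{j \in N_i} x_j + \epsilon_i,
    where N_i is individual i's reference group of size n_i, \beta captures the endogenous peer effect and \delta the exogenous (contextual) effect; the paper gives conditions on the network under which \beta and \delta are identified.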
  13. By: Lars Vilhuber
    Abstract: Research users of large administrative datasets have to adjust their data for quirks, problems, and issues that are inevitable when working with these kinds of datasets. Not all solutions to these problems are identical, and how they differ may affect how the data are to be interpreted. Some elements of the data, such as the unit of observation, remain fundamentally different, and it is important to keep that in mind when comparing data across countries. In this paper (written for Lazear and Shaw, 2007), we focus on the differences in the underlying data for a selection of country datasets. We describe two data elements that remain fundamentally different across countries -- the sampling or data collection methodology, and the basic unit of analysis (establishment or firm) -- and the extent to which they differ. We then document some of the problems that affect longitudinally linked administrative data in general, describe some of the solutions analysts and statistical agencies have implemented, and explore, through a select set of case studies, how each adjustment, or the absence thereof, might affect the data.
    JEL: C81 C82 J0
    Date: 2007–03
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:12977&r=ecm
  14. By: Travis D. Nesmith
    Abstract: Seasonal adjustment usually relies on statistical models of seasonality that treat seasonal fluctuations as noise corrupting the `true' data. But seasonality in economic series often stems from economic behavior such as Christmas-time spending. Such economic seasonality invalidates the separability assumptions that justify the construction of aggregate economic indexes. To solve this problem, Diewert(1980,1983,1998,1999) incorporates seasonal behavior into aggregation theory. Using duality theory, I extend these results to a larger class of decision problems. I also relax Diewert's assumption of homotheticity. I provide support for Diewert's preferred seasonally-adjusted economic index using weak separability assumptions that are shown to be sufficient.
    Keywords: Seasonal variations (Economics) ; Consumer behavior
    Date: 2006
    URL: http://d.repec.org/n?u=RePEc:fip:fedgfe:2007-04&r=ecm

This nep-ecm issue is ©2007 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.