nep-ecm New Economics Papers
on Econometrics
Issue of 2009‒07‒17
eleven papers chosen by
Sune Karlsson
Orebro University

  1. Structural change tests based on implied probabilities for GEL criteria By Alain Guay; Jean-Francois Lamarche
  2. A Sequential Procedure to Determine the Number of Breaks in Trend with an Integrated or Stationary Noise Component By Mohitosh Kejriwal; Pierre Perron
  3. Estimation of Nonlinear Models with Mismeasured Regressors Using Marginal Information By Yingyao Hu; Geert Ridder
  4. Extreme Value Theory Filtering Techniques for Outlier Detection By Jose Olmo
  5. Sensitivity analysis and density estimation for finite-time ruin probabilities By Stéphane Loisel; Nicolas Privault
  6. Robust Estimation of Multiple Regression Model with Non-normal Error: Symmetric Distribution By Wing-Keung Wong; Guorui Bian
  7. Confidence intervals for long-horizon predictive regressions via reverse regressions By Min Wei; Jonathan Wright
  8. Are disaggregate data useful for factor analysis in forecasting French GDP? By Barhoumi, K.; Darné, O.; Ferrara, L.
  9. Testing for a break in persistence under long-range dependencies and mean shifts By Sibbertsen, Philipp; Willert, Juliane
  10. Sums and Extreme Values of Random Variables: Duality Properties By Ralph W. Bailey
  11. No-arbitrage Near-Cointegrated VAR(p) Term Structure Models, Term Premia and GDP Growth. By Jardet, C.; Monfort, A.; Pegoraro, F.

  1. By: Alain Guay (Department of Economics, Universite du Quebec a Montreal); Jean-Francois Lamarche (Department of Economics, Brock University)
    Abstract: This paper proposes Pearson-type statistics based on implied probabilities to detect structural change. The class of generalized empirical likelihood estimators (see Smith (1997)) assigns a set of probabilities to each observation such that moment conditions are satisfied. These probabilities are called implied probabilities. The proposed test statistics for structural change are based on the information content in these implied probabilities. We consider cases of structural change with unknown breakpoint which can occur in the parameters of interest or in the overidentifying restrictions used to estimate these parameters. We also propose a structural change test based on implied probabilities that is robust to weak identification or cases in which parameters are completely unidentified. The test statistics considered here have good size and competitive power properties. Moreover, they are computed in a single step which eliminates the need to compute the weighting matrix required for GMM estimation.
    Keywords: Generalized empirical likelihood, generalized method of moments, parameter instability, structural change
    JEL: C12 C32
    Date: 2009–05
    URL: http://d.repec.org/n?u=RePEc:brk:wpaper:0904&r=ecm
  2. By: Mohitosh Kejriwal (Krannert School of Management, Purdue University); Pierre Perron (Economics Department, Boston University)
    Abstract: Perron and Yabu (2008) consider the problem of testing for a break occurring at an unknown date in the trend function of a univariate time series when the noise component can be either stationary or integrated. This paper extends their work by proposing a sequential test that allows one to test the null hypothesis of, say, l breaks, versus the alternative hypothesis of (l+1) breaks. The test enables consistent estimation of the number of breaks. In both stationary and integrated cases, it is shown that asymptotic critical values can be obtained from the relevant quantiles of the limit distribution of the test for a single break. Monte Carlo simulations suggest that the procedure works well in finite samples.
    Keywords: Structural Change, Sequential Procedure, Feasible GLS, Unit Root, Structural Breaks
    JEL: C22
    Date: 2009–02
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2009-005&r=ecm
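The sequential logic can be illustrated with a simple sup-F search for a single break in a linear trend. This sketch uses plain OLS rather than the Perron-Yabu feasible-GLS statistic, so it is only illustrative; the simulated series and all names are hypothetical.

```python
import numpy as np

def supf_trend_break(y, trim=0.15):
    """Sup-F statistic for a single break in the level and slope of a linear
    trend, searched over candidate break dates (plain OLS sketch; the paper's
    test uses a feasible-GLS version valid for both I(0) and I(1) noise)."""
    T = len(y)
    t = np.arange(T, dtype=float)
    X0 = np.column_stack([np.ones(T), t])
    rss0 = np.sum((y - X0 @ np.linalg.lstsq(X0, y, rcond=None)[0]) ** 2)
    best = 0.0
    for tb in range(int(trim * T), int((1 - trim) * T)):
        d = (t >= tb).astype(float)
        X1 = np.column_stack([X0, d, d * (t - tb)])  # post-break level and slope shifts
        rss1 = np.sum((y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]) ** 2)
        best = max(best, ((rss0 - rss1) / 2) / (rss1 / (T - 4)))
    return best

rng = np.random.default_rng(4)
T = 200
trend = 0.1 * np.arange(T)
no_break = trend + rng.standard_normal(T)
with_break = no_break + 0.2 * np.clip(np.arange(T) - 100, 0, None)  # slope change at t=100
```

The sequential step described in the abstract would apply such a test within each regime implied by l estimated breaks, rejecting l breaks in favour of l+1 when the largest sub-sample statistic exceeds its critical value.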
  3. By: Yingyao Hu; Geert Ridder
    Abstract: We consider the estimation of nonlinear models with mismeasured explanatory variables, when information on the marginal distribution of the true values of these variables is available. We derive a semi-parametric MLE that is shown to be $\sqrt{n}$ consistent and asymptotically normally distributed. In a simulation experiment we find that the finite sample distribution of the estimator is close to the asymptotic approximation. The semi-parametric MLE is applied to a duration model for AFDC welfare spells with misreported welfare benefits. The marginal distribution of the correctly measured welfare benefits is obtained from an administrative source.
    Date: 2009–06
    URL: http://d.repec.org/n?u=RePEc:jhu:papers:554&r=ecm
  4. By: Jose Olmo (Department of Economics, City University, London)
    Abstract: We introduce asymptotic parameter-free hypothesis tests based on extreme value theory to detect outlying observations in finite samples. Our tests have nontrivial power for detecting outliers for general forms of the parent distribution and can be implemented when this is unknown and needs to be estimated. Using these techniques, this article also develops an algorithm to uncover outliers masked by the presence of influential observations.
    Keywords: Extreme value theory, Hypothesis tests, Outlier detection, Power function, Robust estimation.
    JEL: C14 C22 C32 C50
    Date: 2009–07
    URL: http://d.repec.org/n?u=RePEc:cty:dpaper:0909&r=ecm
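As a rough illustration of the idea (not the authors' parameter-free tests), one can flag the sample maximum as an outlier when it exceeds the level that the maximum of n well-behaved observations would rarely reach. The sketch below assumes a normal parent distribution and uses the Gumbel approximation to the distribution of the normal maximum; all names and values are hypothetical.

```python
import numpy as np
from math import sqrt, log

def flag_max_outlier(x, alpha=0.05):
    """Illustrative EVT-style check: flag the sample maximum as an outlier
    if its z-score exceeds the level that the maximum of n iid standard
    normals would exceed only with probability alpha (Gumbel approximation).
    Assumes a normal parent; the paper's tests do not require this."""
    n = len(x)
    z = (x - x.mean()) / x.std(ddof=1)
    ln2n = 2 * log(n)
    # Gumbel norming constants for the maximum of n standard normals
    b_n = sqrt(ln2n) - (log(log(n)) + log(4 * np.pi)) / (2 * sqrt(ln2n))
    a_n = 1 / sqrt(ln2n)
    crit = b_n + a_n * (-log(-log(1 - alpha)))  # Gumbel quantile at level alpha
    return bool(z.max() > crit), crit

rng = np.random.default_rng(1)
sample = rng.standard_normal(1000)
sample[0] = 8.0  # injected outlier
flagged, crit = flag_max_outlier(sample)
```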
  5. By: Stéphane Loisel (SAF - Laboratoire de Sciences Actuarielle et Financière - Université Claude Bernard - Lyon I : EA2429); Nicolas Privault (Department of Mathematics - City University of Hong Kong)
    Abstract: The goal of this paper is to obtain probabilistic representation formulas that are suitable for the numerical computation of the (possibly non-continuous) density functions of infima of reserve processes commonly used in insurance. In particular we show, using Monte Carlo simulations, that these representation formulas perform better than standard finite difference methods. Our approach differs from standard Malliavin probabilistic representation techniques which generally require more smoothness on random variables, entailing the continuity of their density functions.
    Keywords: Ruin probability; Malliavin calculus; insurance; integration by parts
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-00201347_v3&r=ecm
  6. By: Wing-Keung Wong; Guorui Bian
    Abstract: In this paper, we develop modified maximum likelihood (MML) estimators for the coefficients of a multiple linear regression model whose error distribution is assumed to be symmetric, a member of the Student's t family. We obtain the estimators in closed form and derive their asymptotic properties. In addition, we demonstrate that the MML estimators are more appropriate for estimating the parameters of the Capital Asset Pricing Model by comparing their performance with that of least squares estimators (LSE) on the monthly returns of US portfolios. Our empirical study reveals that, in small samples, the MML estimators are more efficient than the LSE in terms of the relative efficiency of the one-step-ahead forecast mean square error.
    Keywords: Maximum likelihood estimators, Modified maximum likelihood estimators, Student’s t family, Capital Asset Pricing Model, Robustness.
    JEL: C1 C2 G1
    Date: 2009–05
    URL: http://d.repec.org/n?u=RePEc:mos:moswps:2005-09&r=ecm
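The robustness mechanism can be sketched with iteratively reweighted least squares under a Student-t likelihood, a standard stand-in for (not a reproduction of) the paper's closed-form MML estimator; the simulated data and the choice nu=5 are hypothetical.

```python
import numpy as np

def t_irls_fit(x, y, nu=5.0, iters=100):
    """Regression under Student-t errors via iteratively reweighted least
    squares, a common stand-in for the paper's closed-form MML estimator.
    Observations with large residuals get small weights, giving robustness."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS starting values
    for _ in range(iters):
        resid = y - X @ beta
        s2 = np.mean(resid ** 2)
        w = np.sqrt((nu + 1) / (nu + resid ** 2 / s2))  # t-likelihood weights
        beta = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)[0]
    return beta

rng = np.random.default_rng(2)
x = rng.standard_normal(200)
y = 1.0 + 0.5 * x + rng.standard_t(3, size=200)  # heavy-tailed errors
b0, b1 = t_irls_fit(x, y)
```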
  7. By: Min Wei; Jonathan Wright
    Abstract: Long-horizon predictive regressions in finance pose formidable econometric problems when estimated using the sample sizes that are typically available. A remedy that has been proposed by Hodrick (1992) is to run a reverse regression in which short-horizon returns are projected onto a long-run mean of some predictor. By covariance stationarity, the slope coefficient is zero in the reverse regression if and only if it is zero in the original regression, but testing the hypothesis in the reverse regression avoids small sample problems. Unfortunately this only allows us to test the null of no predictability. In this paper we show how to use the reverse regression to test other hypotheses about the slope coefficient in a long-horizon predictive regression, and to form confidence intervals for this coefficient. We show that this approach to inference works well in small samples.
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:fip:fedgfe:2009-27&r=ecm
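A minimal sketch of the reverse-regression idea, on simulated data under the null of no predictability (all series and parameter values hypothetical): instead of projecting the k-period cumulative return on the predictor, project the one-period return on the k-period backward sum of the predictor.

```python
import numpy as np

rng = np.random.default_rng(0)
T, k = 500, 12  # hypothetical sample size and forecast horizon

# Persistent predictor and unpredictable one-period returns (null of no predictability)
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.95 * x[t - 1] + rng.standard_normal()
r = rng.standard_normal(T)

# Standard long-horizon regression: k-period cumulative return on x_t
long_y = np.array([r[t + 1:t + 1 + k].sum() for t in range(T - k)])
long_x = x[:T - k]

# Reverse regression in the spirit of Hodrick (1992): one-period return
# on the k-period backward moving sum of the predictor
rev_y = r[k:]
rev_x = np.array([x[t - k + 1:t + 1].sum() for t in range(k - 1, T - 1)])

def slope(y, x):
    """OLS slope coefficient, with intercept."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

b_long = slope(long_y, long_x)
b_rev = slope(rev_y, rev_x)
```

By covariance stationarity, the population slope is zero in one regression if and only if it is zero in the other, which is the equivalence the paper builds on to obtain better-behaved small-sample inference.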
  8. By: Barhoumi, K.; Darné, O.; Ferrara, L.
    Abstract: This paper compares the GDP forecasting performance of alternative factor models based on monthly time series for the French economy. These models are based on static and dynamic principal components. The dynamic principal components are obtained using time and frequency domain methods. Forecasting accuracy for GDP growth is evaluated in two ways. First, we ask whether it is more appropriate to use aggregate or disaggregate data (at three levels of disaggregation) to extract the factors. Second, we focus on the determination of the number of factors, obtained either from various criteria or from a fixed choice.
    Keywords: GDP forecasting ; Factor models ; Data aggregation.
    JEL: C13 C52 C53 F47
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:bfr:banfra:232&r=ecm
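The static principal-components step can be sketched as follows, on a simulated panel (dimensions and noise level hypothetical); the dynamic, frequency-domain variants compared in the paper are not shown.

```python
import numpy as np

def static_factors(X, r):
    """Extract r static principal-component factors from a T x N panel:
    standardize each series, then take the top-r principal components."""
    Z = (X - X.mean(0)) / X.std(0, ddof=1)
    # Eigen-decomposition of the sample covariance of the standardized panel
    vals, vecs = np.linalg.eigh(Z.T @ Z / len(Z))
    order = np.argsort(vals)[::-1][:r]  # eigh returns ascending eigenvalues
    return Z @ vecs[:, order]  # T x r factor estimates

rng = np.random.default_rng(3)
T, N = 120, 30  # e.g. 10 years of monthly data on 30 series
f = rng.standard_normal((T, 2))       # two latent factors
lam = rng.standard_normal((N, 2))     # factor loadings
X = f @ lam.T + 0.5 * rng.standard_normal((T, N))
F = static_factors(X, 2)
```

The estimated factors F would then enter a forecasting regression for GDP growth; the paper's aggregate-versus-disaggregate question concerns which panel X the factors are extracted from.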
  9. By: Sibbertsen, Philipp; Willert, Juliane
    Abstract: We show that the CUSUM-squared based test for a change in persistence by Leybourne et al. (2007) is not robust against shifts in the mean. A mean shift leads to serious size distortions. Therefore, adjusted critical values are needed when it is known that the data generating process has a mean shift. These are given for the case of one mean break. Response curves for the critical values are derived, and a Monte Carlo study showing the size and power properties under this general de-trending is given.
    Keywords: Break in persistence, long memory, structural break, level shift
    JEL: C12 C22
    Date: 2009–07
    URL: http://d.repec.org/n?u=RePEc:han:dpaper:dp-422&r=ecm
  10. By: Ralph W. Bailey
    Abstract: The inversion theorem for radially-distributed complex random variables provides a completely symmetric relationship between their characteristic functions and their distribution functions, suitably defined. If the characteristic function happens also to be a distribution function, then a dual pair of random variables is defined. The distribution function of each is the characteristic function of the other. If we call any distribution possessing a dual partner 'invertible', then both the radial normal and radial t distributions are invertible. Moreover the product of an invertible variable (for instance, a radial normal variable) with any other independent variable is invertible. Though the most prominent examples of invertible variables possess a normal divisor, we exhibit a pair of variables neither of which has a normal divisor. A test for normal-divisibility, based on complete monotonicity, is provided. The sum of independent invertible variables is invertible; the inverse is the smallest in magnitude of the inverse variables. Theorems about sums of invertible random variables (for instance, central limit theorems) have a dual interpretation as theorems about extrema, and vice versa.
    Keywords: Bernstein's theorem; Bessel transform; duality; extreme value theorem; radial distribution; t-distribution
    JEL: C16
    Date: 2009–06
    URL: http://d.repec.org/n?u=RePEc:bir:birmec:09-05&r=ecm
  11. By: Jardet, C.; Monfort, A.; Pegoraro, F.
    Abstract: Macroeconomic questions involving interest rates generally require a reliable joint dynamics of a large set of variables. More precisely, such a dynamic modelling must satisfy two important conditions. First, it must be able to propose reliable predictions of some key variables. Second, it must be able to propose a joint dynamics of some macroeconomic variables, of the whole curve of interest rates, of the whole set of term premia and, possibly, of various decompositions of the term premia. The first condition is required if we want to disentangle the respective impacts of, for instance, the expectation part and the term premium part of a given long-term interest rate on some macroeconomic variable. The second condition is necessary if we want to analyze the interactions of macro-variables with some global features of the yield curve (short part, long part, level, slope and curvature) or with, for instance, term premia of various maturities. In the present paper we propose to satisfy both requirements by using a Near-Cointegrated modelling of basic observable variables, in order to meet the first condition, and the no-arbitrage theory, in order to meet the second one. Moreover, the dynamic interactions of this large set of variables are based on the statistical notion of New Information Response Function, recently introduced by Jardet, Monfort and Pegoraro (2009). This technical toolkit is then used to propose a new approach to two important issues: the "conundrum" episode and the puzzle of the relationship between the term premia on long-term yields and future economic activity.
    Keywords: Near-Cointegrated VAR(p) model ; Term structure of interest rates ; Term premia ; GDP growth ; No-arbitrage affine term structure model ; New Information Response Function.
    JEL: C51 E43 E44 E47 G12
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:bfr:banfra:234&r=ecm

This nep-ecm issue is ©2009 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.