nep-ecm New Economics Papers
on Econometrics
Issue of 2009‒09‒11
eleven papers chosen by
Sune Karlsson
Orebro University

  1. Testing for a Constant Mean Function using Functional Regression By Jin Seo Cho; Meng Huang; Halbert White
  2. Nested forecast model comparisons: a new approach to testing equal accuracy By Todd E. Clark; Michael W. McCracken
  3. In-sample tests of predictive ability: a new approach By Todd E. Clark; Michael W. McCracken
  4. Testing for Unobserved Heterogeneity in Exponential and Weibull Duration Models By Jin Seo Cho; Halbert White
  5. Infinite Density at the Median and the Typical Shape of Stock Return Distributions By Chirok Han; Jin Seo Cho; Peter C. B. Phillips
  6. Simultaneous Confidence Bands for Penalized Spline Estimators By Tatyana Krivobokova; Thomas Kneib; Gerda Claeskens
  7. Consistent and Asymptotically Unbiased MinP Tests of Multiple Inequality Moment Restrictions By Christopher J. Bennett
  8. Sufficient reduction in multivariate surveillance By Frisén, Marianne; Andersson, Eva; Schiöler, Linus
  9. Returns-to-scale Properties in DEA Models: The Fundamental Role of Interior Points By Krivonozhko, Vladimir; Førsund, Finn R.
  10. A Spatial and Temporal Autoregressive Local Estimation for the Paris Housing Market By Nappi-Choulet, Ingrid; Maury, Tristan-Pierre
  11. Time to reject the privileging of economic theory over empirical evidence? A Reply to Lawson (2009) By Katarina Juselius

  1. By: Jin Seo Cho (Department of Economics, Korea University, Seoul, South Korea); Meng Huang (Department of Economics, University of California, San Diego, U.S.A.); Halbert White (Department of Economics, University of California, San Diego, U.S.A.)
    Abstract: In this paper, we study functional regression and its properties in testing the hypothesis of a constant zero mean function or an unknown constant non-zero mean function. As we show, the associated Wald test statistics have standard chi-square limiting null distributions, standard non-central chi-square distributions for local alternatives converging to zero at a root-n rate, and are consistent against global alternatives. These properties permit computationally convenient tests for hypotheses involving nuisance parameters. In particular, we develop new alternatives to tests for mixture distributions and for regression misspecification, both of which involve nuisance parameters identified only under the alternative. In Monte Carlo studies, we find that our tests have well behaved levels. We find that the new procedures may sacrifice only modest power compared to methods that exploit the covariance structure of the Gaussian processes underlying our statistics. Further, functional regression tests can have power better than existing methods that do not exploit this covariance structure, such as the specification testing procedures of Bierens (1982, 1990) or Stinchcombe and White (1998).
    Keywords: Davies Test; Functional Data; Hypothesis Testing; Integrated Conditional Moment Test; Misspecification; Mixture Distributions; Nuisance Parameters; Wald Test
    JEL: C11 C12 C80
    Date: 2009
  2. By: Todd E. Clark; Michael W. McCracken
    Abstract: This paper develops bootstrap methods for testing whether, in a finite sample, competing out-of-sample forecasts from nested models are equally accurate. Most prior work on forecast tests for nested models has focused on a null hypothesis of equal accuracy in population, that is, whether coefficients on the extra variables in the larger, nesting model are zero. We instead use an asymptotic approximation that treats the coefficients as non-zero but small, such that, in a finite sample, forecasts from the small model are expected to be as accurate as forecasts from the large model. Under that approximation, we derive the limiting distributions of pairwise tests of equal mean square error, and develop bootstrap methods for estimating critical values. Monte Carlo experiments show that our proposed procedures have good size and power properties for the null of equal finite-sample forecast accuracy. We illustrate the use of the procedures with applications to forecasting stock returns and inflation.
    Date: 2009
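To give a feel for the kind of comparison this paper studies, the sketch below computes the out-of-sample mean-squared-error difference between a nested pair of linear forecasting models and bootstraps a critical value under the small model. This is an illustrative toy, not the authors' procedure: the data-generating process, the simple residual-resampling scheme, and all names are assumptions for the example.

```python
import numpy as np

def oos_mse_diff(y, X_small, X_large, n_train):
    """Recursive out-of-sample MSE difference: small model minus large model."""
    errs_s, errs_l = [], []
    for t in range(n_train, len(y)):
        bs, *_ = np.linalg.lstsq(X_small[:t], y[:t], rcond=None)
        bl, *_ = np.linalg.lstsq(X_large[:t], y[:t], rcond=None)
        errs_s.append(y[t] - X_small[t] @ bs)
        errs_l.append(y[t] - X_large[t] @ bl)
    return np.mean(np.square(errs_s)) - np.mean(np.square(errs_l))

rng = np.random.default_rng(0)
n = 200
x = rng.standard_normal(n)
y = 0.1 * x + rng.standard_normal(n)        # small predictive content
X_small = np.ones((n, 1))                   # nested model: intercept only
X_large = np.column_stack([np.ones(n), x])  # nesting model: adds the predictor

stat = oos_mse_diff(y, X_small, X_large, n_train=100)

# Bootstrap under the null of no predictive content: resample the residuals
# of y about its mean, so the extra regressor is irrelevant by construction.
B = 100
null_resid = y - y.mean()
boot = []
for _ in range(B):
    yb = y.mean() + rng.choice(null_resid, size=n, replace=True)
    boot.append(oos_mse_diff(yb, X_small, X_large, n_train=100))
reject = stat > np.quantile(boot, 0.95)
```

A positive statistic means the larger model forecast more accurately out of sample; the bootstrap quantile gives a rough finite-sample benchmark for how large that gap must be before it is taken as evidence.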
  3. By: Todd E. Clark; Michael W. McCracken
    Abstract: This paper presents analytical, Monte Carlo, and empirical evidence linking in-sample tests of predictive content and out-of-sample forecast accuracy. Our approach focuses on the negative effect that finite-sample estimation error has on forecast accuracy despite the presence of significant population-level predictive content. Specifically, we derive simple-to-use in-sample tests that assess not only whether a particular variable has predictive content but also whether this content is estimated precisely enough to improve forecast accuracy. Our tests are asymptotically non-central chi-square or non-central normal. We provide a convenient bootstrap method for computing the relevant critical values. In the Monte Carlo and empirical analysis, we compare the effectiveness of our testing procedure with more common testing procedures.
    Date: 2009
  4. By: Jin Seo Cho (Department of Economics, Korea University, Seoul, South Korea); Halbert White (Department of Economics, University of California, San Diego, U.S.A.)
    Abstract: We examine use of the likelihood ratio (LR) statistic to test for unobserved heterogeneity in duration models, based on mixtures of exponential or Weibull distributions. We consider both the uncensored and censored duration cases. The asymptotic null distribution of the LR test statistic is not the standard chi-square, as the standard regularity conditions do not hold. Instead, there is a nuisance parameter identified only under the alternative, and a null parameter value on the boundary of the parameter space, as in Cho and White (2007a). We accommodate these and provide methods delivering consistent asymptotic critical values. We conduct a number of Monte Carlo simulations, comparing the level and power of the LR test statistic to an information matrix (IM) test due to Chesher (1984) and Lagrange multiplier (LM) tests of Kiefer (1985) and Sharma (1987). Our simulations show that the LR test statistic generally outperforms the IM and LM tests. We also revisit the work of van den Berg and Ridder (1998) on unemployment durations and of Ghysels, Gourieroux, and Jasiak (2004) on interarrival times between stock trades, and, as it turns out, affirm their original informal inferences.
    Keywords: Unobserved Heterogeneity, Mixture Models, Likelihood Ratio Test, Search Theory, Interarrival Times
    JEL: C12 C22 C24 C41 C80 J22 J64
    Date: 2009
  5. By: Chirok Han (Korea University); Jin Seo Cho (Korea University); Peter C. B. Phillips (Yale University, University of York, University of Auckland & Singapore Management University)
    Abstract: Statistics are developed to test for the presence of an asymptotic discontinuity (or infinite density or peakedness) in a probability density at the median. The approach makes use of work by Knight (1998) on L1 estimation asymptotics in conjunction with non-parametric kernel density estimation methods. The size and power of the tests are assessed, and conditions under which the tests have good performance are explored in simulations. The new methods are applied to stock returns of leading companies across major U.S. industry groups. The results confirm the presence of infinite density at the median as significant new empirical evidence on stock return distributions.
    Keywords: Asymptotic leptokurtosis, Infinite density at the median, Least absolute deviations, Kernel density estimation, Stock returns, Stylized facts
    JEL: C12 G11
    Date: 2009
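For intuition about the quantity being tested, the toy sketch below evaluates a Gaussian kernel density estimate at the sample median for normal data and for Laplace data, whose density is sharply peaked at its median. The bandwidth rule and the two distributions are illustrative assumptions only; this is not the authors' test statistic, which additionally exploits L1 estimation asymptotics.

```python
import numpy as np

def kde_at(x, data, h):
    """Gaussian kernel density estimate at point x with bandwidth h."""
    u = (x - data) / h
    return np.mean(np.exp(-0.5 * u**2)) / (h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
n = 5000
normal = rng.standard_normal(n)
laplace = rng.laplace(size=n)  # density is peaked (kinked) at its median

h = n ** (-1 / 5)  # simple illustrative bandwidth rule
f_norm = kde_at(np.median(normal), normal, h)
f_lap = kde_at(np.median(laplace), laplace, h)
# The Laplace density is higher at the median (0.5 versus about 0.399 for the
# standard normal); a density that truly diverges at the median would show an
# estimate that keeps growing as h shrinks with n.
```

Comparing how the estimate at the median behaves as the bandwidth shrinks is the informal idea behind detecting "infinite density": for ordinary smooth densities it stabilizes, while for a spike it blows up.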
  6. By: Tatyana Krivobokova (Georg-August-Universität Göttingen); Thomas Kneib (Ludwig-Maximilians-Universität München); Gerda Claeskens (Katholieke Universiteit Leuven)
    Abstract: In this paper we construct simultaneous confidence bands for a smooth curve using penalized spline estimators. We consider three types of estimation methods: (i) as a standard (fixed effect) nonparametric model, (ii) using the mixed model framework with the spline coefficients as random effects and (iii) a Bayesian approach. The volume-of-tube formula is applied for the first two methods and compared from a frequentist perspective to Bayesian simultaneous confidence bands. It is shown that the mixed model formulation of penalized splines can help to obtain, at least approximately, confidence bands with either Bayesian or frequentist properties. Simulations and data analysis support the methods proposed. The R package ConfBands accompanies the paper.
    Keywords: Bayesian penalized splines; B-splines; Confidence band; Mixed model; Penalization
    Date: 2009–09–01
  7. By: Christopher J. Bennett (Department of Economics, Vanderbilt University)
    Abstract: This paper considers the general problem of testing multiple inequality moment restrictions against an unrestricted alternative. We first introduce a test based on a maximum statistic and show how, via a partially recentered bootstrap scheme, we may obtain a testing procedure that delivers, at least asymptotically, an exact alpha-level test for any configuration of the parameters on the boundary of the null hypothesis. We prove that this bootstrap test is asymptotically unbiased and that it weakly dominates analogous testing procedures based on the canonical (fully centered) bootstrap. Building on these results we introduce a computationally inexpensive minimum p-value test. The minimum p-value test enjoys the asymptotic unbiasedness property of the underlying partially recentered bootstrap test. Additionally, the minimum p-value test delivers balance of power among the individual moment inequalities under test without studentization, and also allows users to gauge the strength of the evidence against the individual moment inequalities. To illustrate the use of our proposed testing procedure we examine the distributional effects of Vietnam veteran status on earnings. In particular, the results from our procedure when applied to testing for stochastic dominance and normalized stochastic dominance demonstrate that there is unambiguously greater poverty and greater relative inequality in earnings for veterans.
    Keywords: Bootstrap, moment inequalities, asymptotic bias, stochastic dominance
    JEL: C12 C14 I32
    Date: 2009–07
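A stripped-down max-statistic test of the moment inequalities E[X_j] <= 0 for all j looks like the sketch below. It uses only the canonical, fully centered bootstrap that the paper's partially recentered scheme is shown to dominate; sample sizes, the data-generating process, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 400, 3
X = rng.standard_normal((n, k))  # null is true here: every mean is 0 <= 0

means = X.mean(axis=0)
stat = np.sqrt(n) * means.max()  # max statistic over the k moments

# Canonical (fully centered) bootstrap: recenter every moment at its
# sample mean, regardless of how close it is to the boundary.
B = 500
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)
    boot[b] = np.sqrt(n) * (X[idx].mean(axis=0) - means).max()
crit = np.quantile(boot, 0.95)
reject = stat > crit
```

Full centering treats every inequality as binding, which makes the critical value conservative when some moments are far inside the null region; partial recentering, the refinement the paper proposes, recenters only the moments that appear close to binding.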
  8. By: Frisén, Marianne (Statistical Research Unit, Department of Economics, School of Business, Economics and Law, Göteborg University); Andersson, Eva (Statistical Research Unit, Department of Economics, School of Business, Economics and Law, Göteborg University); Schiöler, Linus (Statistical Research Unit, Department of Economics, School of Business, Economics and Law, Göteborg University)
    Abstract: The relation between change points in multivariate surveillance is important but seldom considered. The sufficiency principle is here used to clarify the structure of some problems, to find efficient methods, and to determine appropriate evaluation metrics. We study processes where the changes occur simultaneously or with known time lags. The surveillance of spatial data is one example where known time lags can be of interest. A general version of a theorem for the sufficient reduction of processes that change with known time lags is given. A simulation study illustrates the benefits of the methods based on the sufficient statistics.
    Keywords: change-points; exponential family; MEWMA; monitoring; inference principles
    JEL: C10
    Date: 2009–08–31
  9. By: Krivonozhko, Vladimir (Institute for Systems Analysis, Academy of Sciences, Moscow); Førsund, Finn R. (Dept. of Economics, University of Oslo)
    Abstract: Attempts can be found in the DEA literature to identify returns to scale at efficient interior points of the production possibility set on the basis of returns to scale at points of the corresponding reference sets. However, an opposite approach is put forward in this paper, advocating that returns-to-scale properties of efficient reference units should be found by first identifying returns to scale of an efficient interior unit that is a radial projection to the frontier of an inefficient unit. Returns-to-scale properties of both the corresponding reference units and units supporting the face in question can then be established.
    Keywords: Returns to scale; DEA; Interior points; Vertices
    JEL: C61 D20
    Date: 2009–08–04
  10. By: Nappi-Choulet, Ingrid (ESSEC Business School); Maury, Tristan-Pierre (EDHEC Business School)
    Abstract: This original study examines the potential of a local spatiotemporal autoregressive (LSTAR) approach in modelling transaction prices for the housing market in inner Paris. We use a data set from the Paris Region notary office (“Chambre des notaires d’Île-de-France”) which consists of approximately 250,000 transaction units between the first quarter of 1990 and the end of 2005. We use the exact X-Y coordinates and transaction date to spatially and temporally sort each transaction. We first choose to use the spatiotemporal autoregressive (STAR) approach proposed by Pace, Barry, Clapp and Rodriguez (1998). This method incorporates a spatiotemporal filtering process into the conventional hedonic function and attempts to correct for spatial and temporal correlative effects. We find significant estimates of spatial dependence effects. Moreover, using an original methodology, we find evidence of a strong presence of both spatial and temporal heterogeneity in the model. This suggests that spatial and temporal drifts in households' socio-economic profiles and local housing market structure effects are certainly major determinants of the price level for the Paris housing market.
    Keywords: Hedonic Prices; Heterogeneity; Paris Housing Market; STAR Model
    JEL: C51 R33
    Date: 2009–07
  11. By: Katarina Juselius (Department of Economics, University of Copenhagen)
    Abstract: The present financial and economic crisis has revealed a systemic failure of academic economics and emphasized the need to re-think how to model economic phenomena. Lawson (2009) seems concerned that critics of standard models will now fill academic journals with contributions that make the same methodological mistakes, albeit in slightly different guise. In particular, he is rather sceptical of the use of mathematical statistical models, such as the CVAR approach, as a way of learning about economic mechanisms. In this paper I discuss whether this is a relevant claim and argue that it is likely to be based on a misunderstanding of what a proper statistical analysis is and can offer. In particular, I argue that the strong evidence of (near) unit roots and (structural) breaks in economic variables suggests that standard economic models need to be modified or changed to incorporate these strong features of the data. Furthermore, I argue that a strong empirical methodology that allows data to speak freely about economic mechanisms, such as the CVAR, would ensure that important information in the data is not overlooked when needed. Adequately applied, such models would provide us with an early warning system signalling that the economy is moving seriously out of equilibrium.
    Keywords: economic crisis; Dahlem report; CVAR approach; Theory-first; Reality-first; Imperfect Knowledge Expectations; non-stationary data
    JEL: A1 B4 C3 C5 E0 E1 E2 E6
    Date: 2009–08

This nep-ecm issue is ©2009 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.