nep-ecm New Economics Papers
on Econometrics
Issue of 2007‒01‒02
twelve papers chosen by
Sune Karlsson
Orebro University

  1. Finite-Sample Stability of the KPSS Test By Jönsson, Kristian
  2. A general method for constructing a test of multivariate normality By Desai Tejas A.
  3. A Simple Benchmark for Forecasts of Growth and Inflation By Marcellino, Massimiliano
  4. A Two-step estimator for large approximate dynamic factor models based on Kalman filtering By Catherine Doz; Domenico Giannone; Lucrezia Reichlin
  5. Second Order Approximation for the Average Marginal Effect of Heckman's Two Step Procedure By Akay, Alpaslan; Tsakas, Elias
  6. Asymptotically distribution free (ADF) interval estimation of coefficient alpha By ALBERTO MAYDEU
  7. DSGE Models in a Data-Rich Environment By Jean Boivin; Marc Giannoni
  8. Corrections to classical procedures for estimating Thurstone's Case V model for ranking data By ALBERTO MAYDEU
  9. Using Randomization in Development Economics Research: A Toolkit By Esther Duflo; Rachel Glennerster; Michael Kremer
  10. Modelling Term-Structure Dynamics for Risk Management: A Practitioner's Perspective By David Jamieson Bolder
  11. Model selection for monetary policy analysis – Importance of empirical validity By Q. Farooq Akram; Ragnar Nymoen
  12. How professional forecasters view shocks to GDP By Spencer D. Krane

  1. By: Jönsson, Kristian (Department of Economics, Lund University)
    Abstract: In the current paper, the finite-sample stability of various implementations of the KPSS test is studied. The implementations considered differ in how the so-called long-run variance is estimated under the null hypothesis. More specifically, the effects that the choice of kernel, the value of the bandwidth parameter and the application of a prewhitening filter have on the KPSS test are investigated. It is found that the finite-sample distribution of the KPSS test statistic can be very unstable when the Quadratic Spectral kernel is used and/or a prewhitening filter is applied. The instability manifests itself by making the small-sample distribution of the test statistic sensitive to the specific process that generates the data under the null hypothesis. This in turn implies that the size of the test can be hard to control. For the cases investigated in the current paper, it turns out that using the Bartlett kernel in the long-run variance estimation renders the most stable test. Through an empirical application, we illustrate the adverse effects that can occur when care is not taken in choosing which test implementation to employ when testing for stationarity in small-sample situations.
    Keywords: Stationarity; Unit root; KPSS test; Size distortion; Long-run variance; Monte Carlo simulation; Private consumption; Permanent Income Hypothesis
    JEL: C12 C13 C14 C15 C22 E21
    Date: 2006–12–14
    URL: http://d.repec.org/n?u=RePEc:hhs:lunewp:2006_023&r=ecm
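A minimal sketch of the implementation the paper finds most stable, for the level-stationarity case: the KPSS statistic with a Bartlett-kernel long-run variance estimate. This is an illustration, not the authors' code; the automatic bandwidth rule is one common default.

```python
import numpy as np

def kpss_bartlett(y, bandwidth=None):
    """KPSS level-stationarity statistic with a Bartlett-kernel LRV."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    e = y - y.mean()                  # residuals from demeaning (level case)
    S = np.cumsum(e)                  # partial-sum process
    if bandwidth is None:             # a common automatic rule of thumb
        bandwidth = int(4 * (T / 100.0) ** 0.25)
    lrv = e @ e / T                   # Bartlett-kernel long-run variance
    for j in range(1, bandwidth + 1):
        w = 1.0 - j / (bandwidth + 1.0)
        lrv += 2.0 * w * (e[j:] @ e[:-j]) / T
    return (S @ S) / (T ** 2 * lrv)

# Under the null (here, white noise) the statistic should usually fall
# below the 5% critical value of 0.463 for the level case.
rng = np.random.default_rng(0)
print(kpss_bartlett(rng.standard_normal(200)))
```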
  2. By: Desai Tejas A.
    Abstract: We present a general method of constructing a test of multivariate normality using any given test of univariate normality of complete or randomly incomplete data. A simulation study considers multivariate tests constructed using the univariate versions of the Shapiro-Wilk, Kolmogorov-Smirnov, Cramer-Von-Mises, and Anderson-Darling tests.
    Keywords: Anderson-Darling test; Cramer-Von-Mises test; Kolmogorov-Smirnov test; Missingness at Random (MAR); Multivariate normality; Shapiro-Wilk test
    Date: 2006–12–22
    URL: http://d.repec.org/n?u=RePEc:iim:iimawp:2006-12-04&r=ecm
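One simple construction in the spirit of the abstract, shown only as an illustration (the paper's actual construction, which also handles randomly incomplete data, may differ): decorrelate the data with a Cholesky factor, apply a univariate normality test to each coordinate, and combine the p-values by Bonferroni.

```python
import numpy as np
from scipy import stats

def mvn_test_from_univariate(X, univariate_test=stats.shapiro):
    """Bonferroni-combined univariate normality tests on decorrelated data."""
    X = np.asarray(X, dtype=float)
    Xc = X - X.mean(axis=0)
    L = np.linalg.cholesky(np.cov(X, rowvar=False))
    Z = np.linalg.solve(L, Xc.T).T        # approximately independent margins
    pvals = [univariate_test(Z[:, j]).pvalue for j in range(Z.shape[1])]
    return min(1.0, Z.shape[1] * min(pvals))

rng = np.random.default_rng(1)
X = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=300)
print(mvn_test_from_univariate(X))        # large p-value expected under H0
```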
  3. By: Marcellino, Massimiliano
    Abstract: A theoretical model for growth or inflation should be able to reproduce the empirical features of these variables better than competing alternatives. Therefore, it is common practice in the literature, whenever a new model is suggested, to compare its performance with that of a benchmark model. However, while the theoretical models become more and more sophisticated, the benchmark typically remains a simple linear time series model. Recent examples are provided, e.g., by articles in the real business cycle literature or by New Keynesian studies on inflation persistence. While a time series model can provide a reasonable benchmark to evaluate the value added of economic theory relative to the pure explanatory power of the past behavior of the variable, recent developments in time series analysis suggest that more sophisticated time series models could provide more serious benchmarks for economic models. In this paper we evaluate whether these complicated time series models can really outperform standard linear models for GDP growth and inflation, and should therefore replace them as benchmarks for economic theory based models. Since a complicated model specification can over-fit in sample, i.e. the model can spuriously perform very well compared to simpler alternatives, we conduct the model comparison based on out-of-sample forecasting performance. We consider a large variety of models and evaluation criteria, using real-time data and a sophisticated bootstrap algorithm to evaluate the statistical significance of our results. Our main conclusion is that, in general, linear time series models can hardly be beaten if they are carefully specified, and therefore still provide a good benchmark for theoretical models of growth and inflation. However, we also identify some important cases where the adoption of a more complicated benchmark can alter the conclusions of economic analyses about the driving forces of GDP growth and inflation. Therefore, comparing theoretical models also with more sophisticated time series benchmarks can guarantee more robust conclusions.
    Keywords: growth; inflation; non-linear models; time-varying models
    JEL: C2 C53 E30
    Date: 2006–12
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:6012&r=ecm
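A minimal sketch of the kind of pseudo-out-of-sample comparison the paper performs, with a simple AR(1) benchmark. The recursive scheme and forecast functions here are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def recursive_rmse(y, forecast_fn, first_forecast=50):
    """One-step-ahead RMSE, re-estimating on an expanding window."""
    errors = [y[t] - forecast_fn(y[:t]) for t in range(first_forecast, len(y))]
    return float(np.sqrt(np.mean(np.square(errors))))

def ar1_forecast(history):
    """OLS AR(1) with intercept, estimated on the data seen so far."""
    X = np.column_stack([np.ones(len(history) - 1), history[:-1]])
    b = np.linalg.lstsq(X, history[1:], rcond=None)[0]
    return b[0] + b[1] * history[-1]

rng = np.random.default_rng(2)
y = np.zeros(200)
for t in range(1, 200):                   # simulate an AR(1) data process
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()
print("AR(1) benchmark RMSE:   ", recursive_rmse(y, ar1_forecast))
print("No-change forecast RMSE:", recursive_rmse(y, lambda h: h[-1]))
```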
  4. By: Catherine Doz (Université de Cergy-Pontoise (Théma)); Domenico Giannone (Université Libre de Bruxelles, ECARES and CEPR); Lucrezia Reichlin (European Central Bank, ECARES and CEPR)
    Abstract: This paper shows consistency of a two-step estimator of the parameters of a dynamic approximate factor model when the panel of time series is large (n large). In the first step, the parameters are estimated by OLS on principal components. In the second step, the factors are estimated via the Kalman smoother. This projection makes it possible to allow for dynamics in the factors and heteroskedasticity in the idiosyncratic variance. The analysis provides theoretical backing for the estimator considered in Giannone, Reichlin, and Sala (2004) and Giannone, Reichlin, and Small (2005).
    Keywords: Factor Models, Kalman filter, principal components, large cross-sections
    JEL: C51 C32 C33
    Date: 2006
    URL: http://d.repec.org/n?u=RePEc:ema:worpap:2006-23&r=ecm
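A condensed sketch of the two-step idea under simplifying assumptions (one lag in the factor VAR, diagonal idiosyncratic variance; not the authors' code): Step 1 estimates loadings and factor dynamics by OLS on principal components, Step 2 re-estimates the factors with a Kalman filter and smoother at those parameter values.

```python
import numpy as np

def two_step_factors(X, r=1):
    """X: (T, n) panel of standardized series; returns smoothed factors."""
    X = np.asarray(X, dtype=float)
    T, n = X.shape
    X = X - X.mean(axis=0)
    # --- Step 1: principal components and OLS ---
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    F = U[:, :r] * s[:r]                                # PC factor estimates
    Lam = np.linalg.lstsq(F, X, rcond=None)[0].T        # loadings (n x r)
    R = np.diag(np.var(X - F @ Lam.T, axis=0))          # idiosyncratic vars
    A = np.linalg.lstsq(F[:-1], F[1:], rcond=None)[0].T # factor VAR(1)
    Q = np.cov((F[1:] - F[:-1] @ A.T).T).reshape(r, r)
    # --- Step 2: Kalman filter and RTS smoother at these parameters ---
    f, P = np.zeros(r), np.eye(r)
    ff, Pf, fp, Pp = [], [], [], []
    for t in range(T):
        fpred, Ppred = A @ f, A @ P @ A.T + Q           # prediction step
        S = Lam @ Ppred @ Lam.T + R
        K = Ppred @ Lam.T @ np.linalg.inv(S)            # Kalman gain
        f = fpred + K @ (X[t] - Lam @ fpred)            # update step
        P = Ppred - K @ Lam @ Ppred
        ff.append(f); Pf.append(P); fp.append(fpred); Pp.append(Ppred)
    fs = [ff[-1]]
    for t in range(T - 2, -1, -1):                      # backward smoothing
        J = Pf[t] @ A.T @ np.linalg.inv(Pp[t + 1])
        fs.insert(0, ff[t] + J @ (fs[0] - fp[t + 1]))
    return np.array(fs), Lam
```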
  5. By: Akay, Alpaslan (Department of Economics, School of Business, Economics and Law, Göteborg University); Tsakas, Elias (Department of Economics, School of Business, Economics and Law, Göteborg University)
    Abstract: In this paper we discuss the differences between the average marginal effect and the marginal effect of the average individual in sample selection models estimated by Heckman's two-step procedure. We show that the bias that emerges as a consequence of interchanging them can be very significant, even in the limit. We suggest a computationally cheap approximation method, which corrects the bias to a large extent. We illustrate the implications of our method with an empirical application to earnings assimilation and a small Monte Carlo simulation.
    Keywords: Heckman's two step estimator; average marginal effect; marginal effect of the average individual; earnings assimilation
    JEL: C13 C15 J40
    Date: 2006–01–19
    URL: http://d.repec.org/n?u=RePEc:hhs:gunwpe:0239&r=ecm
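A small numeric illustration of the distinction the paper studies, with assumed (not estimated) parameter values: because the selection correction is nonlinear in the selection index, averaging marginal effects over individuals (AME) differs from evaluating the effect at the average individual (MEA).

```python
import numpy as np
from scipy.stats import norm

def mills(v):
    """Inverse Mills ratio lambda(v) and its derivative lambda'(v)."""
    lam = norm.pdf(v) / norm.cdf(v)
    return lam, -lam * (v + lam)

# In E[y | x, selected] = x'beta + rho*sigma*lambda(w'gamma), the marginal
# effect of a regressor appearing in both equations is
# beta_k + rho*sigma*gamma_k*lambda'(w'gamma), which varies across people.
rng = np.random.default_rng(3)
w = rng.standard_normal(5000)                 # selection-index values
beta_k, gamma_k, rho_sigma = 1.0, 0.8, 0.5    # assumed, not estimated

_, dlam_i = mills(gamma_k * w)
ame = beta_k + rho_sigma * gamma_k * dlam_i.mean()   # average marginal effect
_, dlam_bar = mills(gamma_k * w.mean())
mea = beta_k + rho_sigma * gamma_k * dlam_bar        # effect at the average
print(f"AME = {ame:.4f}, MEA = {mea:.4f}, difference = {mea - ame:.4f}")
```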
  6. By: ALBERTO MAYDEU (Instituto de Empresa)
    Abstract: Asymptotically distribution free (ADF) interval estimators for coefficient alpha were introduced in the context of an application by Yuan, Guarnaccia, and Hayslip (2003). Here, simulation studies were performed to investigate the behavior of ADF vs. normal theory (NT) interval estimators of coefficient alpha for tests composed of ordered categorical items under varied conditions of sample size, item skewness and kurtosis, number of items, and average inter-item correlation. NT intervals were found to be inaccurate when item skewness > 1 or kurtosis > 4.
    Keywords: Coefficient omega, Likert-type items, Reliability
    Date: 2006–12
    URL: http://d.repec.org/n?u=RePEc:emp:wpaper:wp06-24&r=ecm
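For reference, coefficient alpha itself, with a nonparametric bootstrap interval as a simple distribution-free stand-in for comparison. This is not the paper's ADF estimator, which is built on the asymptotic covariance of the sample covariances.

```python
import numpy as np

def cronbach_alpha(X):
    """Coefficient alpha for an (n subjects x k items) score matrix."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def bootstrap_ci(X, n_boot=2000, level=0.95, seed=0):
    """Percentile bootstrap interval for alpha (resampling subjects)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    draws = [cronbach_alpha(X[rng.integers(0, n, n)]) for _ in range(n_boot)]
    tail = (1 - level) / 2 * 100
    return tuple(np.percentile(draws, [tail, 100 - tail]))
```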
  7. By: Jean Boivin; Marc Giannoni
    Abstract: Standard practice for the estimation of dynamic stochastic general equilibrium (DSGE) models maintains the assumption that economic variables are properly measured by a single indicator, and that all relevant information for the estimation is summarized by a small number of data series. However, recent empirical research on factor models has shown that information contained in large data sets is relevant for the evolution of important macroeconomic series. This suggests that conventional model estimates and inference based on estimated DSGE models might be distorted. In this paper, we propose an empirical framework for the estimation of DSGE models that exploits the relevant information from a data-rich environment. This framework provides an interpretation of all information contained in a large data set, and in particular of the latent factors, through the lens of a DSGE model. The estimation involves Markov-Chain Monte-Carlo (MCMC) methods. We apply this estimation approach to a state-of-the-art DSGE monetary model. We find evidence of imperfect measurement of the model's theoretical concepts, in particular for inflation. We show that exploiting more information is important for accurate estimation of the model's concepts and shocks, and that it implies different conclusions about key structural parameters and the sources of economic fluctuations.
    JEL: C10 C32 C53 E01 E32 E37
    Date: 2006–12
    URL: http://d.repec.org/n?u=RePEc:nbr:nberte:0332&r=ecm
  8. By: ALBERTO MAYDEU (Instituto de Empresa)
    Abstract: The classical method (Mosteller, 1951) for estimating Thurstone's Case V model for ranking data consists of (a) transforming the observed ranking patterns into patterns of binary paired comparisons, (b) obtaining the normal deviate corresponding to the mean of each binary variable, and (c) estimating the model parameters from these deviates by least squares. However, classical procedures do not take into account the dependencies among the deviates, and as a result the asymptotic standard errors (SEs) and goodness-of-fit (GOF) tests are incorrect.
    Keywords: Categorical data analysis, Preference data, Random utility models
    Date: 2006–12
    URL: http://d.repec.org/n?u=RePEc:emp:wpaper:wp06-25&r=ecm
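A compact rendering of the classical procedure described above, as an illustrative sketch. The paper's point is that the final least-squares step ignores the dependence among the deviates, so the implied SEs and fit tests are wrong.

```python
import numpy as np
from itertools import combinations
from scipy.stats import norm

def case_v_classical(rankings):
    """rankings: (N, n) array; rankings[s, i] is the rank subject s gives
    object i (lower = preferred). Returns least-squares scale values."""
    rankings = np.asarray(rankings)
    N, n = rankings.shape
    rows, z = [], []
    for i, j in combinations(range(n), 2):
        # (a) binary paired comparison; (b) normal deviate of its mean
        p = np.mean(rankings[:, i] < rankings[:, j])
        p = np.clip(p, 1 / (2 * N), 1 - 1 / (2 * N))  # keep deviates finite
        z.append(norm.ppf(p))                 # z_ij estimates mu_i - mu_j
        row = np.zeros(n)
        row[i], row[j] = 1.0, -1.0
        rows.append(row)
    # (c) least squares, with a sum-to-zero identification constraint
    X = np.vstack(rows + [np.ones(n)])
    y = np.array(z + [0.0])
    return np.linalg.lstsq(X, y, rcond=None)[0]
```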
  9. By: Esther Duflo; Rachel Glennerster; Michael Kremer
    Abstract: This paper is a practical guide (a toolkit) for researchers, students and practitioners wishing to introduce randomization as part of a research design in the field. It first covers the rationale for the use of randomization, as a solution to selection bias and a partial solution to publication biases. Second, it discusses various ways in which randomization can be practically introduced in field settings. Third, it discusses design issues such as sample size requirements, stratification, level of randomization and data collection methods. Fourth, it discusses how to analyze data from randomized evaluations when there are departures from the basic framework. It reviews in particular how to handle imperfect compliance and externalities. Finally, it discusses some of the issues involved in drawing general conclusions from randomized evaluations, including the necessary use of theory as a guide when designing evaluations and interpreting results.
    JEL: C93 I0 J0 O0
    Date: 2006–12
    URL: http://d.repec.org/n?u=RePEc:nbr:nberte:0333&r=ecm
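One concrete design calculation from the toolkit's territory: the standard sample-size formula for a two-arm, individually randomized trial with equal allocation (an illustration, not taken from the paper).

```python
import numpy as np
from scipy.stats import norm

def n_per_arm(delta, sigma, alpha=0.05, power=0.8):
    """Sample size per arm to detect a mean difference delta (two-sided)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(2 * (z * sigma / delta) ** 2))

# Detecting a 0.2-standard-deviation effect with 80% power at the 5% level:
print(n_per_arm(delta=0.2, sigma=1.0))   # about 393 subjects per arm
```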
  10. By: David Jamieson Bolder
    Abstract: Modelling term-structure dynamics is an important component in measuring and managing the exposure of portfolios to adverse movements in interest rates. Model selection from the enormous term-structure literature is far from obvious and, to make matters worse, a number of recent papers have called into question the ability of some of the more popular models to adequately describe interest rate dynamics. The author, in attempting to find a relatively simple term-structure model that does a reasonable job of describing interest rate dynamics for risk-management purposes, examines two sets of models. The first set involves variations of the Gaussian affine term-structure model by modestly building on the recent work of Dai and Singleton (2000) and Duffee (2002). The second set includes and extends Diebold and Li (2003). After working through the mathematical derivation and estimation of these models, the author compares and contrasts their performance on a number of in- and out-of-sample forecasting metrics, their ability to capture deviations from the expectations hypothesis, and their predictions in a simple portfolio-optimization setting. He finds that the extended Nelson-Siegel model and an associated generalization, what he terms the "exponential-spline model," provide the most appealing modelling alternatives when considering the various model criteria.
    Keywords: Interest rates; Econometric and statistical methods; Financial markets
    JEL: C0 C6 E4 G1
    Date: 2006
    URL: http://d.repec.org/n?u=RePEc:bca:bocawp:06-48&r=ecm
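A sketch of the Nelson-Siegel building block underlying the Diebold-Li approach the paper extends: for a fixed decay parameter, yields are linear in the level, slope and curvature factors, so each cross-section can be fitted by OLS. The decay value below is the one commonly used with maturities in months; treat it as an assumption.

```python
import numpy as np

def nelson_siegel_design(maturities, lam=0.0609):
    """Factor loadings for level, slope and curvature (maturities in months)."""
    m = np.asarray(maturities, dtype=float)
    slope = (1 - np.exp(-lam * m)) / (lam * m)
    return np.column_stack([np.ones_like(m),             # level
                            slope,                       # slope
                            slope - np.exp(-lam * m)])   # curvature

def fit_cross_section(maturities, yields, lam=0.0609):
    """OLS fit of one yield-curve cross-section; returns the three factors."""
    X = nelson_siegel_design(maturities, lam)
    return np.linalg.lstsq(X, yields, rcond=None)[0]
```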
  11. By: Q. Farooq Akram (Norges Bank (Central Bank of Norway)); Ragnar Nymoen (Department of Economics, University of Oslo)
    Abstract: We investigate the importance of employing a valid model for monetary policy analysis. Specifically, we investigate the economic significance of differences in specification and empirical validity of models. We consider three alternative econometric models of wage and price inflation in Norway. We find that differences in model specification as well as in parameter estimates across models can lead to widely different policy recommendations. We also find that the potential loss from basing monetary policy on a model that may be invalid, or on a suite of models, even when it contains the valid model, can be substantial, even when gradualism is exercised as a concession to model uncertainty. Furthermore, possible losses from such a practice appear to be greater than possible losses from failing to choose the optimal policy horizon for a shock within the framework of a valid model. Our results substantiate the view that a model for policy analysis should necessarily be empirically valid, and caution against compromising this property for other desirable model properties, including robustness.
    Keywords: Model uncertainty; Econometric modelling; Economic significance; Robust monetary policy.
    JEL: C52 E31 E52
    Date: 2006–12–20
    URL: http://d.repec.org/n?u=RePEc:bno:worpap:2006_13&r=ecm
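A stylized toy example of the economic-significance point, with entirely hypothetical models and numbers (not the paper's econometric models): evaluate a quadratic central-bank loss when a policy rule calibrated to one model is applied in a world where a different model is valid.

```python
import numpy as np

def loss(pi, y, lam=0.5, beta=0.99):
    """Discounted quadratic loss over inflation and output-gap paths."""
    t = np.arange(len(pi))
    return float(np.sum(beta ** t * (pi ** 2 + lam * y ** 2)))

def simulate(policy_coef, slope, T=40):
    """Paths after a unit cost-push shock in a toy backward-looking model:
    pi_t = 0.8*pi_{t-1} + slope*y_t, with y_t reacting to the real rate
    implied by the rule i_t = policy_coef * pi_t. All numbers hypothetical."""
    pi, y = np.zeros(T), np.zeros(T)
    pi[0] = 1.0
    for t in range(1, T):
        y[t] = -0.5 * (policy_coef - 1.0) * pi[t - 1]
        pi[t] = 0.8 * pi[t - 1] + slope * y[t]
    return pi, y

# A rule tuned to a model with slope 0.3, evaluated under slopes 0.3 and 0.1:
for slope in (0.3, 0.1):
    print(f"slope={slope}: loss = {loss(*simulate(1.5, slope)):.2f}")
```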
  12. By: Spencer D. Krane
    Abstract: Economic activity depends on agents' real-time beliefs regarding the persistence in the shocks they currently perceive to be hitting the economy. This paper uses an unobserved components model of forecast revisions to examine how the professional forecasters comprising the Blue Chip Economic Consensus have viewed such shocks to GDP over the past twenty years. The model estimates that these forecasters attribute more of the variance of shocks to GDP to permanent factors than to transitory developments. Both shocks are significantly correlated with incoming high-frequency indicators of economic activity; but for the permanent component, the correlation is driven by recessions or other periods when activity was weak. The forecasters' shocks also differ noticeably from those generated by some simple econometric models. Taken together, the results suggest that agents' expectations likely are based on broader information sets than those used to specify most empirical models and that the mechanisms generating expectations may differ with the perceived state of the business cycle.
    Date: 2006
    URL: http://d.repec.org/n?u=RePEc:fip:fedhwp:wp-06-19&r=ecm
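A minimal version of the permanent/transitory split underlying the paper, on simulated data (the paper's model of Blue Chip forecast revisions is richer): a local-level unobserved components model, here via statsmodels, separates a random-walk component from white-noise movements.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
permanent = np.cumsum(0.5 * rng.standard_normal(120))   # random-walk component
revisions = permanent + rng.standard_normal(120)        # plus transitory noise

# Local-level model: revisions = level_t + eps_t, level_t = level_{t-1} + eta_t
model = sm.tsa.UnobservedComponents(revisions, level="local level")
res = model.fit(disp=False)
print(res.params)                        # variances of the two shock types
smoothed_permanent = res.smoothed_state[0]
```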

This nep-ecm issue is ©2007 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.