nep-sog New Economics Papers
on Sociology of Economics
Issue of 2012‒04‒23
two papers chosen by
Jonas Holmström
Swedish School of Economics and Business Administration

  1. Robust Ranking of Journal Quality: An Application to Economics By Chia-Lin Chang; Esfandiar Maasoumi; Michael McAleer
  2. The Relevance of the ‘h’ and ‘g’ Index to Economics in the Context of a Nation-wide Research Evaluation Scheme: The New Zealand Case By David L. Anderson; John Tressler

  1. By: Chia-Lin Chang (Department of Applied Economics, Department of Finance, National Chung Hsing University Taichung, Taiwan); Esfandiar Maasoumi (Department of Economics, Emory University); Michael McAleer (Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam and Tinbergen Institute, The Netherlands, Department of Quantitative Economics, Complutense University of Madrid, and Institute of Economic Research, Kyoto University)
    Abstract: The paper focuses on the robustness of rankings of academic journal quality and research impact, in general and in Economics in particular, based on the widely used Thomson Reuters ISI Web of Science citations database (ISI). The paper analyses 299 leading international journals in Economics using quantifiable Research Assessment Measures (RAMs), and highlights the similarities and differences among the RAMs, which are based on alternative transformations of citations. Alternative RAMs may be calculated annually or updated daily to determine When, Where and How (frequently) published papers are cited (see Chang et al. (2011a, b, c)). The RAMs are grouped in four distinct classes covering impact factor, mean citations and non-citations, journal policy, number of high-quality papers, and journal influence and article influence. These classes include the most widely used RAMs, namely the classic 2-year impact factor including journal self-citations (2YIF), 2-year impact factor excluding journal self-citations (2YIF*), 5-year impact factor including journal self-citations (5YIF), Eigenfactor (or Journal Influence), Article Influence, h-index, and PI-BETA (Papers Ignored - By Even The Authors). As all existing RAMs to date have been static, two new dynamic RAMs are developed to capture changes in impact factor over time (5YD2 = 5YIF/2YIF) and Escalating Self Citations (ESC). We highlight robust rankings based on the harmonic mean of the ranks of RAMs across the four classes.
It is shown that emphasizing the 2-year impact factor of a journal, which partly answers the question as to When published papers are cited, to the exclusion of other informative RAMs, which answer Where and How (frequently) published papers are cited, can lead to a distorted evaluation of journal quality, impact and influence relative to the harmonic mean of the ranks.
    Keywords: Research assessment measures, Impact factor, IFI, C3PO, PI-BETA, STAR, Eigenfactor, Article Influence, h-index, 5YD2, ESC, harmonic mean of the ranks, economics, journal rankings.
    JEL: C18 C81 Y10
    Date: 2012–03
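The robust ranking described above aggregates a journal's ranks across the four RAM classes via their harmonic mean. A minimal sketch of that aggregation (the journal names and per-class ranks below are hypothetical illustrations, not the paper's data):

```python
from statistics import harmonic_mean

# Hypothetical ranks of three journals in each of four RAM classes
# (illustrative numbers only; not taken from Chang, Maasoumi and McAleer).
ranks = {
    "Journal A": [1, 4, 2, 3],
    "Journal B": [2, 1, 5, 2],
    "Journal C": [3, 2, 1, 1],
}

# A journal's overall score is the harmonic mean of its class ranks
# (lower is better). Unlike the arithmetic mean, the harmonic mean is
# less dominated by a single very strong rank, making the combined
# ranking more robust to any one RAM.
scores = {journal: harmonic_mean(r) for journal, r in ranks.items()}
robust_ranking = sorted(scores, key=scores.get)
print(robust_ranking)
```

With these illustrative ranks, Journal C comes out on top despite never ranking first in more than two classes, because its ranks are uniformly good across all four.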
  2. By: David L. Anderson (Queen's University); John Tressler (University of Waikato)
    Abstract: The purpose of this paper is to explore the relevance of the citation-based ‘h’ and ‘g’ indexes as a means for measuring research output in economics. This study is unique in that it is the first to utilize the ‘h’ and ‘g’ indexes in the context of a time-limited evaluation period and to provide comprehensive coverage of all academic economists in all university-based economics departments within a nation state. For illustration purposes we have selected New Zealand’s Performance Based Research Fund (PBRF) as our evaluation scheme. In order to provide a frame of reference for ‘h’ and ‘g’ index output measures, we have also estimated research output using a number of journal-based weighting schemes. In general, our findings suggest that ‘h’ and ‘g’ index scores are strongly associated with low-powered journal ranking schemes and weakly associated with high-powered journal weighting schemes. More specifically, we found the ‘h’ and ‘g’ indexes to suffer from a lack of differentiation: for example, 52 percent of all participants received a score of zero under both measures, and 92 and 89 percent received scores of two or less under ‘h’ and ‘g’, respectively. Overall, our findings suggest that ‘h’ and ‘g’ indexes should not be incorporated into a PBRF-like framework.
    Keywords: g and h indexes; bibliometrics; journal weighting schemes; PBRF; research measurement
    JEL: A19 C81 J24
    Date: 2012–04–15
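For readers unfamiliar with the two indexes evaluated above: the h-index is the largest h such that h of an author's papers each have at least h citations, and (under one common definition) the g-index is the largest g such that the top g papers together have at least g² citations. A minimal sketch with a hypothetical citation record:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def g_index(citations):
    """Largest g such that the top g papers have at least g**2
    citations in total (capped at the number of papers)."""
    total, g = 0, 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        total += c
        if total >= i * i:
            g = i
    return g

# Hypothetical citation counts for one author's papers.
papers = [10, 8, 5, 4, 3, 0, 0]
print(h_index(papers), g_index(papers))  # → 4 5
```

The example also illustrates the paper's differentiation concern: an author whose papers are all uncited scores zero on both indexes, so in a young or lightly cited population many researchers collapse onto the same few low scores.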

This nep-sog issue is ©2012 by Jonas Holmström. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.