nep-sog New Economics Papers
on Sociology of Economics
Issue of 2013‒03‒16
seven papers chosen by
Jonas Holmström
Swedish School of Economics and Business Administration

  1. Coercive Journal Self Citations, Impact Factor, Journal Influence and Article Influence By Chia-Lin Chang; Michael McAleer; Les Oxley
  2. What Do Experts Know About Forecasting Journal Quality? A Comparison with ISI Research Impact in Finance By Chia-Lin Chang; Michael McAleer
  3. Ranking Law Journals and the Limits of Journal Citation Reports By Eisenberg, Theodore; Wells, Martin T.
  4. The New Zealand Performance Based Research Fund and its Impact on Publication Activity in Economics By David L. Anderson; John Tressler
  5. Trends and Directions in the Accounting, Business and Economic History of Spain, 1997-2011 By Bernardo Bátiz-Lazo; Rasol Eskandari
  6. Open Access, Social Norms & Publication Choice By Migheli, Matteo; Ramello, Giovanni B.
  7. Zur Ethik von Rankings im Hochschulwesen: Eine Betrachtung aus ökonomischer Perspektive By Müller, Harry

  1. By: Chia-Lin Chang (Department of Applied Economics and Department of Finance, National Chung Hsing University, Taiwan); Michael McAleer (Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam and Tinbergen Institute, The Netherlands; Department of Quantitative Economics, Complutense University of Madrid; and Institute of Economic Research, Kyoto University); Les Oxley (Department of Economics, University of Waikato, New Zealand)
    Abstract: This paper examines the issue of coercive journal self citations and the practical usefulness of two recent journal performance metrics, namely the Eigenfactor score, which may be interpreted as measuring “Journal Influence”, and the Article Influence score, using the Thomson Reuters ISI Web of Science (hereafter ISI) data for 2009 for the 200 most highly cited journals in each of the Sciences and Social Sciences. The paper also compares the two new bibliometric measures with two existing ISI metrics, namely Total Citations and the 5-year Impact Factor (5YIF) (including journal self citations) of a journal. It is shown that the Sciences and Social Sciences differ in the strength of the relationships among journal performance metrics, although the relationships themselves are very similar. Moreover, the Journal Influence and Article Influence journal performance metrics are shown to be closely related empirically to the two existing ISI metrics, and hence add little in practical usefulness to what is already known, except for eliminating the pressure arising from coercive journal self citations. These empirical results are compared with existing results in the bibliometrics literature.
    Keywords: Journal performance metrics, Coercive journal self citations, Research assessment measures, Total citations, 5-year impact factor (5YIF), Eigenfactor, Journal influence, Article influence.
    JEL: A12
    Date: 2013–03
  2. By: Chia-Lin Chang (Department of Applied Economics and Department of Finance, National Chung Hsing University, Taiwan); Michael McAleer (Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam and Tinbergen Institute, The Netherlands; Department of Quantitative Economics, Complutense University of Madrid; and Institute of Economic Research, Kyoto University)
    Abstract: Experts possess knowledge and information that are not publicly available. The paper is concerned with forecasting academic journal quality and research impact using a survey of international experts from a national project on ranking academic finance journals in Taiwan. A comparison is made with publicly available bibliometric data, namely the Thomson Reuters ISI Web of Science citations database (hereafter ISI) for the Business - Finance (hereafter Finance) category. The paper analyses the leading international journals in Finance using expert scores and quantifiable Research Assessment Measures (RAMs), and highlights the similarities and differences in the expert scores and alternative RAMs, where the RAMs are based on alternative transformations of citations taken from the ISI database. Alternative RAMs may be calculated annually or updated daily to answer the perennial questions as to When, Where and How (frequently) published papers are cited (see Chang et al. (2011a, b, c)). The RAMs include the most widely used RAM, namely the classic 2-year impact factor including journal self citations (2YIF), 2-year impact factor excluding journal self citations (2YIF*), 5-year impact factor including journal self citations (5YIF), Immediacy (or zero-year impact factor (0YIF)), Eigenfactor, Article Influence, C3PO (Citation Performance Per Paper Online), h-index, PI-BETA (Papers Ignored - By Even The Authors), 2-year Self-citation Threshold Approval Ratings (2Y-STAR), Historical Self-citation Threshold Approval Ratings (H-STAR), Impact Factor Inflation (IFI), and Cited Article Influence (CAI). As data are not available for 5YIF, Article Influence and CAI for 13 of the leading 34 journals considered, 10 RAMs are analysed for 21 highly-cited journals in Finance. The harmonic mean of the ranks of the 10 RAMs for the 34 highly-cited journals is also presented.
It is shown that emphasizing the 2-year impact factor of a journal, which partly answers the question as to When published papers are cited, to the exclusion of other informative RAMs, which answer Where and How (frequently) published papers are cited, can lead to a distorted evaluation of journal impact and influence relative to the Harmonic Mean rankings. A linear regression model is used to forecast expert scores on the basis of RAMs that capture journal impact, journal policy, the number of high quality papers, and quantitative information about a journal. The robustness of the rankings is also analysed.
    Keywords: Expert scores, Journal quality, RAMs, Impact factor, IFI, C3PO, PI-BETA, STAR, Eigenfactor, Article Influence, h-index, harmonic mean, robustness.
    JEL: C18 C81 C83
    Date: 2013–03
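Two of the RAMs defined in the abstract above lend themselves to a short worked example. The sketch below is purely illustrative: the journal, the citation and paper counts, and the rank vector are all hypothetical, and the 2YIF is computed from its standard definition (citations in a year to items published in the previous two years, divided by the number of citable items in those years).

```python
def two_year_impact_factor(citations_to_prev_two_years, items_prev_two_years):
    """Classic 2YIF: citations this year to the previous two years' papers,
    divided by the number of citable items published in those two years."""
    return citations_to_prev_two_years / items_prev_two_years

def harmonic_mean_of_ranks(ranks):
    """Harmonic mean of a journal's ranks across several RAMs
    (lower is better); rewards strong performance on any one measure."""
    return len(ranks) / sum(1.0 / r for r in ranks)

# Hypothetical journal: 480 citations in 2009 to its 200 papers from 2007-2008.
print(two_year_impact_factor(480, 200))      # 2.4

# Hypothetical ranks under four RAMs (say 2YIF, Eigenfactor, h-index, PI-BETA).
print(harmonic_mean_of_ranks([1, 2, 4, 8]))
```

The harmonic mean is pulled toward a journal's best (lowest) rank, which is one reason rankings based on it can diverge from a ranking based on 2YIF alone.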
  3. By: Eisenberg, Theodore; Wells, Martin T.
    Abstract: Rankings of schools, scholars, and journals emphasize ordinal rank. Journal rankings published by Journal Citation Reports (JCR) are widely used to assess research quality, which influences important decisions by academic departments, universities, and countries. We study refereed law journal rankings by JCR, Washington and Lee Law Library (W&L), and the Australian Research Council (ARC). Both JCR’s and W&L’s multiple measures of journals can be represented by a single latent factor. Yet JCR’s rankings are uncorrelated with W&L’s. The differences appear to be attributable to underrepresentation of law journals in JCR’s database. We illustrate the effects of database bias on rankings through case studies of three elite journals, the Journal of Law & Economics, Supreme Court Review, and the American Law & Economics Review. As a supplement to ordinal ranking, we report the results of a cluster analysis of law journals. The ARC does organize journals into four large groups and provides generally reasonable rankings of journals. But anomalies exist that could be avoided by checking the ARC groups against citation-based measures. Entities that rank should use their data to provide meaningful clusters rather than providing only ordinal ranks.
    Keywords: rankings, journals, research evaluation
    JEL: O31 C15 D02 L89
    Date: 2013–01
  4. By: David L. Anderson (Queen's University); John Tressler (University of Waikato)
    Abstract: New Zealand’s academic research assessment scheme, the Performance Based Research Fund (PBRF), was launched in 2002 with the stated objective of increasing research quality in the nation’s universities. Evaluation rounds were conducted in 2003, 2006 and 2012. In this paper, we employ 22 different journal weighting schemes to generate output estimates of refereed journal paper and page production over three six-year periods (1994–1999, 2000–2005 and 2006–2011). These time periods reflect a pre-PBRF environment, a mixed assessment period, and a pure PBRF research environment, respectively. Our findings indicate that, on average, research productivity, defined in either paper or page terms, has increased since the introduction of the PBRF. However, this outcome is due to a major increase in the quantity of papers and pages produced per capita that has more than offset a decline in the quality of published outputs since the introduction of the PBRF. In other words, our findings suggest that the PBRF has failed to achieve its stated goal of increasing average research quality, but it has resulted in substantial gains in productivity achieved via large increases in the quantity of refereed journal articles.
    Keywords: research measurement; PBRF; research quality; research assessment exercises
    JEL: A11 A14 C81 J24
    Date: 2013–02–28
  5. By: Bernardo Bátiz-Lazo (Bangor University, Wales, UK); Rasol Eskandari (Salford University, Manchester, UK)
    Abstract: This paper examines the determinants of citation success among authors who have published on the economic and business history of Spain. It departs from the dominant cross-section approach to the quantitative assessment of citation success by enabling a 15-year time-series analysis of peer-reviewed Spanish and Latin American outlets. Moreover, it considers working papers published online and assesses the role of Spanish as a medium to communicate with an international audience. Our results suggest a high concentration of publications and citations in a small number of authors (including non-residents) and the importance of local journals in citation success. Besides offering clues about how to improve one’s scientific impact, our citation analysis also sheds light on the state of the field of economic and business history in Spanish economic circles and attests to the role of Spain as an intermediate country in the production and diffusion of scientific knowledge.
    Keywords: knowledge diffusion, electronic publishing, citation indexes, bibliometrics (publication scores), impact, Spain
    JEL: A11 N0 N8 M4 O31
    Date: 2013–02
  6. By: Migheli, Matteo; Ramello, Giovanni B.
    Abstract: The aim of this paper is to shed light on scholarly communication and its current trajectories by examining academics’ perception of Open Access, while also providing a reference case for studying social norm change. In this respect, the issue of publication choice and the role of Open Access journals casts light on the changes affecting the scientific community and its institutional arrangements for validating and circulating new research. The empirical investigation conducted also offers a useful vantage point for gauging the importance of localised social norms in guiding and constraining behaviour.
    Keywords: Open Access, Scholarly Publication, Social Norms, Academics' Behavior, Economics of Science
    JEL: K19 Z13 O33 L17
    Date: 2013–02
  7. By: Müller, Harry
    Abstract: In the controversial debate about the pros and cons of rankings and the validity of bibliometric indicators, attention is repeatedly drawn to the unintended consequences of rankings for the science system. Against this background, both the creation and publication of rankings and their use in institutional or higher-education policy decision-making become an ethical problem. The information needs of university managements and external stakeholders, as well as the research interests of higher-education scholars, stand opposed to academics’ demand for academic freedom, which can also be understood as freedom from constant monitoring by rankings. Only an ethically reflective use of rankings can prevent them from entrenching an undesirable paradigm shift in publication behaviour that would probably be difficult to correct afterwards.
    JEL: I23 M00 M50 Z10
    Date: 2013

This nep-sog issue is ©2013 by Jonas Holmström. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.