nep-sog New Economics Papers
on Sociology of Economics
Issue of 2023‒10‒30
four papers chosen by
Jonas Holmström, Axventure AB

  1. Cite-seeing and reviewing: A study on citation bias in peer review. By Stelmakh, Ivan; Rastogi, Charvi; Liu, Ryan; Chawla, Shuchi; Shah, Nihar; Echenique, Federico
  2. Toward open science in PLS-SEM: Assessing the state of the art and future perspectives By Adler, Susanne Jana; Sharma, Pratyush N; Radomir, Lăcrămioara
  3. Defining, measuring, and rewarding scholarly impact: mind the level of analysis By Ramani, Ravi S.; Aguinis, Herman; Coyle-Shapiro, Jacqueline A.M.
  4. Scientific Background to the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2023 By Committee, Nobel Prize

  1. By: Stelmakh, Ivan; Rastogi, Charvi; Liu, Ryan; Chawla, Shuchi; Shah, Nihar; Echenique, Federico
    Abstract: Citations play an important role in researchers’ careers as a key factor in the evaluation of scientific impact. Anecdotal advice encourages authors to exploit this fact by citing prospective reviewers in the hope of obtaining a more positive evaluation of their submission. In this work, we investigate whether such a citation bias actually exists: does citing a reviewer’s own work in a submission cause the reviewer to be positively biased toward it? In conjunction with the review process of two flagship conferences in machine learning and algorithmic economics, we execute an observational study to test for citation bias in peer review. In our analysis, we carefully account for various confounding factors, such as paper quality and reviewer expertise, and apply different modeling techniques to alleviate concerns regarding model mismatch. Overall, our analysis involves 1,314 papers and 1,717 reviewers and detects citation bias in both venues we consider. In terms of effect size, citing a reviewer’s work gives a submission a non-trivial chance of receiving a higher score from that reviewer: the expected increase in score is approximately 0.23 on a 5-point Likert item. For reference, a one-point increase in score from a single reviewer improves the position of a submission by 11% on average.
    Keywords: Humans, Prospective Studies, Peer Review, Bias, Research Personnel, Machine Learning, Research
    Date: 2023–01–01
  2. By: Adler, Susanne Jana (Ludwig-Maximilians-University Munich); Sharma, Pratyush N (The University of Alabama); Radomir, Lăcrămioara (Faculty of Economics and Business Administration, Babeș-Bolyai University)
    Abstract: Driven by high-profile failures to reproduce and replicate published findings, there have been increasing demands to adopt open science practices across scientific disciplines to enhance research transparency. Critics have highlighted the use of underpowered studies and researchers’ analytical degrees of freedom as factors contributing to these issues. Despite methodological advances and updated guidelines, similar concerns persist regarding studies using partial least squares structural equation modeling (PLS-SEM). Open science practices can help alleviate these concerns by facilitating transparency in PLS-SEM-based studies; however, the current level of adherence to these practices remains unknown. In this article, we conduct a comprehensive literature review of leading marketing journals to assess the extent to which open science practices are implemented in PLS-SEM-based studies. Based on the observed lack of adoption, we propose a PLS-SEM-specific preregistration template that researchers can use to foster transparency in their analyses, thereby bolstering confidence in their findings.
    Date: 2023–09–23
  3. By: Ramani, Ravi S.; Aguinis, Herman; Coyle-Shapiro, Jacqueline A.M.
    Abstract: We address the grossly incorrect inferences that result from using the journal impact factor (JIF) as a proxy to assess individual researcher and article scholarly impact. This invalid practice occurs because of confusion about the definition and measurement of impact at different levels of analysis. Specifically, JIF is a journal-level measure of impact, computed by aggregating citations of individual articles (i.e., an upward effect), and is therefore inappropriate when measuring impact at lower levels of analysis, such as that of individual researchers or of individual articles published in a particular journal (i.e., a downward effect). We illustrate the severity of the errors that occur when using JIF to evaluate individual scholarly impact, and advocate for an immediate moratorium on the exclusive use of JIF and other journal-level (i.e., higher level of analysis) measures when assessing the impact of individual researchers and individual articles (i.e., lower level of analysis). Given the importance of, and interest in, assessing the scholarly impact of researchers and articles, we delineate level-appropriate and readily available measures.
    JEL: J50
    Date: 2022–09–21
  4. By: Committee, Nobel Prize (Nobel Prize Committee)
    Abstract: Women are severely underrepresented in the global labor market: around 50% of women work or actively seek work for income, compared to 80% for men. The gender differences in participation are fundamentally driven by variation in women’s participation rates – men’s participation rates are broadly constant across time and countries. The participation gaps between men and women are particularly large in South Asia, the Middle East, and North Africa, where they sometimes exceed 50 percentage points.
    Keywords: Gender in labor markets
    JEL: J70 J71 J78
    Date: 2023–10–09

This nep-sog issue is ©2023 by Jonas Holmström. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.