nep-sog New Economics Papers
on Sociology of Economics
Issue of 2022‒12‒12
eight papers chosen by
Jonas Holmström
Axventure AB

  1. Top 25% Institutions and Economists in Viet Nam, as of June 2022 By Cranier, Louis
  3. Low and high-impact-factor journals: which has better peer review quality? By Nguyen, Minh-Hoang
  5. DOP: toward an open-source and informative alternative to the DOI linking system By Moustafa, Khaled
  6. Reducing barriers to open science by standardizing practices and realigning incentives By Adimoelja, Alvina; Athreya, Advait
  7. A brighter vision of the potential of open science for benefiting practice: A ManyOrgs Proposal By Castille, Christopher Michael; O'Boyle, Ernest; Köhler, Tine
  8. Beyond Academia: A case for reviews of gray literature for science-policy processes and applied research By Yoshida, Yuki; Sitas, Nadia; Mannetti, Lelani; O'Farrell, Patrick; Arroyo-Robles, Gabriela; Berbés-Blázquez, Marta; González-Jiménez, David; Nelson, Valerie; Niamir, Aidin; Harmáčková, Zuzana V.

  1. By: Cranier, Louis
    Abstract: Top 25% Institutions and Economists in Viet Nam, as of June 2022. The rankings: Top 25% institutions in Viet Nam, all authors, all publication years. For Viet Nam, there are 198 authors affiliated with 57 institutions. All institutions in this region.
    Date: 2022–07–08
  2. By: Bowbrick, Peter
    Abstract: Economics is drowning in a flood of bad books and papers, many of them written with the sole objective of getting another publication, many hurriedly written, sloppily researched, culpably negligent or fraudulent. Inevitably, good, honest economists are influenced by some of these, with the result that they produce bad economics themselves. Some waste their entire careers by basing them on bad economics they learnt at university. Many journals are reluctant to publish refutations for commercial reasons. Refereeing is necessarily an imperfect system. A Journal of Economic Refutations would have the main objective of identifying bad economics so that researchers and professionals could avoid it, and so the public would not suffer the harm it causes. It would identify people publishing bad economics, so publishing sloppy, culpably negligent or fraudulent economics would damage their careers rather than helping them. A refutation is worth a hundred normal papers, sometimes thousands. Another objective is to show readers how to refute – a key professional skill for real-world economics.
    Keywords: Economic; Refutation; Journal
    JEL: A1 A10 A11 A2 A20
    Date: 2022–10–22
  3. By: Nguyen, Minh-Hoang
    Abstract: Do high-impact-factor journals have better peer review quality than low-impact-factor journals? Attempting to shed light on the relationship between the journal impact factor and quality, a group of researchers, led by Anna Severin (University of Bern), have recently examined 10,000 peer review reports submitted to 1,644 medical and life sciences journals. They used artificial intelligence to analyze two measures proxying peer review quality: thoroughness and helpfulness. While sentences covering topic categories, materials and techniques, presentation and reporting, outcomes and discussion, and importance and relevance are employed to gauge thoroughness, sentences that provide recommendations and solutions, examples, praise, or criticism are used to estimate helpfulness.
    Date: 2022–09–17
  4. By: Bowbrick, Peter
    Abstract: Much of economic literature is based on theory or evidence that has been refuted, and economists may spend years of their lives using long-discredited economics. It is, however, virtually impossible to find these refutations. It is proposed to set up a database of refutations, so that economists can check that the economics they use and the papers they cite, have not been refuted. This will also discourage economists from publishing papers that they know to be bad or carelessly written.
    Keywords: Database; Refutations
    JEL: A1 A10 A11 A14
    Date: 2022–10–22
  5. By: Moustafa, Khaled (Founder & Editor of ArabiXiv)
    Abstract: Digital object identifier (DOI) links in scientific publications are not informative. DOIs are not standardized to provide useful information about the published articles, authors’ names, journals’ names, and dates of publication (look at the DOI of this manuscript; it does not tell much about it). Besides, so far there is no free and open-source alternative to the current DOI system. Here, I suggest a new open and free linking and indexation system called DOP (for “Date Of Publication”) based on four main components of any scientific publication: 1) the main author’s name, 2) the journal’s name, 3) the date of publication (hence the denomination “DOP”), and 4) the time of publication. By using these four elements, DOP links are clearly informative, comparable, and even memorable compared with the current DOI links. For example, a paper published by author X in journal Y, on date D and at time T, can have a DOP link identifier as follows: DOP: authorX/JournalY/D/T, where D refers to the date of publication by year, month, and day, and T refers to the time of publication by hour, minute, and second. By including the time of publication down to the second, DOP links will be easily standardizable and always specific and unique across journals and authors. Subsequently, a large bibliographic database can also be established based on DOP identification links for the indexation and archiving of scientific publications in the various scientific fields.
    Date: 2022–07–30
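    The DOP scheme described in the abstract can be sketched in a few lines. The abstract does not fix the separators or the date and time formats, so the ISO-style formatting below is an assumption for illustration only; the author and journal names are hypothetical placeholders:

    ```python
    from datetime import datetime

    def dop_link(author: str, journal: str, published: datetime) -> str:
        # D: the date of publication by year, month, and day.
        date_part = published.strftime("%Y-%m-%d")
        # T: the time of publication by hour, minute, and second,
        # which is what makes the link unique per the abstract.
        time_part = published.strftime("%H:%M:%S")
        return f"DOP: {author}/{journal}/{date_part}/{time_part}"

    # A paper by author "X" in journal "Y", published 2022-07-30 at 14:05:09:
    print(dop_link("AuthorX", "JournalY", datetime(2022, 7, 30, 14, 5, 9)))
    # → DOP: AuthorX/JournalY/2022-07-30/14:05:09
    ```

    Unlike a DOI, every component of such a link is human-readable, and the second-level timestamp serves as the uniqueness guarantee the abstract relies on.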
  6. By: Adimoelja, Alvina; Athreya, Advait (Massachusetts Institute of Technology)
    Abstract: Open science can accelerate the pace of research and contribute to a more equitable society. However, the current culture of scientific research is not optimally structured to promote extensive sharing of a range of outputs. In this policy position paper, we outline current open science practices and key bottlenecks in their broader adoption. We propose that national science agencies create a digital infrastructure framework that would standardize open science principles and make them actionable. We also suggest ways of redefining research success to align better with open science, and to incentivize a system where sharing various research outputs is beneficial to researchers.
    Date: 2022–07–12
  7. By: Castille, Christopher Michael (Nicholls State University); O'Boyle, Ernest; Köhler, Tine
    Abstract: Guzzo et al. (2022), in their focal article, express concerns that rewarding open science practices, particularly in scholarly publishing, may harm the practical relevance of our research. They go on to urge greater reliance on conceptual replication over direct or exact replication to verify claims in our field. Although we concur with the majority of their recommendations, their prescriptions nevertheless do not fully address the deeper issue of publication and outcome reporting bias traceable to insufficient resources. Other sciences have effectively addressed this resource problem via crowdsourcing, large-scale collaborations, and multi-site replication (both conceptual and direct). Such initiatives are a pragmatic, if challenging to implement, solution to problems that face many areas of science such as ours (e.g., ensuring sufficient statistical power, assessing the generalizability and replicability of effects, spurring the uptake of open science practices, promoting diversity and inclusivity). Here, we propose that I-O psychologists create such an initiative that primarily serves practice. We tentatively call this initiative ‘ManyOrgs’. We also clarify how this open science initiative complements Guzzo et al.
    Date: 2022–08–04
  8. By: Yoshida, Yuki; Sitas, Nadia; Mannetti, Lelani; O'Farrell, Patrick; Arroyo-Robles, Gabriela; Berbés-Blázquez, Marta; González-Jiménez, David; Nelson, Valerie; Niamir, Aidin; Harmáčková, Zuzana V.
    Abstract: Gray literature is increasingly considered to complement evidence and knowledge from peer-reviewed literature for science-policy processes and applied research. On the one hand, science-policy assessments need to both consider a diversity of worldviews, knowledge types and values from a variety of sectors and actor groups, and synthesize policy-relevant findings that are salient, legitimate and credible. On the other hand, practitioners and scholars conducting applied research, especially in environmental and health-related fields, are affected by the time lag and documented biases of academic publication processes. While gray literature holds diverse perspectives that need to be integrated in science-policy processes as well as practical evidence unfiltered by commercial publication processes, its heterogeneity has made it challenging to access through conventional means for a literature review. This paper details one endeavor within the Values Assessment of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) to review gray literature using Google’s Programmable Search Engine. In the absence of a standardized approach, we build on a limited experiential knowledge base for reviewing gray literature and report on the potential applicability of our strategy for future reviews. Our results contrast the findings of our parallel review of academic literature, underlining the importance of mobilizing different knowledge bases in science-policy assessments, evidence-based practices, and applied research.
    Date: 2022–10–02

This nep-sog issue is ©2022 by Jonas Holmström. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found on the NEP website. For comments, please write to the director of NEP, Marco Novarese. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.