on Sociology of Economics
Issue of 2011‒11‒07
three papers chosen by Jonas Holmström, Swedish School of Economics and Business Administration
By: Ismael Rafols; Loet Leydesdorff; Alice O'Hare; Paul Nightingale; Andy Stirling
Abstract: This study provides new quantitative evidence on how journal rankings can disadvantage interdisciplinary research during research evaluations. Using publication data, it compares the degree of interdisciplinarity and the research performance of innovation studies units with business and management schools in the UK. Using various mappings and metrics, this study shows that: (i) innovation studies units are consistently more interdisciplinary than business and management schools; (ii) the top journals in the Association of Business Schools’ rankings span a less diverse set of disciplines than lower-ranked journals; (iii) this pattern results in a more favourable performance assessment of the business and management schools, which are more disciplinary-focused. Lastly, it demonstrates how a citation-based analysis challenges the ranking-based assessment. In summary, the investigation illustrates how ostensibly ‘excellence-based’ journal rankings have a systematic bias in favour of mono-disciplinary research. The paper concludes with a discussion of the implications of these phenomena, in particular how the resulting bias is likely to negatively affect the evaluation and associated financial resourcing of interdisciplinary organisations, and may encourage researchers to be more compliant with disciplinary authority.
Keywords: Interdisciplinary, Evaluation, Ranking, Innovation, Bibliometrics, REF
JEL: A12 O30
Date: 2011
URL: http://d.repec.org/n?u=RePEc:aal:abbswp:11-05&r=sog
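The abstract does not specify which metrics the paper employs, but a standard measure of interdisciplinarity in this literature is Rao-Stirling diversity, which weights the mix of disciplines a unit draws on by how distant those disciplines are from one another. A minimal illustrative sketch, assuming each unit's references have already been mapped to disciplinary categories; the discipline names, proportions, and distances below are invented for the example:

```python
# Illustrative sketch of Rao-Stirling diversity:
#   D = sum over i != j of p_i * p_j * d_ij
# where p_i is the share of a unit's references in discipline i and
# d_ij is the dissimilarity between disciplines i and j (in [0, 1]).
# Not the paper's actual data or implementation.

def rao_stirling(proportions, distance):
    """proportions: dict mapping discipline -> share of references.
    distance: dict mapping (i, j) -> dissimilarity; missing pairs
    default to 1.0 (maximally distant)."""
    d = 0.0
    for i, pi in proportions.items():
        for j, pj in proportions.items():
            if i != j:
                d += pi * pj * distance.get((i, j), 1.0)
    return d

# Toy comparison: a unit citing three fields vs. one dominated by a single field.
dist = {("econ", "soc"): 0.6, ("soc", "econ"): 0.6,
        ("econ", "psy"): 0.8, ("psy", "econ"): 0.8,
        ("soc", "psy"): 0.5, ("psy", "soc"): 0.5}
interdisciplinary = rao_stirling({"econ": 0.4, "soc": 0.3, "psy": 0.3}, dist)
disciplinary = rao_stirling({"econ": 0.9, "soc": 0.1}, dist)
assert interdisciplinary > disciplinary  # the mixed unit scores higher
```

The point of the weighting is that citing two closely related fields counts for less than citing two distant ones, which is why such metrics can separate genuinely interdisciplinary units from ones that merely span neighbouring specialties.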
By: Henry Sauermann; Michael Roach
Abstract: A growing body of research on firms’ “open science” strategies rests on the notion that scientists have a strong preference for publishing and that firms are able to extract a wage discount if they allow scientists to publish. Drawing on a survey of 1,400 life scientists about to enter the job market, we suggest an alternative view. First, we show significant heterogeneity in the price scientists assign to the opportunity to publish in firms, and those scientists who seek industry careers have particularly weak preferences for publishing. Thus, many job applicants are not willing to accept lower wages for jobs that let them publish, and firms pursuing open science strategies may instead have to pay publishing incentives that fulfill both sorting and incentive functions. Second, we show that scientists with higher ability have a higher price of publishing but also expect to be paid higher wages regardless of the publishing regime. Thus, they are not cheaper to hire than other scientists if allowed to publish, but they are more expensive if publishing is restricted. Finally, we show that scientists publish not simply for “peer recognition” but also for more specific reasons, including the opportunity to advance science or to move to higher-paying jobs. Different reasons predict what price a scientist assigns to the opportunity to publish and may also have very different implications for the sustainability of competitive advantages derived from open science strategies.
Keywords: Scientists; publishing; competitive advantage
JEL: O31 L82
Date: 2011
URL: http://d.repec.org/n?u=RePEc:aal:abbswp:11-03&r=sog
By: Peter C.B. Phillips (Cowles Foundation, Yale University)
Abstract: Learned societies commonly carry out selection processes to add new fellows to an existing fellowship. Criteria vary across societies but are typically based on subjective judgments concerning the merit of individuals who are nominated for fellowships. These subjective assessments may be made by existing fellows as they vote in elections to determine the new fellows, or they may be decided by a selection committee of fellows and officers of the society who determine merit after reviewing nominations and written assessments. Human judgment inevitably plays a central role in these determinations and, notwithstanding its limitations, is usually regarded as a necessary ingredient in making an overall assessment of qualifications for fellowship. The present paper suggests a mechanism by which these merit assessments may be complemented with a quantitative rule that incorporates both subjective and objective elements. The goal of 'measuring merit' may be elusive, but quantitative assessment rules can help to widen the effective electorate (for instance, by including the decisions of editors, the judgments of independent referees, and received opinion about research) and mitigate distortions that can arise from cluster effects, invisible college coalition voting, and inner sanctum bias. The rule considered here is designed to assist the selection process by explicitly taking into account subjective assessments of individual candidates for election as well as direct quantitative measures of quality obtained from bibliometric data. The methodology has application to a wide arena of quality assessment and professional ranking exercises.
Keywords: Bibliometric data, Election, Fellowship, Measurement, Meritocracy, Peer review, Quantification, Subjective assessment, Voting
JEL: A14 Z13
Date: 2011–10
URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1833&r=sog
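The abstract does not reproduce the paper's actual rule; purely as an illustration of the general idea, a rule of this kind might blend a normalised subjective vote share with a normalised bibliometric score. The weighting scheme, the normalisation to [0, 1], and the candidate figures below are all assumptions for the sketch, not the paper's specification:

```python
# Hypothetical sketch of a merit rule combining a subjective element
# (vote share among electors) with an objective element (a normalised
# bibliometric score). The weight lam on the subjective part is an
# illustrative assumption, not the paper's choice.

def merit_score(vote_share, citation_score, lam=0.5):
    """Blend a subjective vote share and a bibliometric score,
    both normalised to [0, 1], with weight lam on the vote share."""
    if not (0.0 <= vote_share <= 1.0 and 0.0 <= citation_score <= 1.0):
        raise ValueError("inputs must be normalised to [0, 1]")
    return lam * vote_share + (1.0 - lam) * citation_score

# Two hypothetical candidates: A polls well but has a modest citation
# record; B polls moderately but has a strong record.
candidates = {"A": merit_score(0.8, 0.4), "B": merit_score(0.5, 0.9)}
ranking = sorted(candidates, key=candidates.get, reverse=True)
```

With equal weights the bibliometric component can overturn a pure vote ordering, which is the mechanism by which such a rule "widens the effective electorate" beyond the fellows who happen to vote.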