Scientific excellence can only truly be judged by history. But history has eyes only for impact, and if excellent science causes no great change to scientific orthodoxy, it is soon forgotten. For a scientist, the judgements of history, delivered long after the science is done, are of no real significance. Even where academic freedom is the main motivator for the scientist, the degrees of freedom available are tied to academic success. An academic or scientific career depends increasingly on contemporaneous judgements – and here social networking, peer review and bibliometric factors are decisive. There may well be some correlation between academic success and the “goodness” of the scientist, but it is not the success or the bibliometrics which are causative.
As Lars Walloe puts it (Walloe-on-Exellence):
In the evaluation process many scientists and nearly all university and research council administrators love all kinds of bibliometric tools. This has of course a simple explanation. The “bureaucracy” likes to have a simple quantitative tool, which can be used with the aid of a computer and the internet to give an “objective” measure of excellence. However, excellence is not directly related either to the impact factor of the journal in which the work is published, or to the number of citations, or to the number of papers published, or even to some other more sophisticated bibliometric indices. Of course there is some correlation, but it is in my judgement weaker than many would like to believe, and uncritical use of these tools easily leads to wrong conclusions. For instance, the impact factor of a journal is mainly determined by the very best papers published in it and not so much by the many ordinary papers. We know well that even high impact factor journals like Science and Nature, or high impact journals in more specialized fields, from time to time publish papers that are not so excellent.
… I often meet scientists for whom obtaining high bibliometric scores serves as the prime guidance in their work. Too many of them are really not that good, but were just lucky or work in a field where it was easier to get many citations. … If you are working with established methods in a popular field you can be fairly sure of getting your papers published. I can mention in detail some medical fields where I know this has happened or is happening today. The scientists in such fields accumulate a high number of publications and citations, but the research is not necessarily excellent.
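For readers who have not met the metric Walloe is criticising, the standard two-year journal impact factor is nothing more than a ratio of citations to citable items. As a sketch (the notation $C$ and $N$ below is ours, for illustration only):

\[
\mathrm{IF}_y \;=\; \frac{C_y(y-1) + C_y(y-2)}{N_{y-1} + N_{y-2}}
\]

where $C_y(x)$ is the number of citations received in year $y$ by items the journal published in year $x$, and $N_x$ is the number of citable items the journal published in year $x$. Because the numerator is a sum over every paper in the journal, a handful of exceptionally cited papers can dominate it – which is exactly Walloe’s point that the impact factor tells you little about the ordinary paper.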
And getting your paper published has now become so important to the advancement of an academic career that journals are proliferating. Many of the new journals have shifted their business models to be based on authors’ fees rather than on volume of readership. This is a very “safe” business model, since profits are ensured before the journal has even been published, and if the journal is on-line the costs are minimal. It is virtually the “self-publishing” of papers. You pay your money and get your paper published.
The reality today is that more papers are being published by more authors in more journals than ever before. But fewer are actually being read. Papers are cited without having been read – let alone understood.
Another reason for the proliferation could be that publishers, particularly those who charge authors fees for publishing, are in the business of making money.
Authoring journal articles not only enhances one’s CV (the old “publish or perish” cliché); it is required by Residency Review Committees as evidence of “scholarly activity” in training programs. Maybe it’s good for attracting referrals too.
The publish or perish ethos has also led to a proliferation of authors per paper!
The trend was first noted in 1993, by a paper in Acta Radiologica and a letter in the BMJ: the number of authors per paper has risen dramatically over the years.
A study of 12 radiology journals found that the number of authors per paper doubled from 2.2 in 1966 to 4.4 in 1991. A review of Neurosurgery and the Journal of Neurosurgery spanning 50 years found that the average went from 1.8 authors per article in 1945 to 4.6 in 1995.
Of note, the above two articles were each written by a single author.
Three psychiatrists from Dartmouth analyzed original scientific articles in four of the most prestigious journals in the United States – Archives of Internal Medicine, Annals of Internal Medicine, Journal of the American Medical Association, and the New England Journal of Medicine – from 1980 to 2000. They found that the mean number of authors per paper increased from 4.5 to 6.9. The same holds for two plastic surgery journals, where the average number of authors rose from 1.4 to 4.0 and from 1.7 to 4.2 over the 50 years from 1955 to 2005. The proportion of single-author papers fell from 78% to 3% in one journal and from 51% to 8% in the other.
In orthopedics, a review of the American and British versions of the Journal of Bone and Joint Surgery over the 60 years from 1949 to 2009 showed an increase in authors per paper from 1.6 to 5.1.
An equally impressive rise took place in two leading thoracic surgery journals: for the Journal of Thoracic and Cardiovascular Surgery the average went from 1.4 in 1936 to 7.5 in 2006, and for the Annals of Thoracic Surgery from 3.1 in 1966 to 6.8 in 2006.
And the winner is a paper with 3171 authors! Needless to say, it comes from Big Science and the Large Hadron Collider:
The paper with the most authors is “Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC”, published in Physics Letters B with 3171 authors. The list of authors takes up 9 full pages.
Too many journals, too many papers, too many authors and too many citations. But that does not mean there is more excellence in science.
