Posts Tagged ‘Academic publishing’

Science is losing its ability to self-correct

October 20, 2013

With the explosion in the number of researchers, the increasing rush to publication and the corresponding proliferation of traditional and on-line journals as avenues of publication, The Economist carries an interesting article making the point that the assumption that science is self-correcting is under extreme pressure. “There is no cost to getting things wrong,” says Brian Nosek, a psychologist at the University of Virginia who has taken an interest in his discipline’s persistent errors. “The cost is not getting them published.”

The field of psychology, and especially social psychology, has been much in the news over doubts about research on “priming”.

“I SEE a train wreck looming,” warned Daniel Kahneman, an eminent psychologist, in an open letter last year. The premonition concerned research on a phenomenon known as “priming”. Priming studies suggest that decisions can be influenced by apparently irrelevant actions or events that took place just before the cusp of choice. They have been a boom area in psychology over the past decade, and some of their insights have already made it out of the lab and into the toolkits of policy wonks keen on “nudging” the populace.

Dr Kahneman and a growing number of his colleagues fear that a lot of this priming research is poorly founded. Over the past few years various researchers have made systematic attempts to replicate some of the more widely cited priming experiments. Many of these replications have failed. In April, for instance, a paper in PLoS ONE, a journal, reported that nine separate experiments had not managed to reproduce the results of a famous study from 1998 purporting to show that thinking about a professor before taking an intelligence test leads to a higher score than imagining a football hooligan.

It is not just “soft” fields which have problems. It is apparent that in medicine a large number of published results cannot be replicated:

… irreproducibility is much more widespread. A few years ago scientists at Amgen, an American drug company, tried to replicate 53 studies that they considered landmarks in the basic science of cancer, often co-operating closely with the original researchers to ensure that their experimental technique matched the one used first time round. According to a piece they wrote last year in Nature, a leading scientific journal, they were able to reproduce the original results in just six. Months earlier Florian Prinz and his colleagues at Bayer HealthCare, a German pharmaceutical giant, reported in Nature Reviews Drug Discovery, a sister journal, that they had successfully reproduced the published results in just a quarter of 67 seminal studies.

The governments of the OECD, a club of mostly rich countries, spent $59 billion on biomedical research in 2012, nearly double the figure in 2000. One of the justifications for this is that basic-science results provided by governments form the basis for private drug-development work. If companies cannot rely on academic research, that reasoning breaks down. When an official at America’s National Institutes of Health (NIH) reckons, despairingly, that researchers would find it hard to reproduce at least three-quarters of all published biomedical findings, the public part of the process seems to have failed.

It is not just that research results cannot be replicated. So much of what is published is just plain wrong, and the belief that science is self-correcting is itself under pressure:

Academic scientists readily acknowledge that they often get things wrong. But they also hold fast to the idea that these errors get corrected over time as other scientists try to take the work further. Evidence that many more dodgy results are published than are subsequently corrected or withdrawn calls that much-vaunted capacity for self-correction into question. There are errors in a lot more of the scientific papers being published, written about and acted on than anyone would normally suppose, or like to think. …… Statistical mistakes are widespread. The peer reviewers who evaluate papers before journals commit to publishing them are much worse at spotting mistakes than they or others appreciate. Professional pressure, competition and ambition push scientists to publish more quickly than would be wise. A career structure which lays great stress on publishing copious papers exacerbates all these problems. “There is no cost to getting things wrong,” says Brian Nosek, a psychologist at the University of Virginia who has taken an interest in his discipline’s persistent errors. “The cost is not getting them published.” 

…… In 2005 John Ioannidis, an epidemiologist from Stanford University, caused a stir with a paper showing why, as a matter of statistical logic, the idea that only one such paper in 20 gives a false-positive result was hugely optimistic. Instead, he argued, “most published research findings are probably false.” 
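Ioannidis’s point is, at bottom, simple arithmetic about the positive predictive value of a “significant” result. Here is a minimal sketch of that logic (my own illustration, with made-up parameter values rather than figures from the paper):

```python
# Positive predictive value of a "significant" finding, following the
# statistical logic of Ioannidis (2005). All parameter values here are
# illustrative assumptions, not figures from the paper.

def ppv(prior_odds, power=0.8, alpha=0.05):
    """Probability that a significant result reflects a true effect.

    prior_odds -- pre-study odds that a tested relationship is real
    power      -- chance of detecting a real effect (1 - beta)
    alpha      -- false-positive rate of the significance test
    """
    true_hits = power * prior_odds
    false_hits = alpha  # per unit of false hypotheses tested
    return true_hits / (true_hits + false_hits)

# In an exploratory field where only 1 in 10 tested hypotheses is real,
# even a well-powered study offers a weak guarantee:
print(ppv(prior_odds=0.1))             # ~0.62: nearly 4 in 10 "findings" false
print(ppv(prior_odds=0.1, power=0.2))  # ~0.29: with low power, most are false
```

Add publication bias and flexible analyses on top of this, and the real-world numbers only get worse, which is precisely the article’s point.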

The tendency to publish only positive results also leads to statistics being skewed so that results can be shown as positive:

The negative results are much more trustworthy; …….. But researchers and the journals in which they publish are not very interested in negative results. They prefer to accentuate the positive, and thus the error-prone. Negative results account for just 10-30% of published scientific literature, depending on the discipline. This bias may be growing. A study of 4,600 papers from across the sciences conducted by Daniele Fanelli of the University of Edinburgh found that the proportion of negative results dropped from 30% to 14% between 1990 and 2007. Lesley Yellowlees, president of Britain’s Royal Society of Chemistry, has published more than 100 papers. She remembers only one that reported a negative result.

…. Other data-heavy disciplines face similar challenges. Models which can be “tuned” in many different ways give researchers more scope to perceive a pattern where none exists. According to some estimates, three-quarters of published scientific papers in the field of machine learning are bunk because of this “overfitting”
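That failure mode is easy to reproduce. A minimal sketch of the general idea (a generic illustration of overfitting, not any particular study’s method): fit polynomials of increasing “tuning freedom” to pure noise, and the training fit improves while predictive power does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pure noise: there is no real relationship between x and y.
x = np.linspace(0, 1, 30)
y = rng.normal(size=30)
x_new = rng.uniform(0, 1, size=30)  # fresh data from the same (non-)process
y_new = rng.normal(size=30)

for degree in (1, 5, 9):
    coeffs = np.polyfit(x, y, degree)  # more degrees of freedom = more "tuning"
    train_mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.2f}, test MSE {test_mse:.2f}")

# Training error falls steadily as tuning freedom grows, but error on fresh
# data does not improve: the discovered "pattern" was never there.
```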

The idea of peer review as some kind of quality check on the results being published is grossly optimistic:

The idea that there are a lot of uncorrected flaws in published studies may seem hard to square with the fact that almost all of them will have been through peer-review. This sort of scrutiny by disinterested experts—acting out of a sense of professional obligation, rather than for pay—is often said to make the scientific literature particularly reliable. In practice it is poor at detecting many types of error.

John Bohannon, a biologist at Harvard, recently submitted a pseudonymous paper on the effects of a chemical derived from lichen on cancer cells to 304 journals describing themselves as using peer review. An unusual move; but it was an unusual paper, concocted wholesale and stuffed with clangers in study design, analysis and interpretation of results. Receiving this dog’s dinner from a fictitious researcher at a made-up university, 157 of the journals accepted it for publication. ….

……. As well as not spotting things they ought to spot, there is a lot that peer reviewers do not even try to check. They do not typically re-analyse the data presented from scratch, contenting themselves with a sense that the authors’ analysis is properly conceived. And they cannot be expected to spot deliberate falsifications if they are carried out with a modicum of subtlety.

Fraud is very likely second to incompetence in generating erroneous results, though it is hard to tell for certain. 

And then there is the issue that results from Big Science can never be replicated because the cost of the initial work is so high. Medical research and clinical trials are also extremely expensive. Journals have little interest in publishing replications (even when they are negative). And, to compound the issue, those who provide funding are less likely to extend funding merely for replication or for negative results.

People who pay for science, though, do not seem seized by a desire for improvement in this area. Helga Nowotny, president of the European Research Council, says proposals for replication studies “in all likelihood would be turned down” because of the agency’s focus on pioneering work. James Ulvestad, who heads the division of astronomical sciences at America’s National Science Foundation, says the independent “merit panels” that make grant decisions “tend not to put research that seeks to reproduce previous results at or near the top of their priority lists”. Douglas Kell of Research Councils UK, which oversees Britain’s publicly funded research, argues that current procedures do at least tackle the problem of bias towards positive results: “If you do the experiment and find nothing, the grant will nonetheless be judged more highly if you publish.”

Trouble at the lab 

The rubbish will only decline when there is a cost to publishing shoddy work which outweighs the gains of adding to a researcher’s list of publications. At some point researchers will need to be held liable and accountable for their products (their publications); not just for fraud or misconduct, but even for negligence or gross negligence when they do not carry out their work using the best available practices of the field. These are standards that some (but not all) professionals are held to, and there should be no academic researcher who is not also subject to such a standard. If peer review is to recover some of its lost credibility, then anonymous reviews must disappear and reviewers must be much more explicit about what they have checked and what they have not.

PeerJ – Open access Journal gets started

June 13, 2012

Open access is still evolving and the bottom line is finding the revenue model that works. But open access is inevitable, and the glory days of the high-impact-factor, pay-walled journals are coming to an end. They will not disappear any time soon, but history will show that their era was the 20th century and that their decline was the natural consequence of the world wide web.

PeerJ provides academics with two Open Access publication venues: PeerJ (a peer-reviewed academic journal) and PeerJ PrePrints (a ‘pre-print server’). Both are focused on the Biological and Medical Sciences, and together they provide an integrated solution for your publishing needs. Submissions open late Summer.

Reuters has also reported on the launch.


Scientific retractions increasing sharply but is it due to better detection or increased misconduct?

October 5, 2011

Retractions of scientific papers are increasing sharply.

I am a strong believer in the Rule of the Iceberg where “whatever becomes visible is only 10% of all that exists”. And while I do not know if the number of retractions of scientific papers is increasing because detection methods are improved or because scientific misconduct is increasing, I am quite sure that the misconduct that is indicated by retractions is only a small part of all the misconduct that goes on.

What is clear, however, is that the world wide web provides a powerful new forum for the exercise of checks and balances. It provides a hitherto unavailable method for mobilising resources from a wide and disparate group of individuals. The success of web sites such as Retraction Watch and Vroniplag is testimony to this. The investigative power of the on-line community is particularly evident with Vroniplag, as described on Prof. Debora Weber-Wulff’s blog. And this investigative power – even if made up of “amateurs” in the on-line community – can bring to bear a vast and varied range of techniques and expertise which, if harnessed towards a particular target, can function extremely rapidly. The recent on-line investigation and disclosure that an award-winning nature photographer had been photo-shopping a great number of his photographs of lynxes, wolves and raccoons, and had invented stories about his encounters, was entirely due to “amateurs” on the Flashback Forum in Sweden, who very quickly created a web site to disclose all his transgressions and exactly how he had manipulated his images.

Nature addresses the subject of retractions today:

This week, some 27,000 freshly published research articles will pour into the Web of Science, Thomson Reuters’ vast online database of scientific publications. Almost all of these papers will stay there forever, a fixed contribution to the research literature. But 200 or so will eventually be flagged with a note of alteration such as a correction. And a handful — maybe five or six — will one day receive science’s ultimate post-publication punishment: retraction, the official declaration that a paper is so flawed that it must be withdrawn from the literature. … But retraction notices are increasing rapidly. In the early 2000s, only about 30 retraction notices appeared annually. This year, the Web of Science is on track to index more than 400 (see ‘Rise of the retractions’) — even though the total number of papers published has risen by only 44% over the past decade. …. 
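Putting the quoted figures together makes the trend concrete (my own back-of-the-envelope arithmetic on the numbers in the excerpt, nothing more):

```python
# Rough arithmetic on the figures quoted above.
notices_early_2000s = 30  # retraction notices per year, early 2000s
notices_2011 = 400        # what the Web of Science is on track to index
paper_growth = 1.44       # total papers up 44% over the past decade

rate_increase = (notices_2011 / notices_early_2000s) / paper_growth
print(f"per-paper retraction rate up roughly {rate_increase:.0f}x")  # ~9x
```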

…… When the UK-based Committee on Publication Ethics (COPE) surveyed editors’ attitudes to retraction two years ago, it found huge inconsistencies in policies and practices between journals, says Elizabeth Wager, a medical writer in Princes Risborough, UK, who is chair of COPE. That survey led to retraction guidelines that COPE published in 2009. But it’s still the case, says Wager, that “editors often have to be pushed to retract”. …… 

In surveys, around 1–2% of scientists admit to having fabricated, falsified or modified data or results at least once (D. Fanelli, PLoS ONE 4, e5738; 2009). But over the past decade, retraction notices for published papers have increased from 0.001% of the total to only about 0.02%. And, Ioannidis says, that subset of papers is “the tip of the iceberg” — too small and fragmentary for any useful conclusions to be drawn about the overall rates of sloppiness or misconduct.

Instead, it is more probable that the growth in retractions has come from an increased awareness of research misconduct, says Steneck. That’s thanks in part to the setting up of regulatory bodies such as the US Office of Research Integrity in the Department of Health and Human Services. These ensure greater accountability for the research institutions, which, along with researchers, are responsible for detecting mistakes.

The growth also owes a lot to the emergence of software for easily detecting plagiarism and image manipulation, combined with the greater number of readers that the Internet brings to research papers. In the future, wider use of such software could cause the rate of retraction notices to dip as fast as it spiked, simply because more of the problematic papers will be screened out before they reach publication. On the other hand, editors’ newfound comfort with talking about retraction may lead to notices coming at an even greater rate. …… 

Read the article

A graphic of retractions is here.

The academic and scientific community will – perforce – mirror the society in which it is embedded. Standards of ethics and instances of misconduct will follow those of the surrounding environment. But the scientific community is somewhat protected, in that it rarely has to bear liability for what it publishes. Having to bear some responsibility, and face liability, for the quality of what they produce could improve researchers’ ethical standards immensely. Bringing incompetent or cheating scientists to book is not an attack on science; it is what science needs if it is to regain some of the reputation that has been tarnished in recent times. With the spotlight that is now available in the form of the world wide web, I expect the level of scrutiny to increase, and this too can only be a force for good.

Researchers show that peer review is easily corrupted

September 18, 2010

PhysicsWorld reports on a new paper:
Peer-review in a world with rational scientists: Toward selection of the average
by Stefan Thurner and Rudolf Hanel
Section of Science of Complex Systems, Medical University of Vienna, Spitalgasse 23, A-1090, Austria

Just a small number of bad referees can significantly undermine the ability of the peer-review system to select the best scientific papers… Scholarly peer review is the commonly accepted procedure for assessing the quality of research before it is published in academic journals. It relies on a community of experts within a narrow field of expertise to have both the knowledge and the time to provide comprehensive reviews of academic manuscripts.

Stefan Thurner and Rudolf Hanel at the Medical University of Vienna created a model of a generic specialist field where referees, selected at random, can fall into one of five categories. There are the “correct”, who accept the good papers and reject the bad. There are the “altruists” and the “misanthropists”, who accept or reject all papers respectively. Then there are the “rational”, who reject papers that might draw attention away from their own work. And finally, there are the “random”, who are not qualified to judge the quality of a paper because of incompetence or lack of time.

Within this model community, the quality of scientists is assumed to follow a Gaussian distribution, and each scientist produces one new paper every two time-units, its quality reflecting the author’s ability. At every step in the model, each new paper is passed to two referees chosen at random from the community (self-review excluded), and each reviewer may either accept or reject the paper. The paper is published if both reviewers approve it, and rejected if both do not like it. If the reviewers are divided, the paper gets accepted with a probability of 0.5.
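The model as described is simple enough to simulate. Below is a minimal sketch of a single round of the process; it paraphrases the published setup, but the mix of referee types and the paper-quality noise are arbitrary assumptions of mine, and a referee’s type is redrawn for each review rather than fixed per scientist as in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def vote(kind, paper_quality, own_quality):
    """One referee's verdict under the five behaviours in the model."""
    if kind == "correct":        # accept good papers, reject bad ones
        return paper_quality > 0.0
    if kind == "altruist":       # accept everything
        return True
    if kind == "misanthropist":  # reject everything
        return False
    if kind == "rational":       # reject work that outshines their own
        return paper_quality < own_quality
    return rng.random() < 0.5    # "random": unqualified or out of time

def one_round(n=1000, kinds=("correct",) * 8 + ("rational", "random")):
    """Gaussian community of n scientists; each paper goes to two referees."""
    ability = rng.normal(size=n)
    accepted = []
    for author in range(n):
        paper = ability[author] + rng.normal(scale=0.1)  # quality tracks ability
        referees = rng.choice([i for i in range(n) if i != author],
                              size=2, replace=False)
        votes = [vote(rng.choice(kinds), paper, ability[r]) for r in referees]
        # Published if both accept; a split decision is accepted with p = 0.5.
        if all(votes) or (any(votes) and rng.random() < 0.5):
            accepted.append(paper)
    return float(np.mean(accepted))

print("mean accepted quality, 20% bad referees:", one_round())
print("mean accepted quality, all 'correct':   ", one_round(kinds=("correct",)))
```

Even this crude version reproduces the headline effect: a modest share of “rational” and “random” referees visibly drags down the average quality of what gets accepted.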

Peer review gauntlet

Thurner and Hanel find that even a small presence of rational or random referees can significantly reduce the quality of published papers. Daniel Kennefick, a cosmologist at the University of Arkansas with a special interest in sociology, believes that the study exposes the vulnerability of peer review when referees are not accountable for their decisions.

Kennefick feels that the current system also encourages scientists to publish findings that may not offer much of an advance. “Many authors are nowadays determined to achieve publication for publication’s sake, in an effort to secure an academic position and are not particularly swayed by the argument that it is in their own interests not to publish an incorrect article.”

(This could have been written about Marc Hauser — https://ktwop.wordpress.com/2010/09/17/harvard-reviews-hausers-work-but-is-the-purpose-investigation-or-vindication/)

But Tim Smith, senior publisher for New Journal of Physics feels that the study overlooks the role of journal editors. “Peer-review is certainly not flawless and alternatives to the current process will continue to be proposed. In relation to this study however, one shouldn’t ignore the role played by journal editors and Boards in accounting for potential conflicts of interest, and preserving the integrity of the referee selection and decision-making processes,” he says.

In fact, journal editors have much to answer for in the perversion of the peer-review process which was revealed by Climategate. (The Hockey Stick Illusion by Andrew Montford, reviewed here)

Thurner argues that science would benefit from the creation of a “market for scientific work”. He envisages a situation where journal editors and their “scouts” search preprint servers for the most innovative papers before approaching authors with an offer of publication. The best papers, he believes, would naturally be picked up by a number of editors leaving it up to authors to choose their journal. “Papers that no-one wants to publish remain on the server and are open to everyone – but without the ‘prestigious’ quality stamp of a journal,” Thurner explains.

When reviewers show bias (in acceptance or in rejection), or misuse and hide behind the cloak of anonymity, and are not required to be accountable, then Hausergate and Climategate become inevitable.

Oliver Manuel comments: The most basic problem with ANONYMOUS peer-review is this: “That methodology is flawed and those flaws have been gradually undermining, corrupting, and trivializing American science for decades.” Anonymous peer review of papers and proposals has been steadily “undermining, corrupting, and trivializing American science” since I started my research career in 1960.

The evolution of peer review with the use of open servers is now overdue but is beginning.

