Posts Tagged ‘PubMed’

An analysis of retractions of scientific papers in India

August 12, 2011

From Professor T.A. Abinandanan on his blog Nanopolitan:

Scientific Misconduct in India: An Analysis of Retractions in PubMed

I presented this work at the Workshop on Academic Ethics organized by Rahul Siddharthan, Gautam Menon and N.S. Siddharthan about a month ago.

Quick summary: The PubMed database lists ~103,000 papers published by Indian authors during the previous decade (2001-2010); 70 of these papers have been retracted, and 45 of the retractions are due to some form of misconduct. Plagiarism is by far the most common form of misconduct: all but one of the 45 misconduct-related retractions involved plagiarism.

If that doesn’t sound bad enough, consider this: At 44 per 100,000 papers, India’s misconduct rate is far higher than that of countries such as the UK, the USA, Germany and Japan.
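As a quick sanity check, the “per 100,000” figure can be reproduced from the counts quoted above. The sketch below is plain Python and not part of the original analysis; since the ~103,000 figure is approximate, the result is too.

```python
# Back-of-the-envelope check of the rates quoted above, using the cited counts.
papers_2001_2010 = 103_000       # approximate PubMed count of Indian-authored papers
retractions_total = 70           # retractions among those papers
retractions_misconduct = 45      # retractions attributed to some form of misconduct

misconduct_rate = retractions_misconduct / papers_2001_2010 * 100_000
retraction_rate = retractions_total / papers_2001_2010 * 100_000

print(f"Misconduct-related retractions: ~{misconduct_rate:.0f} per 100,000 papers")  # ~44
print(f"All retractions:                ~{retraction_rate:.0f} per 100,000 papers")  # ~68
```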

There’s some silver lining, though: retractions of papers by Indian authors show a steep fall since 2007, either because Indian researchers know better now, or because plagiarized papers are less and less likely to make it into print in the first place, thanks to the increasingly widespread use of plagiarism-detection software by journals.

Here’s the HTML version; if you prefer a PDF, get it from here.

US scientists more likely to publish fake research

November 16, 2010

A new online paper in the Journal of Medical Ethics examines all scientific research papers retracted from the PubMed database between 2000 and 2010.

Retractions in the scientific literature: do authors deliberately commit research fraud? R. Grant Steen, J Med Ethics, doi:10.1136/jme.2010.038125

Abstract

Background Papers retracted for fraud (data fabrication or data falsification) may represent a deliberate effort to deceive, a motivation fundamentally different from papers retracted for error. It is hypothesised that fraudulent authors target journals with a high impact factor (IF), have other fraudulent publications, diffuse responsibility across many co-authors, delay retracting fraudulent papers and publish from countries with a weak research infrastructure.

Methods All 788 English language research papers retracted from the PubMed database between 2000 and 2010 were evaluated. Data pertinent to each retracted paper were abstracted from the paper and the reasons for retraction were derived from the retraction notice and dichotomised as fraud or error. Data for each retracted article were entered in an Excel spreadsheet for analysis.

Results Journal IF was higher for fraudulent papers (p<0.001). Roughly 53% of fraudulent papers were written by a first author who had written other retracted papers (‘repeat offender’), whereas only 18% of erroneous papers were written by a repeat offender (χ² = 88.40; p<0.0001). Fraudulent papers had more authors (p<0.001) and were retracted more slowly than erroneous papers (p<0.005). Surprisingly, there was significantly more fraud than error among retracted papers from the USA (χ² = 8.71; p<0.05) compared with the rest of the world.

Conclusions This study reports evidence consistent with the ‘deliberate fraud’ hypothesis. The results suggest that papers retracted because of data fabrication or falsification represent a calculated effort to deceive. It is inferred that such behaviour is neither naïve, feckless nor inadvertent.
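The Methods don’t spell out the exact search strategy, but a comparable query can be run against PubMed’s E-utilities. The sketch below uses Biopython’s Entrez wrapper and PubMed’s “Retracted Publication” publication-type filter; the query string, email address and retmax value are illustrative assumptions, not the author’s actual protocol.

```python
# Illustrative PubMed search for English-language papers flagged as retracted,
# roughly mirroring the study's 2000-2010 window. A sketch, not the study's
# actual search strategy.
from Bio import Entrez

Entrez.email = "you@example.com"  # placeholder; NCBI asks for a contact address

query = (
    '"Retracted Publication"[Publication Type] '
    'AND English[Language] '
    'AND ("2000/01/01"[PDAT] : "2010/12/31"[PDAT])'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=1000)
result = Entrez.read(handle)
handle.close()

print("Retracted papers found:", result["Count"])
pmids = result["IdList"]  # per-paper details could then be pulled with Entrez.efetch
```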

PhysOrg summarises the paper:

The study author searched the PubMed database for every scientific research paper that had been withdrawn—and therefore officially expunged from the public record—between 2000 and 2010. A total of 788 papers had been retracted during this period. Around three quarters of these papers had been withdrawn because of a serious error (545); the rest of the retractions were attributed to fraud (data fabrication or falsification).

The largest number of retracted papers was written by US first authors (260), accounting for a third of the total. One in three of these was attributed to fraud.

The UK, India, Japan, and China each had more than 40 papers withdrawn during the decade. Asian nations, including South Korea, accounted for 30% of retractions. Of these, one in four was attributed to fraud.

The fakes were more likely to appear in leading publications with a high “impact factor.” This is a measure of how often research is cited in other peer-reviewed journals. More than half (53%) of the faked research papers had been written by a first author who was a “repeat offender.” This was the case in only one in five (18%) of the erroneous papers.

The average number of authors on all retracted papers was three, but some had 10 or more. Faked research papers were significantly more likely to have multiple authors. Each first author who was a repeat fraudster had an average of six co-authors, each of whom had had another three retractions.

“The duplicity of some authors is cause for concern,” comments the author. Retraction is the strongest sanction that can be applied to published research, but currently, “[it] is a very blunt instrument used for offences both gravely serious and trivial.”
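As a rough check on the repeat-offender comparison reported above (53% of fraudulent versus 18% of erroneous papers), the underlying 2x2 table can be approximately reconstructed from the published totals and run through a chi-square test. The counts below are back-calculated from rounded percentages, so the statistic will land near, but not exactly on, the χ² = 88.40 reported in the paper.

```python
# Approximate reconstruction of the repeat-offender comparison. Counts are
# back-calculated from the reported totals (788 retractions: 545 error, 243 fraud)
# and the rounded percentages (53% vs 18%); they are not the study's raw data.
from scipy.stats import chi2_contingency

fraud_total, error_total = 243, 545
fraud_repeat = round(0.53 * fraud_total)   # ~129 fraudulent papers by repeat offenders
error_repeat = round(0.18 * error_total)   # ~98 erroneous papers by repeat offenders

table = [
    [fraud_repeat, fraud_total - fraud_repeat],
    [error_repeat, error_total - error_repeat],
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, p = {p:.2g}")  # close to, but not exactly, the published value
```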