Posts Tagged ‘John P. A. Ioannidis’

Nature: “US behavioural researchers exaggerate findings”

August 27, 2013

The field of behavioural research within social psychology has not covered itself in glory in recent times. The cases of Diederik Stapel, Dirk Smeesters and Marc Hauser are all too recent. But my perception is that the entire field – globally – has been subject to exaggeration and the actions of narcissists. I had not perceived it as being a particular issue for the US. But I would not be surprised if the “publish or perish” pressure is stronger in the US than in many other countries.

But a new study published in PNAS has “found that primary studies whose outcome included behavioral parameters were generally more likely to report extreme effects, and those with a corresponding author based in the US were more likely to deviate in the direction predicted by their experimental hypotheses, particularly when their outcome did not include additional biological parameters. Nonbehavioral studies showed no such “US effect” …”

Fanelli, D. & Ioannidis, J. P. A., US studies may overestimate effect sizes in softer research, Proc. Natl Acad. Sci. USA (2013), doi.org/10.1073/pnas.1302997110

Nature reports:

US behavioural researchers have been handed a dubious distinction — they are more likely than their colleagues in other parts of the world to exaggerate findings, according to a study published today.

The research highlights the importance of unconscious biases that might affect research integrity, says Brian Martinson, a social scientist at the HealthPartners Institute for Education and Research in Minneapolis, Minnesota, who was not involved with the study. “The take-home here is that the ‘bad guy/good guy’ narrative — the idea that we only need to worry about the monsters out there who are making up data — is naive,” Martinson says.

The study, published in Proceedings of the National Academy of Sciences, was conducted by John Ioannidis, a physician at Stanford University in California, and Daniele Fanelli, an evolutionary biologist at the University of Edinburgh, UK. The pair examined 82 meta-analyses in genetics and psychiatry that collectively combined results from 1,174 individual studies. The researchers compared meta-analyses of studies based on non-behavioural parameters, such as physiological measurements, to those based on behavioural parameters, such as progression of dementia or depression.

The researchers then determined how well the strength of an observed result or effect reported in a given study agreed with that of the meta-analysis in which the study was included. They found that, worldwide, behavioural studies were more likely than non-behavioural studies to report ‘extreme effects’ — findings that deviated from the overall effects reported by the meta-analyses. And US-based behavioural researchers were more likely than behavioural researchers elsewhere to report extreme effects that deviated in favour of their starting hypotheses.

“We might call this a ‘US effect,’” Fanelli says. “Researchers in the United States tend to report, on average, slightly stronger results than researchers based elsewhere.”

This ‘US effect’ did not occur in non-behavioural research, and studies with both behavioural and non-behavioural components exhibited slightly less of the effect than purely behavioural research. Fanelli and Ioannidis interpret this finding to mean that US researchers are more likely to report strong effects, and that this tendency is more likely to show up in behavioural research, because researchers in these fields have more flexibility to make different methodological choices that produce more diverse results.

The study looked at a larger volume of research than has been examined in previous studies on bias in behavioural research, says Brian Nosek, a psychologist at the University of Virginia in Charlottesville. …

Abstract

Many biases affect scientific research, causing a waste of resources, posing a threat to human health, and hampering scientific progress. These problems are hypothesized to be worsened by lack of consensus on theories and methods, by selective publication processes, and by career systems too heavily oriented toward productivity, such as those adopted in the United States (US). Here, we extracted 1,174 primary outcomes appearing in 82 meta-analyses published in health-related biological and behavioral research sampled from the Web of Science categories Genetics & Heredity and Psychiatry and measured how individual results deviated from the overall summary effect size within their respective meta-analysis. We found that primary studies whose outcome included behavioral parameters were generally more likely to report extreme effects, and those with a corresponding author based in the US were more likely to deviate in the direction predicted by their experimental hypotheses, particularly when their outcome did not include additional biological parameters. Nonbehavioral studies showed no such “US effect” and were subject mainly to sampling variance and small-study effects, which were stronger for non-US countries. Although this latter finding could be interpreted as a publication bias against non-US authors, the US effect observed in behavioral research is unlikely to be generated by editorial biases. Behavioral studies have lower methodological consensus and higher noise, making US researchers potentially more likely to express an underlying propensity to report strong and significant findings.
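To make the method a little more concrete, here is a minimal Python sketch of the general idea of measuring how a primary study deviates from its meta-analytic summary. This is my own illustration, not the authors’ actual procedure: it assumes a simple fixed-effect (inverse-variance) summary, and the deviation_scores function and the example numbers are hypothetical.

```python
# Illustrative sketch (not the paper's exact metric): given effect-size
# estimates and standard errors for the primary studies in one
# meta-analysis, compute an inverse-variance summary effect and each
# study's standardized deviation from it. Studies sitting far from the
# summary, in the direction of the tested hypothesis, are the kind of
# "extreme effects" the paper counts.
import numpy as np

def deviation_scores(effects, ses):
    """Standardized deviation of each study from the fixed-effect summary."""
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    weights = 1.0 / ses**2                      # inverse-variance weights
    summary = np.sum(weights * effects) / np.sum(weights)
    return (effects - summary) / ses            # z-like deviation per study

# Hypothetical example: five studies estimating the same effect.
z = deviation_scores([0.10, 0.35, 0.05, 0.80, 0.15],
                     [0.10, 0.12, 0.08, 0.15, 0.11])
print(z.round(2))  # the fourth study deviates strongly upward
```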

Related: Retraction Watch

Animal studies biased to give “positive” results

July 18, 2013

It is not suggested that the bias is any form of deliberate misconduct, but a new paper shows that animal studies are subject to an “excess significance bias”.

Tsilidis KK, Panagiotou OA, Sena ES, Aretouli E, Evangelou E, et al. (2013) Evaluation of Excess Significance Bias in Animal Studies of Neurological Diseases. PLoS Biol 11(7): e1001609. doi:10.1371/journal.pbio.1001609

Author Summary

Studies have shown that the results of animal biomedical experiments fail to translate into human clinical trials; this could be attributed either to real differences in the underlying biology between humans and animals, to shortcomings in the experimental design, or to bias in the reporting of results from the animal studies. We use a statistical technique to evaluate whether the number of published animal studies with “positive” (statistically significant) results is too large to be true. We assess 4,445 animal studies for 160 candidate treatments of neurological disorders, and observe that 1,719 of them have a “positive” result, whereas only 919 studies would a priori be expected to have such a result. According to our methodology, only eight of the 160 evaluated treatments should have been subsequently tested in humans. In summary, we judge that there are too many animal studies with “positive” results in the neurological disorder literature, and we discuss the reasons and potential remedies for this phenomenon.
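The logic of an excess significance test can be sketched in a few lines of Python. The counts below come straight from the Author Summary; reducing the comparison to a single binomial test is my simplification (the paper works meta-analysis by meta-analysis, estimating the expected number of significant results from each study’s power), so treat this as a rough illustration only.

```python
# Hedged sketch of the "excess significance" idea (after Ioannidis &
# Trikalinos, 2007): compare the observed number of statistically
# significant ("positive") studies with the number expected from the
# studies' estimated power. Counts taken from the Author Summary above;
# the single binomial test is my simplification of the paper's method.
from scipy.stats import binom

n_studies = 4445          # animal studies assessed
observed_positive = 1719  # studies reporting a "positive" result
expected_positive = 919   # results expected a priori from estimated power

expected_rate = expected_positive / n_studies
# Probability of seeing at least the observed number of "positive"
# results if each study's chance of significance matched its power:
p_excess = binom.sf(observed_positive - 1, n_studies, expected_rate)
print(f"expected {expected_positive}, observed {observed_positive}, "
      f"P(>= observed) = {p_excess:.3g}")
```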

Roli Roberts writes at the PLOS blog:

But a study just published in PLOS Biology by Konstantinos Tsilidis, John Ioannidis and colleagues at Stanford University shows that a meta-analysis is only as good as the scientific literature that it uses. That literature seems to be compromised by substantial bias in the reporting of animal studies and may be giving us a misleading picture of the chances that potential treatments will work in humans. …

Rather than wilful fraud, the authors of the PLOS Biology study suggest that this excess significance comes from two main sources. The first is that scientists conducting an animal study might analyse their data in several different ways, but ultimately tend to pick the method that gives them the “better” result. The second arises because scientists usually want to publish in higher-profile journals, which tend to strongly prefer studies with positive, rather than negative, results. This can delay or even prevent publication, or relegate the study to a low-visibility journal, all of which reduce its chances of inclusion in a meta-analysis.

The new work raises important questions about the way in which the scientific literature works, and it’s possible that the types of bias reported in the PLOS Biology paper have been responsible for the inappropriate movement of treatments from animal studies into human clinical trials. What do we do about it? Here are the authors’ suggestions:

  1. Animal studies should adhere to strict guidelines (such as the ARRIVE guidelines) as to study design and analysis.
  2. Availability of methodological details and raw data would make it easier for other scientists to verify published studies.
  3. Animal studies (like human clinical trials) should be pre-registered so that publication of the outcome, however negative, is ensured.

Well, these are all excellent, but most people would also say that there are problems elsewhere in the system – in the high-profile journals’ desire to have a cute story with well-defined conclusions, and in the forces exerted on authors by institutions and funding bodies to publish in those high-profile journals.
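Incidentally, the first of the two mechanisms described above – analysing the same data several ways and keeping the “better” result – is easy to demonstrate with a small simulation. This is a hypothetical sketch of my own, not anything from the paper: each simulated “study” runs three slightly different t-tests on the same null data and reports the smallest p-value, and the false-positive rate climbs well above the nominal 5%.

```python
# Simulating analytic flexibility on null data: each "study" tries three
# arbitrary analysis variants (here, t-tests on nested subsets of the
# same samples, a stand-in for real methodological choices) and keeps
# the smallest p-value. The false-positive rate rises above 0.05.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_studies, n, n_analyses = 2000, 30, 3
false_positives = 0
for _ in range(n_studies):
    a, b = rng.normal(size=(2, n))          # no true effect exists
    pvals = [ttest_ind(a[: n - 5 * k], b[: n - 5 * k]).pvalue
             for k in range(n_analyses)]    # three analysis choices
    if min(pvals) < 0.05:                   # report the "better" result
        false_positives += 1
print(false_positives / n_studies)  # noticeably above the nominal 0.05
```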