Posts Tagged ‘peer review’

The Lancet: Scientists are “not incentivised to be right”

April 19, 2015

In time, incorrect results get corrected. In time, bad science cannot prevail – or so the belief goes. But if all the articles about fraud in funding applications, dodgy peer review, predatory journals, confirmation bias and plain fraud in science are only half true, then most of what is reported as current science is not worth the paper it (isn’t) written on. Researchers reporting results are concerned not with being correct but with securing the next tranche of funding. “Politically” correct beliefs are not challenged by younger researchers because research funding will be jeopardised if “authority” is challenged. “Peer reviews” become “pal reviews” and even “self reviews”. Journals manipulate impact factors by “pal citations”.

It should all get corrected in time, except that publication of corroborating results is discouraged as not being original while “negative results” are not considered worthy of publication. A generally accepted but incorrect hypothesis then never gets corrected until an opposing theory is “proven” with positive results.

“In their quest for telling a compelling story, scientists too often sculpt data to fit their preferred theory of the world”.

These two articles, one in The Lancet and one in SMH illustrate the point:

1. The Lancet: What is medicine’s 5 sigma?

“A lot of what is published is incorrect.” I’m not allowed to say who made this remark because we were asked to observe Chatham House rules. We were also asked not to take photographs of slides. Those who worked for government agencies pleaded that their comments especially remain unquoted, since the forthcoming UK election meant they were living in “purdah”—a chilling state where severe restrictions on freedom of speech are placed on anyone on the government’s payroll. Why the paranoid concern for secrecy and non-attribution? Because this symposium—on the reproducibility and reliability of biomedical research, held at the Wellcome Trust in London last week—touched on one of the most sensitive issues in science today: the idea that something has gone fundamentally wrong with one of our greatest human creations.

The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness. As one participant put it, “poor methods get results”. The Academy of Medical Sciences, Medical Research Council, and Biotechnology and Biological Sciences Research Council have now put their reputational weight behind an investigation into these questionable research practices. The apparent endemicity of bad research behaviour is alarming. In their quest for telling a compelling story, scientists too often sculpt data to fit their preferred theory of the world. Or they retrofit hypotheses to fit their data. Journal editors deserve their fair share of criticism too. We aid and abet the worst behaviours. Our acquiescence to the impact factor fuels an unhealthy competition to win a place in a select few journals. Our love of “significance” pollutes the literature with many a statistical fairy-tale. We reject important confirmations. Journals are not the only miscreants. Universities are in a perpetual struggle for money and talent, endpoints that foster reductive metrics, such as high-impact publication. National assessment procedures, such as the Research Excellence Framework, incentivise bad practices. And individual scientists, including their most senior leaders, do little to alter a research culture that occasionally veers close to misconduct. 

Can bad scientific practices be fixed? Part of the problem is that no-one is incentivised to be right. ……… 

2. SMH: How Australian scientists are bending the rules to get research funding

“Science has become really opaque, especially when it comes to grant funding”, says UNSW climate researcher Ben McNeil. As a result innovation suffers, he says. 

The offences in question range from junior scientists ghost-writing grant applications for senior colleagues to researchers conspiring with others to influence who might review their work.

In one extreme case a cancer scientist discovered his unfunded project idea had been stolen and used by another research group a year later.

The two major schemes that fund research in Australia – the National Health and Medical Research Council (NHMRC) and the Australian Research Council (ARC) – hand out about 1.5 billion dollars a year. The impact of these grants is almost impossible to quantify, but some have resulted in big medical discoveries such as the cervical cancer vaccine and new cancer treatments. They also generate new knowledge, jobs and industries.

While their $1.5 billion budget seems hefty, together the ARC and NHMRC reject about four out of every five ideas each year. In 2013, only 1883 ideas out of 9004 received funding. The consequence of researchers’ attempts to “game” the system is that, if undetected, precious money may be allocated to unworthy research projects, potentially at the expense of the next lifesaving vaccine. …… 

While the NHMRC and the ARC say they have no evidence that “gaming” is widespread, a recent survey of 200 health and medical researchers suggests this may not be the case.

Before handing out money, both bodies ask panels of anonymous experts to assess project ideas, as well as the calibre of the people who propose the idea.

When public health researcher Adrian Barnett and two colleagues surveyed researchers about whether they form alliances with others to boost their chances of a better review, they were shocked to see one in five admitted to the practice.

“I knew it was going on, but I didn’t think it’d be as high,” says Barnett, from the Queensland University of Technology.


Adding a “total asshole” to your author list can get you published

October 10, 2014

This is reblogged from Retraction Watch.

How do you “peer review” a paper written by a “total asshole”? Presumably there are sufficient peers available. 

Retraction Watch:

When science writer Vito Tartamella noticed a physics paper co-authored by Stronzo Bestiale (which means “total asshole” in Italian) he did what anyone who’s written a book on surnames would do: He looked it up in the phonebook.

What he found was a lot more complicated than a funny name.

It turns out Stronzo Bestiale doesn’t exist.

In 1987, Lawrence Livermore National Lab physicist William G. Hoover had a paper on molecular dynamics rejected by two journals: Physical Review Letters and the Journal of Statistical Physics. So he added Stronzo Bestiale to the list of co-authors, changed the title, and resubmitted the paper. The Journal of Statistical Physics accepted it.

27 years later, Bestiale is still listed as co-author on several papers. He also has a Scopus profile that lists him as an active researcher at the Institute of Experimental Physics, University of Vienna.

This isn’t the first time a scientist has added a fictional co-author to a paper to make a point. In 1978, Polly Matzinger added her impeccably-named Afghan hound, Galadriel Mirkwood, to a Journal of Experimental Medicine paper to protest the use of passive voice in scientific papers.

Hilarious as these examples are, they do prove a point that’s a little less fun: The scientific community needs to be on its toes about who (or what) is writing the papers it publishes, to help keep merde out of the literature.

Peer review as the erroneous comments of anonymous experts

May 28, 2014

There is a presumed halo around peer review which is quite unjustified. And when a publish or perish attitude prevails in academia it is inevitable that political correctness – as defined by the “peers” – colours whatever gets published. And “political correctness”  in science leads to a stamp of approval for what fits with the “consensus”. Nothing revolutionary can get through. Anything which smacks of being “heretical” has little chance of passing “peer review”.

 In 1936, Albert Einstein—who was used to people like Planck making decisions about his papers without outside opinions—was incensed when the American journal Physical Review sent his submission to another physicist for evaluation. In a terse note to the editor, Einstein wrote: “I see no reason to address the—in any case erroneous—comments of your anonymous expert. On the basis of this incident I prefer to publish the paper elsewhere.”

Melinda Baldwin considers the question “Did Isaac Newton need peer review?”

Peer review at scholarly journals involves recruiting experts to evaluate a paper before it is approved for publication. When a paper is submitted, the editors send it to two or three reviewers who are considered knowledgeable about the topic. The reviewers and the authors, in theory, do not know each other’s identities. If the reviewers raise objections to the methods or conclusions, the authors must revise the paper before it will be accepted for publication. If the objections are significant, the paper is rejected.

Most observers regard non-peer-reviewed results as, at best, preliminary. Instinctively, this makes sense. When a paper is printed in a scientific journal, it acquires the “imprimatur of scientific authenticity” (to quote the physicist John Ziman) and many observers consider its findings to be established scientific facts. It seems like a good idea to subject a paper to expert scrutiny before granting it that sort of status.

But it turns out that peer review is only the scientific community’s most recent method of providing this scrutiny—and it’s worth asking if science is, in fact, “real” only if it’s been approved by anonymous referees.

…. Nature published some papers without peer review up until 1973. In fact, many of the most influential texts in the history of science were never put through the peer review process, including Isaac Newton’s 1687 Principia Mathematica, Albert Einstein’s 1905 paper on relativity, and James Watson and Francis Crick’s 1953 Nature paper on the structure of DNA. ….

……… Peer review’s history is of particular interest now because there is an increasing sense in the scientific community that all is not well with the peer review process. In recent years, high-profile papers have passed peer review only to be heavily criticized after publication (such as the 2011 “arsenic DNA” paper in Science that claimed a particular bacterium could incorporate arsenic into its DNA—a finding most biologists have since rejected). Others have been retracted amid allegations of fraud (consider the now-infamous 1998 Lancet paper claiming a link between vaccines and autism). Many scientists worry that requiring approval from colleagues makes it less likely that new or controversial ideas will be published. Nature’s former editor John Maddox was fond of saying that the groundbreaking 1953 DNA paper would never have made it past modern peer review because it was too speculative. ….

“Peers” – and especially since they have to be knowledgeable in the field – always have some vested interest. It could be to defend their own work, or to publicise their own work, or to gain support for their own funding, to help young researchers get published, or to hinder others. Careers can be enhanced or destroyed by aiding or preventing publication. Anonymity also means that there is no accountability for the consequences of the reviewer’s views. Inevitably nothing revolutionary that may be attacked by an influential reviewer can even be submitted for publication. And therein lies the problem with “politically correct” science.

Now with the ease of on-line publication increasing, pre-publication, anonymous peer review is obsolete and has to give way to post-publication, attributable review.

“Peer review is a regression to the mean. ….. a completely corrupt system” – Sydney Brenner

March 2, 2014
Sydney Brenner

Sydney Brenner, CH FRS (born 13 January 1927) is a biologist and a 2002 Nobel prize laureate, shared with H. Robert Horvitz and John Sulston. Brenner made significant contributions to work on the genetic code, and other areas of molecular biology at the Medical Research Council Unit in Cambridge, England.

A fascinating interview with Professor Sydney Brenner by Elizabeth Dzeng in the Kings Review.

I find his comments on academia, publishing and peer review particularly apposite. Peer review – especially where cliques of “peers” determine “correct thinking” – cannot provide sufficient room for the dissenting view, for the challenging of orthodoxy. Orthodox but incorrect views thus persist for much longer than they should. Completely new avenues are effectively blocked and ideas are stillborn.

Some extracts here but the whole conversation is well worth a read.

How Academia and Publishing are Destroying Scientific Innovation: A Conversation with Sydney Brenner

by Elizabeth Dzeng, February 24th

I recently had the privilege of speaking with Professor Sydney Brenner, a professor of Genetic medicine at the University of Cambridge and Nobel Laureate in Physiology or Medicine in 2002. ….

SB: Today the Americans have developed a new culture in science based on the slavery of graduate students. Now graduate students of American institutions are afraid. He just performs. He’s got to perform. The post-doc is an indentured labourer. We now have labs that don’t work in the same way as the early labs where people were independent, where they could have their own ideas and could pursue them.

The most important thing today is for young people to take responsibility, to actually know how to formulate an idea and how to work on it. Not to buy into the so-called apprenticeship. I think you can only foster that by having sort of deviant studies. ……..

…… I think I’ve often divided people into two classes: Catholics and Methodists. Catholics are people who sit on committees and devise huge schemes in order to try to change things, but nothing’s happened. Nothing happens because the committee is a regression to the mean, and the mean is mediocre. Now what you’ve got to do is good works in your own parish. That’s a Methodist. 

ED: …….. It is alarming that so many Nobel Prize recipients have lamented that they would never have survived this current academic environment. What is the implication of this on the discovery of future scientific paradigm shifts and scientific inquiry in general? I asked Professor Brenner to elaborate.

SB: He wouldn’t have survived. It is just the fact that he wouldn’t get a grant today because somebody on the committee would say, oh those were very interesting experiments, but they’ve never been repeated. And then someone else would say, yes and he did it a long time ago, what’s he done recently?  And a third would say, to top it all, he published it all in an un-refereed journal.

So you know we now have these performance criteria, which I think are just ridiculous in many ways. But of course this money has to be apportioned, and our administrators love having numbers like impact factors or scores. ….

……. And of course all the academics say we’ve got to have peer review. But I don’t believe in peer review because I think it’s very distorted and as I’ve said, it’s simply a regression to the mean.

I think peer review is hindering science. In fact, I think it has become a completely corrupt system. It’s corrupt in many ways, in that scientists and academics have handed over to the editors of these journals the ability to make judgment on science and scientists. There are universities in America, and I’ve heard from many committees, that we won’t consider people’s publications in low impact factor journals.

Now I mean, people are trying to do something, but I think it’s not publish or perish, it’s publish in the okay places [or perish]. And this has assembled a most ridiculous group of people.

…….. I think there was a time, and I’m trying to trace the history when the rights to publish, the copyright, was owned jointly by the authors and the journal. Somehow that’s why the journals insist they will not publish your paper unless you sign that copyright over. It is never stated in the invitation, but that’s what you sell in order to publish. And everybody works for these journals for nothing. There’s no compensation. There’s nothing. They get everything free. They just have to employ a lot of failed scientists, editors who are just like the people at Homeland Security, little power grabbers in their own sphere.

If you send a PDF of your own paper to a friend, then you are committing an infringement. Of course they can’t police it, and many of my colleagues just slap all their papers online. I think you’re only allowed to make a few copies for your own purposes. It seems to me to be absolutely criminal. When I write for these papers, I don’t give them the copyright. I keep it myself. That’s another point of publishing, don’t sign any copyright agreement. That’s my advice. I think it’s now become such a giant operation. I think it is impossible to try to get control over it back again. …….. Recently there has been an open access movement and it’s beginning to change. I think that even Nature, Science and Cell are going to have to begin to bow. I mean in America we’ve got old George Bush who made an executive order that everybody in America is entitled to read anything printed with federal funds, tax payers’ money, so they have to allow access to this. But they don’t allow you access to the published paper. They allow you I think what looks like a proof, which you can then display.

Elizabeth Dzeng is a PhD candidate conducting research at the intersection of medical sociology, clinical medicine and medical ethics at the University of Cambridge. She is also a practising doctor and a fellow in General Internal Medicine at the Johns Hopkins School of Medicine.

Science is losing its ability to self-correct

October 20, 2013

With the explosion in the number of researchers, the increasing rush to publication and the corresponding explosion in traditional and on-line journals as avenues of publication, The Economist carries an interesting article making the point that the assumption that science is self-correcting is under extreme pressure. “There is no cost to getting things wrong,” says Brian Nosek, a psychologist at the University of Virginia who has taken an interest in his discipline’s persistent errors. “The cost is not getting them published.”

The field of psychology, and especially social psychology, has been much in the news over the dangers of “priming”.

“I SEE a train wreck looming,” warned Daniel Kahneman, an eminent psychologist, in an open letter last year. The premonition concerned research on a phenomenon known as “priming”. Priming studies suggest that decisions can be influenced by apparently irrelevant actions or events that took place just before the cusp of choice. They have been a boom area in psychology over the past decade, and some of their insights have already made it out of the lab and into the toolkits of policy wonks keen on “nudging” the populace.

Dr Kahneman and a growing number of his colleagues fear that a lot of this priming research is poorly founded. Over the past few years various researchers have made systematic attempts to replicate some of the more widely cited priming experiments. Many of these replications have failed. In April, for instance, a paper in PLoS ONE, a journal, reported that nine separate experiments had not managed to reproduce the results of a famous study from 1998 purporting to show that thinking about a professor before taking an intelligence test leads to a higher score than imagining a football hooligan.

It is not just “soft” fields which have problems. It is apparent that in medicine a large number of published results cannot be replicated:

… irreproducibility is much more widespread. A few years ago scientists at Amgen, an American drug company, tried to replicate 53 studies that they considered landmarks in the basic science of cancer, often co-operating closely with the original researchers to ensure that their experimental technique matched the one used first time round. According to a piece they wrote last year in Nature, a leading scientific journal, they were able to reproduce the original results in just six. Months earlier Florian Prinz and his colleagues at Bayer HealthCare, a German pharmaceutical giant, reported in Nature Reviews Drug Discovery, a sister journal, that they had successfully reproduced the published results in just a quarter of 67 seminal studies.

The governments of the OECD, a club of mostly rich countries, spent $59 billion on biomedical research in 2012, nearly double the figure in 2000. One of the justifications for this is that basic-science results provided by governments form the basis for private drug-development work. If companies cannot rely on academic research, that reasoning breaks down. When an official at America’s National Institutes of Health (NIH) reckons, despairingly, that researchers would find it hard to reproduce at least three-quarters of all published biomedical findings, the public part of the process seems to have failed.

It is not just that research results cannot be replicated. So much of what is published is just plain wrong, and the belief that science is self-correcting is itself under pressure:

Academic scientists readily acknowledge that they often get things wrong. But they also hold fast to the idea that these errors get corrected over time as other scientists try to take the work further. Evidence that many more dodgy results are published than are subsequently corrected or withdrawn calls that much-vaunted capacity for self-correction into question. There are errors in a lot more of the scientific papers being published, written about and acted on than anyone would normally suppose, or like to think. …… Statistical mistakes are widespread. The peer reviewers who evaluate papers before journals commit to publishing them are much worse at spotting mistakes than they or others appreciate. Professional pressure, competition and ambition push scientists to publish more quickly than would be wise. A career structure which lays great stress on publishing copious papers exacerbates all these problems. “There is no cost to getting things wrong,” says Brian Nosek, a psychologist at the University of Virginia who has taken an interest in his discipline’s persistent errors. “The cost is not getting them published.” 

…… In 2005 John Ioannidis, an epidemiologist from Stanford University, caused a stir with a paper showing why, as a matter of statistical logic, the idea that only one such paper in 20 gives a false-positive result was hugely optimistic. Instead, he argued, “most published research findings are probably false.” 
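Ioannidis’s point can be made with a little arithmetic. The sketch below is my own illustration of the standard argument (not his exact model): the chance that a “significant” finding is actually true depends on the prior probability of the hypothesis and the study’s power, not just the 5% significance threshold. The numbers fed in are purely illustrative assumptions.

```python
def ppv(prior, power, alpha):
    """Positive predictive value: P(hypothesis true | significant result)."""
    true_positives = power * prior          # true hypotheses that test significant
    false_positives = alpha * (1.0 - prior) # false hypotheses that test significant anyway
    return true_positives / (true_positives + false_positives)

# If only 10% of tested hypotheses are true and studies have 50% power,
# barely half of all "significant" results are real:
print(round(ppv(prior=0.10, power=0.50, alpha=0.05), 3))  # → 0.526
```

Under those assumptions, a “significant” result is true only about half the time – nowhere near the 19-out-of-20 the 5% threshold seems to promise – and any bias or multiple testing pushes the figure lower still.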

The tendency to publish only positive results also leads to statistics being skewed so that results can be presented as positive:

The negative results are much more trustworthy; …….. But researchers and the journals in which they publish are not very interested in negative results. They prefer to accentuate the positive, and thus the error-prone. Negative results account for just 10-30% of published scientific literature, depending on the discipline. This bias may be growing. A study of 4,600 papers from across the sciences conducted by Daniele Fanelli of the University of Edinburgh found that the proportion of negative results dropped from 30% to 14% between 1990 and 2007. Lesley Yellowlees, president of Britain’s Royal Society of Chemistry, has published more than 100 papers. She remembers only one that reported a negative result.

…. Other data-heavy disciplines face similar challenges. Models which can be “tuned” in many different ways give researchers more scope to perceive a pattern where none exists. According to some estimates, three-quarters of published scientific papers in the field of machine learning are bunk because of this “overfitting”
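The “overfitting” complaint in the extract above is easy to demonstrate. This minimal sketch (my own illustration, assuming only NumPy) fits a model with as many free parameters as data points to pure noise: it reproduces the “pattern” essentially perfectly, yet predicts nothing about fresh data from the same process.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)
y = rng.normal(size=10)              # pure noise: there is no pattern to find

# A degree-9 polynomial has 10 free parameters, one per data point,
# so it can be tuned to pass (essentially) through every observation.
coeffs = np.polyfit(x, y, deg=9)
train_error = np.max(np.abs(np.polyval(coeffs, x) - y))  # tiny: noise memorised

# On fresh noise from the same process, the "discovered" pattern is useless.
y_new = rng.normal(size=10)
test_error = np.max(np.abs(np.polyval(coeffs, x) - y_new))

print(f"train error: {train_error:.1e}, test error: {test_error:.2f}")
```

The same trap awaits any analysis with many tunable knobs: enough flexibility guarantees a good fit to the data in hand, whether or not any real effect exists.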

The idea of peer review as some kind of quality check on the results being published is grossly optimistic:

The idea that there are a lot of uncorrected flaws in published studies may seem hard to square with the fact that almost all of them will have been through peer-review. This sort of scrutiny by disinterested experts—acting out of a sense of professional obligation, rather than for pay—is often said to make the scientific literature particularly reliable. In practice it is poor at detecting many types of error.

John Bohannon, a biologist at Harvard, recently submitted a pseudonymous paper on the effects of a chemical derived from lichen on cancer cells to 304 journals describing themselves as using peer review. An unusual move; but it was an unusual paper, concocted wholesale and stuffed with clangers in study design, analysis and interpretation of results. Receiving this dog’s dinner from a fictitious researcher at a made up university, 157 of the journals accepted it for publication. ….

……. As well as not spotting things they ought to spot, there is a lot that peer reviewers do not even try to check. They do not typically re-analyse the data presented from scratch, contenting themselves with a sense that the authors’ analysis is properly conceived. And they cannot be expected to spot deliberate falsifications if they are carried out with a modicum of subtlety.

Fraud is very likely second to incompetence in generating erroneous results, though it is hard to tell for certain. 

And then there is the issue that results from Big Science can never be replicated, because the cost of the initial work is so high. Medical research and clinical trials are also extremely expensive. Journals have no great interest in publishing replications (even when they are negative). And, to compound the issue, those who provide funding are less likely to extend funding merely for replication or for negative results.

People who pay for science, though, do not seem seized by a desire for improvement in this area. Helga Nowotny, president of the European Research Council, says proposals for replication studies “in all likelihood would be turned down” because of the agency’s focus on pioneering work. James Ulvestad, who heads the division of astronomical sciences at America’s National Science Foundation, says the independent “merit panels” that make grant decisions “tend not to put research that seeks to reproduce previous results at or near the top of their priority lists”. Douglas Kell of Research Councils UK, which oversees Britain’s publicly funded research argues that current procedures do at least tackle the problem of bias towards positive results: “If you do the experiment and find nothing, the grant will nonetheless be judged more highly if you publish.” 

Trouble at the lab 

The rubbish will only decline when there is a cost to publishing shoddy work which outweighs the gains of adding to a researcher’s list of publications. At some point researchers will need to be held liable and accountable for their products (their publications). Not just for fraud or misconduct but even for negligence or gross negligence when they do not carry out their work using the best available practices of the field. These are standards that some (but not all) professionals are held to and there should be no academic researcher who is not also subject to such a standard. If peer-review is to recover some of its lost credibility then anonymous reviews must disappear and reviewers must be much more explicit about what they have checked and what they have not.

Mathematical genius?

June 4, 2013

Retraction Watch reports on the retraction of a paper by a Turkish mathematician for plagiarism. The author did not agree with the retraction.

But what struck me was the track record of this amazing Assistant Professor at Ege University.

Ahmet Yildirim Assistant Professor, Ege University, Turkey

Editorial Board Member of International Journal of Theoretical and Mathematical Physics

  • 2009  Ph.D.  Applied Mathematics, Ege University (Turkey)
  • 2005  M.Sc.  Applied Mathematics, Ege University (Turkey)
  • 2002  B.Sc.  Mathematics, Ege University (Turkey)

Since 2007 he has a list of 279 publications!

That’s an impressive rate of more than 40 publications per year. Prolific would be an understatement.

All peer reviewed no doubt.

Peer review for funding is different to that for publication

May 8, 2013

I note that battle lines are being drawn in the US between the parties concerning peer review and the NSF. The Republicans are questioning a number of NSF grants and demanding some justification of the review process for funding awards. The Democrats are taking this as a heretical attack on SCIENCE. But I also note that one important distinction is not being drawn.

Choosing projects for funding from the public purse is fundamentally a political process and requires justification in simple terms to the providers of that funding (the taxpayer). While peer review – for all its faults – may be used to select projects the reviewers cannot escape the responsibility to justify their selections to the funders (and not just to the funding organisation – NSF – set up to channel the funds). Of course the NSF would prefer that they have complete freedom in disbursing the funds allocated to them in any way they please – but that won’t wash. The acceptance of public funds demands public accountability.

Peer review for publication is a very different thing. It should be – in engineering terms – a “quality gate”: a check of the quality of the work done and of its independence. But here reviewers also carry a “fiduciary” responsibility which is not always met. The reviewers carry an obligation of trust and ethical propriety not only to the journals they serve but also to the readers and subscribers of those journals. Where funding is involved this “fiduciary” responsibility extends to the providers of the funds. Unlike reviewers for funding selection, who – I think – must be able to justify their choices to a wider audience than the “in-crowd”, the publication reviewer does not need to provide explanations for his opinions. But his opinions cannot be secret opinions – and that requires that such reviewers not be anonymous and that their opinions be available. Journal editors have the final responsibility for what is published or not, but reviewers should not escape being held responsible and accountable for their share of such decisions. They cannot escape ownership of – and the consequences of – their own opinions and judgements on which decisions to publish or reject may be based.

Financial auditors cannot escape their fiduciary responsibilities (though they often escape accountability). Can the scientific community continue to take – or appear to take – less responsibility than the financial community? Accountability is quite another matter.

ScienceInsider: 

The new chair of the House of Representatives science committee has drafted a bill that, in effect, would replace peer review at the National Science Foundation (NSF) with a set of funding criteria chosen by Congress. For good measure, it would also set in motion a process to determine whether the same criteria should be adopted by every other federal science agency.

The legislation, being worked up by Representative Lamar Smith (R-TX), represents the latest—and bluntest—attack on NSF by congressional Republicans seeking to halt what they believe is frivolous and wasteful research being funded in the social sciences. Last month, Senator Tom Coburn (R-OK) successfully attached language to a 2013 spending bill that prohibits NSF from funding any political science research for the rest of the fiscal year unless its director certifies that it pertains to economic development or national security. Smith’s draft bill, called the “High Quality Research Act,” would apply similar language to NSF’s entire research portfolio across all the disciplines that it supports.

Nature: 

In a brief 15-minute speech today, US President Barack Obama championed independence for the peer-review process, in front of an audience of elite researchers at the 150th annual meeting of the National Academy of Sciences in Washington DC.

“In order for us to maintain our edge, we’ve got to protect our rigorous peer review system,” Obama said. His support comes on the heels of draft legislation, dated 18 April, that ScienceInsider reports is being discussed by the chairman of the US House of Representatives Science Committee, Lamar Smith (Republican, Texas). That legislation would overhaul peer review of grants submitted to the National Science Foundation (NSF) and require the NSF director to certify each funded project as benefitting the economic or public health of the United States.

Another warming hockey stick is withdrawn/”put-on-hold” for bad data

June 9, 2012

One would think that after Climategate, climate scientists would be a little more careful with their “trickery”.

When a supposedly peer-reviewed paper in the American Meteorological Society Journal is withdrawn / "put on hold" after publication, because the on-line community (Jean S / Steve McIntyre) found that the authors had cherry-picked and improperly "massaged" their data, it says two things:

  1. that the peer review process at the AMS is either incompetent or corrupt (in that it is especially friendly to papers propounding the global warming orthodoxy), and
  2. that the "tricks" revealed by Climategate are still being actively used by so-called climate scientists to support their beliefs.

That one of the authors – probably the one responsible for this cock-up – a Joelle Gergis from the University of Melbourne, is more an "activist" than a "scientist" does not help matters. Going through the abstracts of her list of publications suggests that she often decides on her conclusions first and then selects data and writes her papers to fit those conclusions. Cherry-picking data is bad enough, but when it is done because of confirmation bias it is perhaps the most insidious form of scientific misconduct there is.

Interestingly:

joellegergis.wordpress.com is no longer available.

The authors have deleted this blog.

The AMS Journal “peers” who reviewed this paper don’t come out of this very well either. But of course they will receive no strictures for a job done badly.

Sources:

Gergis et al “Put on Hold”

American Meteorological Society withdraws Gergis et al paper on proxy temperature reconstruction after post peer review finds fatal flaws

Gergis paper disappears

Another Hockey Stick broken

Misuse of peer review by UK Research Councils leads to mediocrity

September 14, 2011

The 7 UK Research Councils are publicly-funded agencies responsible for funding most research in the UK. They have often been criticised for being much too "establishment"-driven, such that any line of research considered heretical is starved of funding. Donald W. Braben is honorary professor in the department of earth sciences, University College London, and is known for his support for academic freedom and "blue-skies" research. In an article in The Times Higher Education Supplement, he comes down hard on the research councils and their use of "peer review". He argues that they inherently discourage any "pioneering" research and drive research towards mediocrity.

Until about 1970, academic researchers were usually given modest funds to use as they pleased. This apparent profligacy led to a prodigious harvest of unpredicted discoveries and huge stimulants to economic growth. ……. 

It is said that peer review is like democracy: it’s not the best but it’s the best we know. But science is not democratic. One doubtful scientist can be right while 100 convinced colleagues can be wrong. Indeed, the physicist Richard Feynman once defined science as “the belief in the ignorance of experts”. Specifically, peer review of grant applications, or peer “preview”, is inimical to radically new ideas. Today, however, the all-powerful peer-preview bureaucracy is the determinant of excellence. It is taboo even to criticise it. So the natural inclination to oppose major challenges to the status quo has become institutionalised. For radical research, one can argue that “the best we know” has become the worst. 

“Independent expert peer review” is contradictory. One submits a proposal and the councils ask experts to assess it. But these experts are likely to include proposers’ closest competitors, even if they are selected internationally, because science is global – and real pioneers have no peers, of course. How then can the councils ensure that reviews are independent? To make matters worse, these experts can pass judgement anonymously: applicants don’t know who put the boot in.

I suggest that the misuse of peer review is at the heart of the research councils’ problems. Before about 1970, they largely restricted its use to the assessment of applications for large grants or expensive equipment. Scientific leaders protected the seed corn, ensuring that young scientists could launch radical challenges if they were sufficiently inspired, dedicated and determined. Today, the experts whose ignorance they would challenge might also influence their chances of funding. ………

….. The research councils are taking UK research down pathways to mediocrity and using peer review as justification. We – the academic community – must stop them, or accept the dire consequences.

Read the whole article

Political goals distort the science done by the US National Parks Service

September 13, 2011

This is not, of course, the first time that slanted and pre-determined conclusions suiting a political agenda have been drawn from supposedly "rigorously peer-reviewed" research. Peer review carried out correctly is no doubt very effective, but it also always discourages the non-establishment view. And if the establishment has a preconceived "belief", then any views dissenting from that orthodoxy are easy to suppress.

ABC reports:

There are new allegations of scientific misconduct being directed at the National Park Service. A park service study claims an oyster farm in the Point Reyes National Seashore is harming wildlife, but there are disturbing new questions about the science behind that study. 

The Drakes Bay Oyster Company has been at Point Reyes since the 1930s, but the National Park Service says it must close in 2012 in order to return it back to wilderness. The park service released a study in April claiming to have evidence the oyster farm is a threat to harbor seals, driving them out of their home in Drakes Estero. However, an independent analysis by outside experts shows that evidence is slanted to make the oyster farm look bad.

Addendum (21st September 2011)

It seems (not yet confirmed) that the paper in question is "Modeling the effects of El Niño, density-dependence, and disturbance on harbor seal (Phoca vitulina) counts in Drakes Estero, California: 1997–2007" by Becker, Press and Allen, Marine Mammal Science, 25(1): 1–18 (January 2009), Society for Marine Mammalogy, DOI: 10.1111/j.1748-7692.2008.00234.x

I think the problematic paragraph could be this one in the Results section:

Disturbance rates in the upper estero (subsites OB, UEF, UEN) significantly increased with oyster harvest (rs = 0.55, P < 0.03) (Fig. 2B). This correlation is highly robust to sample size. For example, there was still a significant positive correlation (rs = 0.53, P < 0.04) of disturbance rate with oyster harvest even when removing the 2006 disturbance, four of the 2007 disturbances (including two disturbances on 1 day in 2007 that the mariculture company challenged), and four of the 1996 disturbances (nine total) from the analysis. Similarly, oyster harvest levels in years with oyster-related disturbances were significantly higher (U = 43, n = 13, P(1-tail) < 0.04).
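As an aside, the rs values quoted are Spearman rank correlations. A minimal sketch of how such a statistic is computed – and of the kind of "robustness check" the paper describes, re-running the correlation with some points removed – using entirely hypothetical harvest and disturbance numbers (not the paper's data):

```python
def rank(values):
    """Assign average ranks (1-based), handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rs = Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical illustration only: harvest levels vs. disturbance counts per year
harvest = [10, 12, 15, 20, 22, 25, 30]
disturb = [1, 1, 2, 2, 3, 4, 5]
rs = spearman(harvest, disturb)                       # ≈ 0.98
rs_dropped = spearman(harvest[:-1], disturb[:-1])     # still strongly positive
```

The check the authors report – removing contested points and seeing whether rs stays significant – is exactly the `rs_dropped` step; the dispute is over whether the points that were kept were themselves cherry-picked.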

The independent study itself seems to have been done by heavyweights in the world of science led by Corey S Goodman:

“This is a published paper, it’s publicly available, it’s been supported by taxpayer dollars, it’s done by government scientists,” said biologist Corey Goodman, Ph.D. Goodman is a member of the National Academy of Sciences and he has published more than 200 scientific papers. He was asked by a Marin County supervisor in 2007 to look into how the park was conducting scientific research and he’s been pouring over data ever since. ……. 

It took the National Park Service three months to hand over their data to Goodman. When he finally got it, he shared it with statisticians at Stanford and U.C. Davis to see if they could replicate the results. “And what I find is that none of the conclusions in the paper are valid,” said Goodman. ……That’s why Goodman is charging the park service with distorting science to fit their ultimate goal of closing the oyster farm. 

Further details of Dr. Goodman’s charges of scientific misconduct are here.

The author of the Parks Service paper seems to have gone into hiding and the Parks Service is in a defensive mode.

ABC7 wanted to hear from the park service scientist who wrote the study, Dr. Ben Becker, director of the Pacific Coast Science and Learning Center at Point Reyes National Seashore. We asked the park service for an interview, left messages for Becker, and sent emails, but never heard back. We even went to his house to get answers, but Becker refused to answer our questions.

Park service spokesman Melanie Gunn told us in an email that Becker’s paper “went through a rigorous peer review process.”

But merely invoking peer review – which is notoriously patchy in its quality and which often ends up as "pal-review" – is unlikely to be enough in this case.

Goodman’s concerns were still enough to raise the interest of Sen. Dianne Feinstein, D-California. The senator has asked the Marine Mammal Commission to do an independent review of the park service study and now she wants the park service to delay its environmental impact statement on the oyster farm until after that review. She sent a letter to Interior Secretary Ken Salazar.

In the letter, Feinstein says: “I fear that if the Department of Interior does not stand behind the independent analysis, it will be another example of a lack of credibility at Point Reyes National Seashore.”

The park service says it is cooperating with the review but still plans to release its report this month, adding that “Dr. Becker continues to work with the Marine Mammal Commission on any remaining questions the Commission may have.”

Related: Peer review and the corruption of science

