Archive for September, 2011

Misuse of peer review by UK Research Councils leads to mediocrity

September 14, 2011

The seven UK Research Councils are publicly funded agencies responsible for funding most research in the UK. They have often been criticised for being far too “establishment”-driven, such that any line of research considered heretical is starved of funding. Donald W. Braben, an honorary professor in the Department of Earth Sciences at University College London, is known for his support of academic freedom and “blue-skies” research. In an article in The Times Higher Education Supplement, he comes down hard on the research councils and their use of “peer review”, arguing that they inherently discourage any “pioneering” research and drive towards mediocrity.

Until about 1970, academic researchers were usually given modest funds to use as they pleased. This apparent profligacy led to a prodigious harvest of unpredicted discoveries and huge stimulants to economic growth. ……. 

It is said that peer review is like democracy: it’s not the best but it’s the best we know. But science is not democratic. One doubtful scientist can be right while 100 convinced colleagues can be wrong. Indeed, the physicist Richard Feynman once defined science as “the belief in the ignorance of experts”. Specifically, peer review of grant applications, or peer “preview”, is inimical to radically new ideas. Today, however, the all-powerful peer-preview bureaucracy is the determinant of excellence. It is taboo even to criticise it. So the natural inclination to oppose major challenges to the status quo has become institutionalised. For radical research, one can argue that “the best we know” has become the worst. 

“Independent expert peer review” is contradictory. One submits a proposal and the councils ask experts to assess it. But these experts are likely to include proposers’ closest competitors, even if they are selected internationally, because science is global – and real pioneers have no peers, of course. How then can the councils ensure that reviews are independent? To make matters worse, these experts can pass judgement anonymously: applicants don’t know who put the boot in.

I suggest that the misuse of peer review is at the heart of the research councils’ problems. Before about 1970, they largely restricted its use to the assessment of applications for large grants or expensive equipment. Scientific leaders protected the seed corn, ensuring that young scientists could launch radical challenges if they were sufficiently inspired, dedicated and determined. Today, the experts whose ignorance they would challenge might also influence their chances of funding. ………

….. The research councils are taking UK research down pathways to mediocrity and using peer review as justification. We – the academic community – must stop them, or accept the dire consequences.

Read the whole article

Political goals distort the science done by the US National Park Service

September 13, 2011

This is not the first time, of course, that slanted and pre-determined conclusions suiting a political agenda have been drawn from supposedly “rigorously peer-reviewed research”. Peer review carried out correctly is no doubt very effective, but it inherently discourages the non-establishment view. And if the establishment has a preconceived “belief”, then any views dissenting from that orthodoxy are easy to suppress.

ABC reports:

There are new allegations of scientific misconduct being directed at the National Park Service. A park service study claims an oyster farm in the Point Reyes National Seashore is harming wildlife, but there are disturbing new questions about the science behind that study. 

The Drakes Bay Oyster Company has been at Point Reyes since the 1930s, but the National Park Service says it must close in 2012 in order to return it back to wilderness. The park service released a study in April claiming to have evidence the oyster farm is a threat to harbor seals, driving them out of their home in Drakes Estero. However, an independent analysis by outside experts shows that evidence is slanted to make the oyster farm look bad.

Addendum (21st September 2011)

It seems (not yet confirmed) that the paper in question is “Modeling the effects of El Niño, density-dependence, and disturbance on harbor seal (Phoca vitulina) counts in Drakes Estero, California: 1997–2007” by Becker, Press and Allen, Marine Mammal Science, 25(1): 1–18 (January 2009), Society for Marine Mammalogy, DOI: 10.1111/j.1748-7692.2008.00234.x

I think the problematic paragraph could be this one in the Results section:

Disturbance rates in the upper estero (subsites OB, UEF, UEN) significantly increased with oyster harvest (rs = 0.55, P < 0.03) (Fig. 2B). This correlation is highly robust to sample size. For example, there was still a significant positive correlation (rs = 0.53, P < 0.04) of disturbance rate with oyster harvest even when removing the 2006 disturbance, four of the 2007 disturbances (including two disturbances on 1 day in 2007 that the mariculture company challenged), and four of the 1996 disturbances (nine total) from the analysis. Similarly, oyster harvest levels in years with oyster related disturbances were significantly higher (U = 43, n = 13, P1−tail < 0.04).
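For readers not fluent in the statistics being quoted: rs is Spearman’s rank correlation, and the paper’s “robustness” claim amounts to recomputing that correlation after dropping disputed observations. A minimal sketch of such a check, using made-up numbers rather than the paper’s data, might look like this:

```python
# Illustrative only: a rank-correlation robustness check of the kind
# described in the paper, run on made-up (hypothetical) numbers.

def ranks(xs):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical yearly oyster-harvest levels and seal-disturbance rates:
harvest = [10, 14, 9, 20, 25, 18, 30, 28, 35, 40, 33]
disturb = [0.1, 0.3, 0.2, 0.4, 0.5, 0.3, 0.7, 0.5, 0.8, 0.9, 0.6]

rho_full = spearman_rho(harvest, disturb)
# Robustness check: drop a contested year and recompute.
rho_drop = spearman_rho(harvest[:-1], disturb[:-1])
print(f"rho (all years) = {rho_full:.2f}, rho (dropped) = {rho_drop:.2f}")
```

Spearman’s rho is just the Pearson correlation of the ranks, so it measures only whether disturbance tends to rise with harvest, not by how much; that is why dropping a few contested observations, as the paper says it did, is a natural stress test.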

The independent study itself seems to have been done by heavyweights in the world of science, led by Corey S. Goodman:

“This is a published paper, it’s publicly available, it’s been supported by taxpayer dollars, it’s done by government scientists,” said biologist Corey Goodman, Ph.D. Goodman is a member of the National Academy of Sciences and he has published more than 200 scientific papers. He was asked by a Marin County supervisor in 2007 to look into how the park was conducting scientific research and he’s been pouring over data ever since. ……. 

It took the National Park Service three months to hand over their data to Goodman. When he finally got it, he shared it with statisticians at Stanford and U.C. Davis to see if they could replicate the results. “And what I find is that none of the conclusions in the paper are valid,” said Goodman. ……That’s why Goodman is charging the park service with distorting science to fit their ultimate goal of closing the oyster farm. 

Further details of Dr. Goodman’s charges of scientific misconduct are here.

The author of the Park Service paper seems to have gone into hiding, and the Park Service is in defensive mode.

ABC7 wanted to hear from the park service scientist who wrote the study, Dr. Ben Becker, director of the Pacific Coast Science and Learning Center at Point Reyes National Seashore. We asked the park service for an interview, left messages for Becker, and sent emails, but never heard back. We even went to his house to get answers, but Becker refused to answer our questions.

Park service spokesman Melanie Gunn told us in an email that Becker’s paper “went through a rigorous peer review process.”

But merely invoking peer review, which is notoriously patchy in its quality and often ends up as “pal review”, is unlikely to be enough in this case.

Goodman’s concerns were still enough to raise the interest of Sen. Dianne Feinstein, D-California. The senator has asked the Marine Mammal Commission to do an independent review of the park service study and now she wants the park service to delay its environmental impact statement on the oyster farm until after that review. She sent a letter to Interior Secretary Ken Salazar.

In the letter, Feinstein says: “I fear that if the Department of Interior does not stand behind the independent analysis, it will be another example of a lack of credibility at Point Reyes National Seashore.”

The park service says it is cooperating with the review but still plans to release its report this month, adding that “Dr. Becker continues to work with the Marine Mammal Commission on any remaining questions the Commission may have.”

Related: Peer review and the corruption of science

Hellesö apologises – sort of – and only further antagonises his Flashback nemesis

September 13, 2011

UPDATE! Hellesö says on his website that he has removed 96 pictures – presumably all manipulated. 93 were pictures of lynxes, 2 of badgers and one of a raccoon dog. 

Terje Hellesö, an award-winning nature photographer, has been revealed to be a massive fraud and a cheat. Many of his photographs have been manipulated, with stock images of wildlife from the internet inserted into landscapes with many “artistic” effects. The skilful detective work in finding his manipulations and the sources of the original images has come from the on-line community of the Swedish Flashback forum. They have also put together a web site where all the manipulations discovered have been posted. At least 20 pictures have been manipulated and another 10 or so are suspicious. But Hellesö has removed all his old images from his web-site, and it is unclear how many images he may have manipulated and when he first started his fraudulent career. It seems to go back at least 3 or 4 years, and it could be that the start of his manipulations coincided with his use of digital images or perhaps with his learning how to use Photoshop. A new vocabulary has emerged for the fraudulent manipulation of images, based on his name: a “terjade” picture has now become an accepted word-form!!!

The Swedish Environmental Protection Agency, which had named him Nature Photographer of the Year 2010, has now withdrawn the award since its jury has come to the conclusion that some of his photographs even before 2010 were probably manipulated. Their press release (in Swedish) is here. However, they are not asking him to return the prize money (about 15,000 kronor or $2,500) because they have no regulations about what to do when a prize is withdrawn. But, as some of the Flashback readers have pointed out, there should then be no hindrance in asking for the prize money to be returned precisely because there are no governing regulations.

But now Hellesö has posted a long, rambling, self-serving, self-pitying sort of apology on his website – which seems to be no apology at all but instead a form of damage control and an attempt to take charge of the narrative and to resurrect himself. He does not reveal how many pictures he has manipulated – perhaps thousands – and when he started his nefarious career. He does not offer to recompense the thousands who paid the expensive fees he charged for attending his photography courses. He does not offer to compensate the organisations who paid him dearly for including his images in exhibitions and publications and often lost money in their enterprises. He does not apologise for the heart-rending stories he invented from thin air about the circumstances surrounding his encounters with the imaginary wildlife that he was supposed to have photographed.

His “apology” is much too long, too badly written and much too self-serving to be reproduced or translated in full. But there are some sections which reveal his intentions quite clearly and that his remorse is no more than a micron or two deep.

I ask for pardon because I made a number of photomontages in which I gave you all a very different picture than the reality I was trying to convey. This I will never ever repeat in the same way. If in the future I manipulate images, I will reveal exactly how I do so. I never ever again will have a desire to cheat anyone. Or of having to lie to hide the truth. …. I hope with all my heart that you can forgive me, and that maybe I will come to get a second chance from you. …

This raises the question of whether he is admitting that his previous manipulations were intended to cheat people (as they did).

I will come to speak publicly about this in different places, in ways you will discover later. I will share much of that here in this blog as well. 

Indeed!! A new “show and tell” career perhaps.

I would also say sorry to the Environmental Protection Agency who gave me the award “Nature Photographer 2010”. The pressure that you had to endure in this is not something I would have wished for you. I know you did not know anything about my current lynx project when you gave me this award, and I have read your reasoning numerous times. I understand that you now choose to withdraw the award from me, but I will keep in mind your justification (for the award) for myself so that I can draw some strength for tomorrow. Over time, I hope that my pictures – including here on this blog – will be some form of redress for the choice you originally made.

He promises that his future pictures will be available for scrutiny and for expert comments which will be published on his blog. It seems to be just an attempt to create a way for his fan club to post nice things about him.

He clearly sees his resurrection – phoenix-like – from the ashes of his present career. He has been accused of being a narcissist, an ego-maniac, and much worse. But his “apology” has only served to antagonise the on-line community even further, and they are now mobilised and energised to scrutinise everything he has ever done.

But to me he sounds like Tricky Dicky did in 1974 – and I am old enough to remember his self-serving TV performances! An attempt to control the narrative of his own demise.

“Where no man has gone before” – 50 new exoplanets discovered

September 12, 2011

Fifty New Exoplanets Discovered by HARPS

Astronomers using ESO’s world-leading exoplanet hunter HARPS have today announced a rich haul of more than 50 new exoplanets, including 16 super-Earths, one of which orbits at the edge of the habitable zone of its star. By studying the properties of all the HARPS planets found so far, the team has found that about 40% of stars similar to the Sun have at least one planet lighter than Saturn.

The HARPS spectrograph on the 3.6-metre telescope at ESO’s La Silla Observatory in Chile is the world’s most successful planet finder. The HARPS team, led by Michel Mayor (University of Geneva, Switzerland), today announced the discovery of more than 50 new exoplanets orbiting nearby stars, including sixteen super-Earths. This is the largest number of such planets ever announced at one time. The new findings are being presented at a conference on Extreme Solar Systems where 350 exoplanet experts are meeting in Wyoming, USA.

“The harvest of discoveries from HARPS has exceeded all expectations and includes an exceptionally rich population of super-Earths and Neptune-type planets hosted by stars very similar to our Sun. And even better — the new results show that the pace of discovery is accelerating,” says Mayor.

In the eight years since it started surveying stars like the Sun using the radial velocity technique, HARPS has been used to discover more than 150 new planets. About two thirds of all the known exoplanets with masses less than that of Neptune were discovered by HARPS. These exceptional results are the fruit of several hundred nights of HARPS observations.

Working with HARPS observations of 376 Sun-like stars, astronomers have now also much improved the estimate of how likely it is that a star like the Sun is host to low-mass planets (as opposed to gaseous giants). They find that about 40% of such stars have at least one planet less massive than Saturn. The majority of exoplanets of Neptune mass or less appear to be in systems with multiple planets.
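The “about 40% of 376 stars” figure is at bottom a binomial proportion estimate, and its uncertainty can be gauged with a standard confidence interval. A rough sketch (the count of 150 hosts is back-calculated from the quoted 40%, not taken from the HARPS paper itself):

```python
# Illustrative only: a Wilson score interval for a binomial proportion,
# applied to a host count inferred from the press release's "~40% of 376".
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

lo, hi = wilson_interval(150, 376)  # ~40% of 376 Sun-like stars
print(f"planet-hosting fraction: 0.40 (95% CI {lo:.2f} to {hi:.2f})")
```

Even with nearly 400 stars, a proportion like this carries an uncertainty of several percentage points either way, which is why the press release hedges with “about 40%”.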

Read the full European Southern Observatory Press Release

Artist’s impression of one of more than 50 new exoplanets found by HARPS: the rocky super-Earth HD 85512 b: image ESO

The VW Glass factory in Dresden

September 12, 2011

I had the pleasure of visiting VW’s glass factory in Dresden in 2008 when I was living in Görlitz.

A most impressive factory. We had clean suits on and shoe coverings to visit the assembly line on a wood panelled factory floor!!! Since I was driving a rental VW Phaeton at the time I seemed to get special treatment – but perhaps it was just the same fantastic treatment that all visitors got. It was a specially organised visit and we had dinner in the glass atrium after the tour.

The Phaeton is a lovely car but I have to admit that I don’t drive a Phaeton any more and I still prefer my Mercedes as being better value for money.

With thanks to Frizztext, from whose site I got the video.

Dutch social psychologist sacked for faking data over a “prolonged period”

September 12, 2011

On September 7th, Tilburg University officially suspended Diederik Stapel, who heads the Tilburg Institute for Behavioral Economics Research. University Rector Philip Eijlander said that Stapel had admitted to using faked data and said that he would not be allowed to return.

Diederik Stapel

Stapel’s homepage on the Tilburg University website has been removed “by the administrator”.

Mark van Vugt is a Dutch evolutionary psychologist who holds a professorship in psychology at VU University (Vrije Universiteit) Amsterdam, with affiliate positions at the Institute for Cognitive and Evolutionary Anthropology at the University of Oxford, UK, and at the University of Kent, UK. He writes about his colleague Diederik Stapel in Psychology Today:

After the high profile case of Marc Hauser, the Harvard psychologist found guilty of serious scientific misconduct there is the recent case of my colleague, Diederik Stapel, a social psychology professor in the Netherlands who has been suspended by his university after admitting to have fabricated experimental data over a prolonged period.

The extent of his fraud is yet unclear but it has produced shock waves among the international social psychology community.

Stapel was the poster boy of Dutch social psychology, having published in the major psychology journals, and receiving various grants and prestigious awards for his research on social cognition and stereotyping. In a recent article published in Science, he and his colleagues showed that in a messy environment (a dirty railway station) White participants were more prejudiced against a Black person. The authenticity of these results is now being investigated…

The Science article that is being investigated is Coping with Chaos: How Disordered Contexts Promote Stereotyping and Discrimination by Diederik A. Stapel and Siegwart Lindenberg, Science 8 April 2011: Vol. 332 no. 6026 pp. 251-253 DOI: 10.1126/science.1201068

But this is not the only article being investigated and there may be a rash of retractions to come.

Science Insider writes:

A Dutch social psychologist whose eye-catching studies about human behavior were fodder for columnists and policy makers has lost his job after his university concluded that some of the data in those studies were fabricated.

Tilburg University today officially suspended Diederik Stapel, who heads the Tilburg Institute for Behavioral Economics Research. But in a TV interview today, university Rector Philip Eijlander said that Stapel had admitted to using faked data and said that he would not be allowed to return.

Stapel has worked at the university, located in southern Netherlands, since 2006. He is known as a prolific researcher and a successful fundraiser. His studies appeared to offer new insights into the workings of the human mind; for instance, a Science paper published in April showed that people are more likely to stereotype or discriminate in messy environments.

In the TV interview, Eijlander says he was first contacted on 27 August by “junior researchers” in Stapel’s lab who alleged that his conduct was fraudulent. Stapel immediately admitted that there was “something strange” in his papers, Eijlander says, and “yesterday, he told me that there are faked data.” The university has asked Willem Levelt, a psycholinguist and former president of the Royal Netherlands Academy of Arts and Sciences, to lead a panel investigating the extent of the alleged fraud. Eijlander says that all “tainted papers” will be retracted.

As to the whistleblowers, Eijlander told the television interviewer that “I have a lot of respect for them, because they found it very difficult.”

Just last week, Stapel made headlines with a press release claiming that thinking of eating meat makes people “more boorish” and less social. The announcement, which said that “meat brings out the worst in people,” raised eyebrows because the study hadn’t yet been written up, let alone published.

Roos Vonk, a psychologist at Radboud University Nijmegen and a collaborator on the study, wrote on her blog today that she believes the latest study is likely among those based on fabricated data. She writes that her conclusion is based on the fact that, although the results had been collected by Stapel’s group, “when we discussed [them], I thought it was odd that Diederik didn’t mention the name of his assistant.” But at the time, she writes, the possibility of fraud didn’t occur to her.

Roos Vonk writes further as she apologises on her blog

I regret very much that this has happened and I will do everything I can so that trust in the scientific work within social psychology will recover. It is conceivable that this extensive lapse by a few colleagues affects the reputation of our entire profession. I understand that it can work that way, but I want to stress that this is a single exception, probably much more shocking and shameful for me and my colleagues than for outsiders, because we are all imbued in our education with the importance of integrity.

An interesting UPDATE from Retraction Watch:

An alert Retraction Watch reader has pointed us to a 1999 paper by Stapel with the impossibly ironic title: “Framed and misfortuned: identity salience and the whiff of scandal.”

In the article, which appeared in the European Journal of Social Psychology, Stapel and two colleagues reported the results of a survey they’d conducted of Dutch psychologists in the wake of a major plagiarism scandal involving an unidentified Dutch clinical psychologist (“we decided to use neither the name of the person who was accused of plagiarism nor the university to which he was affiliated,” they wrote).

Put briefly, the researchers claimed to have found (rather unsurprisingly) that how psychologists identified themselves professionally dictated how strongly they were affected personally by the scandal. Money quote:

Whether social psychologists view an article about a plagiarist clinical psychologist as relevant or irrelevant to the self may thus be determined by whether their social identity is narrowly defined (‘social psychologists’), so as to exclude the plagiarist, or broadly defined (‘psychologists’), to include the plagiarist.

Stapel’s group also showed that psychologists from the accused’s own university felt the shame of his alleged misdeeds more than those from other institutions.

And from what Roos Vonk has written it would seem that his collaborators indeed feel a stronger sense of shame than others.

It would seem that much of the research by Diederik Stapel will now be investigated, and a number of his papers are likely to be retracted, in addition to the Science paper which is already under investigation.

I wonder whether cognitive psychology is particularly subject to the faking of data – possibly because faking is relatively easy when the data are so often subjective and so little of it is required to be reproducible or quantitative.

The Heidelberg affidavit: German Universities take action to prevent PhD fraud

September 12, 2011

I have long felt that the work of researchers and scientists cannot and should not be devoid of liability (whether criminal or civil) in cases of scientific misconduct or fraud. Recently two University of Toronto law professors argued that medical ghostwriting, in which medical or pharmaceutical companies finance the writing of favourable, peer-reviewed scientific articles, should be considered fraud and liable as such.

Now, after the retraction of a spate of PhDs awarded to German politicians, the academic community is acting to protect the reputation and the value of its PhDs. Heidelberg University and Bonn University – among others – are tightening their regulations. The NY Times reports:

The plagiarism scandals that rocked the political world in Germany this year have led to a period of soul-searching among academics and researchers around the country. They have also prompted calls for stricter controls at German universities. …. After several cases in which doctoral theses were described as using unattributed material from earlier works — the most prominent of which pushed Karl-Theodor zu Guttenberg to resign as defense minister — German universities have questioned the way doctoral candidates are tested. Some academics insist that the system is generally sound, pointing out that in the half-dozen high-profile cases where plagiarism was found, the doctoral degree was ultimately retracted.

… at the University of Bonn, which in July retracted the doctoral title of Jorgo Chatzimarkakis, a member of the European Parliament, the university will publish extensive and explicit guidelines so that doctoral students know exactly what is expected.

Heidelberg University, which in June formally retracted the doctorate of Silvana Koch-Mehrin, a member of the European Parliament, announced in August that it would begin demanding that doctoral students sign a legally binding affidavit, attesting original authorship. Signing a false statement on such an affidavit can prompt legal action in the local courts, which can lead to a fine and even to a prison sentence of up to three years under the German penal code.

Professor Thomas Pfeiffer, speaking for the university, said the threat of possible legal action, in addition to the embarrassment of a retracted doctorate, would act as a further deterrent.

Faculties at the University of Bonn, Heidelberg University and the University of Bayreuth have all retracted doctorates after internal commissions determined that students-turned-politicians had plagiarized. They are demanding that all doctoral theses be submitted as an electronic copy, to help spot-checking with plagiarism-detection software, a step considered just as important as a deterrent for would-be plagiarists as it is a detection mechanism.
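The spot-checking with plagiarism-detection software mentioned above typically rests on comparing overlapping word n-grams (“shingles”) between a thesis and candidate source texts. A toy sketch of that idea – not any specific product’s algorithm – might look like this:

```python
# Illustrative only: n-gram "shingle" overlap, the basic primitive behind
# most text-similarity spot checks used in plagiarism detection.

def shingles(text, n=3):
    """Set of n-word shingles from a text (lower-cased)."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets (0 = disjoint, 1 = identical)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Two hypothetical passages, one lightly reworded from the other:
thesis = "the quick brown fox jumps over the lazy dog near the river"
source = "a quick brown fox jumps over the lazy dog by the river bank"

score = jaccard(shingles(thesis), shingles(source))
print(f"shingle overlap: {score:.2f}")
```

A high overlap score flags a passage pair for human review; it proves nothing by itself, which is why the universities treat the software as a spot-checking aid rather than a verdict.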

Read the whole article

The Heidelberg affidavit seems a relatively simple and effective way to go. It is pre-emptive and should act as a deterrent without being oppressive. Of course one would wish scientific research to be carried out in an open atmosphere which is not clouded by suspicion. But since the rewards of scientific misconduct – whether academic or political advancement or monetary gain – can be very high, suspicion and rivalry will remain unless a system of liability is introduced. This would not only create accountability but would also encourage the taking of responsibility for one’s own work. In fact, if scientists and researchers automatically bear a certain liability for the integrity (not the quality) of their work, then an open atmosphere could actually be promoted.

I see no reason why an extension of the “Heidelberg affidavit” could not be applied to all research workers regarding the integrity of their work and be an integral part of any employment contract.

Chang’e 2 is now “liberated” from earth and lunar gravity

September 11, 2011

China’s lunar probe Chang’e 2 completed its mission orbiting the moon three months ago and has now reached the Lagrange (libration) point L2.

It has now reached a point in space where the gravitational pulls acting on it balance in such a way that the probe can hold its position without being drawn back towards the earth or the moon. This point is called L2. It’s the farthest a Chinese spacecraft has ever been.

Chang’e 2’s primary mission was to orbit the moon at only 100 kilometers from the surface, taking high-resolution photos. After completing this, scientists decided that there was enough fuel to continue with the second part of the mission. But sending the probe onward from the moon was unprecedented. Similar missions have previously left directly from Earth, so keeping the satellite on course was a technological challenge.

Zhou Jianliang, Deputy Chief Designer, Measure & Control System of Chang’e 2, said, “The satellite faced various disruptions on its journey, which could have led it off course. We had planned four readjustments to keep it on track. But we only need(ed) to do it once since the first adjustment proved so accurate.”

China’s ambitious three-stage moon mission is steadily advancing. The next phase will be the launch of Chang’e-3 in 2013, whose mission is to land on the moon together with a moon rover. In the third phase, a probe should land on the moon and return to Earth with lunar soil and stones for scientists to study. The Chang’e program was named after the legendary Chinese goddess who flew to the moon. With the progress in technology and experience from the Chang’e missions, sending a Chinese astronaut to the moon is now clearly feasible.

On Lagrange Points:

The Italian-French mathematician Joseph-Louis Lagrange discovered five special points in the vicinity of two orbiting masses where a third, smaller mass can orbit at a fixed distance from the larger masses. More precisely, the Lagrange Points mark positions where the gravitational pull of the two large masses precisely equals the centripetal force required to rotate with them. Those with a mathematical flair can follow this link to a derivation of Lagrange’s result (168K PDF file, 8 pages).

Of the five Lagrange points, three are unstable and two are stable. The unstable Lagrange points – labeled L1, L2 and L3 – lie along the line connecting the two large masses. The stable Lagrange points – labeled L4 and L5 – form the apex of two equilateral triangles that have the large masses at their vertices.

Lagrange Points

Lagrange Points of the Earth-Sun system (not drawn to scale!): NASA

 The easiest way to see how Lagrange made his discovery is to adopt a frame of reference that rotates with the system. The forces exerted on a body at rest in this frame can be derived from an effective potential in much the same way that wind speeds can be inferred from a weather map. The forces are strongest when the contours of the effective potential are closest together and weakest when the contours are far apart. In the contour plot below we see that L4 and L5 correspond to hilltops and L1, L2 and L3 correspond to saddles (i.e. points where the potential is curving up in one direction and down in the other).

Effective Potential

A contour plot of the effective potential (not drawn to scale!): NASA
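To get a feel for the scales involved, the distance from the smaller body out to L1 or L2 can be estimated with the standard Hill-radius approximation r ≈ R(m/3M)^(1/3), where R is the separation of the two bodies, M the larger mass and m the smaller. A back-of-the-envelope sketch for the Sun–Earth system shown in the figures (textbook values, not numbers from the article):

```python
# Illustrative only: Hill-radius approximation for the distance from the
# smaller body to the L1/L2 Lagrange points of a two-body system.

def l2_offset(separation_m, m_large_kg, m_small_kg):
    """Approximate distance from the smaller body to L1/L2, in metres."""
    return separation_m * (m_small_kg / (3.0 * m_large_kg)) ** (1.0 / 3.0)

SUN_MASS   = 1.989e30   # kg
EARTH_MASS = 5.972e24   # kg
AU         = 1.496e11   # mean Sun-Earth separation, metres

r = l2_offset(AU, SUN_MASS, EARTH_MASS)
print(f"Sun-Earth L2 lies roughly {r / 1e9:.1f} million km beyond Earth")
```

The approximation gives about 1.5 million km, roughly 1% of an astronomical unit, which is why L2 makes a convenient deep-space station-keeping point just outside the Earth’s shadow.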

Arab – Iranian feuding continues at Utah University’s Middle East Center

September 11, 2011

H/T to reader Ron.

The mud-slinging and back-stabbing at the University of Utah’s Middle East Center is less than edifying and continues unabated. Charges and counter-charges include plagiarism, cronyism, sexual harassment, insubordination and even contributing to a student’s suicide. It begins to seem like a B-grade movie with bad actors and a melodramatic script. An Arab–Iranian feud – with under-currents of Shia–Sunni rivalry – being played out in Utah!! And the roots of the feuding go back some 1,500 years to the very rapid Arab conquest of Persia in 644 AD. Ever since, there has been a lingering Persian “shame” at not having resisted the takeover more strongly, which is the root cause of the Persian disdain for Arab culture and influence that continues today. Just to complicate the picture, there is much back-biting and intrigue among the Arabists themselves.

The Salt Lake Tribune now reports that officials at the University of California, Los Angeles said on Thursday that

..they can find no record of awarding a degree beyond a master’s to Ibrahim Karawan, who led the Middle East Center until 2008, when he was succeeded by Bahman Bakhtiari. 

That would seem to support allegations by Bakhtiari, recently terminated for plagiarism, that Karawan does not hold a doctorate and never was qualified to be a professor, sign off graduate students’ work and seek federal grants. In a lawsuit filed Sept. 2, Bakhtiari alleges a colleague concealed Karawan’s “academic fraud” for at least two decades and orchestrated Bakhtiari’s firing by inciting graduate students to drum up evidence of plagiarism and then publicize what they found.

Bakhtiari is now using the confusion over Karawan’s academic status in his legal fight with former colleagues whom he blames for his expulsion from his tenured faculty appointment. Bakhtiari, whose name also appears in print as “Baktiari,” claims he is guilty of little more than sloppiness with attribution, while alleging Karawan perpetrated a fraud on the university, its students and the federal government, which awarded grants to the MEC on the basis of Karawan’s doctorate.

“The University’s failure to take any action against a proclaimed professor who did not hold the mandatory credentials and, for nearly 25 years, signed his name to graduate degrees and solicitations for public monies through the United States Department of Education as one holding those credentials in violation not only of university policy but also federal law, while conversely seeking the academic death penalty for me based on minimal allegations, is discriminatory at best,” Bakhtiari wrote in an Aug. 17 e-mail to the Tribune.

Bakhtiari’s suit targets history professor Peter Sluglett, who was the center’s director from 1994 until 2000, when Karawan took the reins, as well as several “John Does.” Sluglett, who left this week for a year in Singapore, had a leadership position on the center’s executive committee and worked closely with Karawan over the years. Administrators’ abrupt dismissal of Sluglett and another scholar from the center is what precipitated Karawan’s resignation as director in 2008, setting the stage for Bakhtiari’s hiring from the University of Maine. Sluglett later was reinstated at the center and resumed a central role in its management.

The principal cast of villains consists of Bakhtiari (of Iranian origin, fired as director), Karawan (an Arab, a former director and currently acting director) and Sluglett (an Arabist, a former director, now in Singapore for a year).

Cast of villains at the Middle East Center: Bakhtiari, Karawan and Sluglett

There is a large supporting cast of students and faculty, including interim university president Lorris Betz and humanities dean Robert Newman.

But this appears to be a movie in which the entire cast are bad guys and there is no hero in sight!

Significance of differences of significance: Erroneous statistics in neuroscience

September 10, 2011

When experimental work shows a difference between two tests, that difference must be analysed statistically to establish that it is significant. But when the significance of a difference observed in one group of tests is compared with that observed in another group, the significance of the difference of the differences is often analysed wrongly, according to a new paper in Nature Neuroscience.

The authors analysed 513 behavioral, systems and cognitive neuroscience articles in five top-ranking journals (Science, Nature, Nature Neuroscience, Neuron and The Journal of Neuroscience). Of the 157 papers where this error could have been made, 78 used the correct procedure and 79 used the incorrect procedure. Suspecting that the problem could be more widespread, they “reviewed an additional 120 cellular and molecular neuroscience articles published in Nature Neuroscience in 2009 and 2010 (the first five Articles in each issue)”. They did not find a single study that used the correct statistical procedure to compare effect sizes. In contrast, they found at least 25 studies that used the erroneous procedure and explicitly or implicitly compared significance levels.

Erroneous analyses of interactions in neuroscience: a problem of significance by Sander Nieuwenhuis, Birte U Forstmann & Eric-Jan Wagenmakers, Nature Neuroscience 14, 1105–1107 (2011) doi:10.1038/nn.2886

(These) statements illustrate a statistical error that is common in the neuroscience literature. The researchers who made these statements wanted to claim that one effect (for example, the training effect on neuronal activity in mutant mice) was larger or smaller than the other effect (the training effect in control mice). To support this claim, they needed to report a statistically significant interaction (between amount of training and type of mice), but instead they reported that one effect was statistically significant, whereas the other effect was not. Although superficially compelling, the latter type of statistical reasoning is erroneous because the difference between significant and not significant need not itself be statistically significant.
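The error is easy to demonstrate with a back-of-the-envelope sketch. The numbers below are hypothetical, not taken from the paper: effect A clears the 0.05 threshold, effect B narrowly misses it, yet a direct test of their difference shows essentially no evidence that the two effects differ.

```python
import math

def p_two_sided(z):
    """Two-sided p-value for a z statistic under the standard normal."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# Hypothetical effect estimates (mean and standard error)
effect_a, se_a = 1.0, 0.5   # e.g. training effect in mutant mice
effect_b, se_b = 0.9, 0.5   # e.g. training effect in control mice

p_a = p_two_sided(effect_a / se_a)   # ~0.046 -> "significant"
p_b = p_two_sided(effect_b / se_b)   # ~0.072 -> "not significant"

# Correct comparison: test the difference of the two effects directly
se_diff = math.sqrt(se_a**2 + se_b**2)
p_diff = p_two_sided((effect_a - effect_b) / se_diff)   # ~0.89 -> no evidence of a difference
```

Declaring that effect A is larger than effect B because only effect A is significant would be exactly the mistake the paper describes: a nearly identical pair of effects lands on opposite sides of an arbitrary threshold.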

Full paper is here: PDF Nieuwenhuis et al 

Abstract: In theory, a comparison of two experimental effects requires a statistical test on their difference. In practice, this comparison is often based on an incorrect procedure involving two separate tests in which researchers conclude that effects differ when one effect is significant (P < 0.05) but the other is not (P > 0.05). We reviewed 513 behavioral, systems and cognitive neuroscience articles in five top-ranking journals (Science, Nature, Nature Neuroscience, Neuron and The Journal of Neuroscience) and found that 78 used the correct procedure and 79 used the incorrect procedure. An additional analysis suggests that incorrect analyses of interactions are even more common in cellular and molecular neuroscience. We discuss scenarios in which the erroneous procedure is particularly beguiling.

The authors conclude:

It is interesting that this statistical error occurs so often, even in journals of the highest standard. Space constraints and the need for simplicity may be the reasons why the error occurs in journals such as Nature and Science. Reporting interactions in an analysis of variance design may seem overly complex when one is writing for a general readership. Perhaps, in some cases, researchers choose to report the difference between significance levels because the corresponding interaction effect is not significant. Peer reviewers should help authors avoid such mistakes. … Indeed, people are generally tempted to attribute too much meaning to the difference between significant and not significant. For this reason, the use of confidence intervals may help prevent researchers from making this statistical error. Whatever the reasons for the error, its ubiquity and potential effect suggest that researchers and reviewers should be more aware that the difference between significant and not significant is not itself necessarily significant.
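The confidence-interval remedy the authors suggest can be sketched with hypothetical numbers (not from the paper): one effect’s 95% interval excludes zero, the other’s does not, yet the interval for their difference straddles zero comfortably, making it obvious at a glance that no difference has been demonstrated.

```python
import math

Z95 = 1.959964  # two-sided 95% critical value of the standard normal

# Hypothetical effect estimates (mean and standard error)
effect_a, se_a = 1.0, 0.5
effect_b, se_b = 0.9, 0.5

ci_a = (effect_a - Z95 * se_a, effect_a + Z95 * se_a)  # ( 0.02, 1.98): excludes zero
ci_b = (effect_b - Z95 * se_b, effect_b + Z95 * se_b)  # (-0.08, 1.88): includes zero

diff = effect_a - effect_b
se_diff = math.sqrt(se_a**2 + se_b**2)
ci_diff = (diff - Z95 * se_diff, diff + Z95 * se_diff)  # roughly (-1.29, 1.49): straddles zero
```

The wide interval on the difference shows directly how little the data say about whether the effects differ, which is precisely the information a bare "significant vs. not significant" comparison hides.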