Archive for the ‘Academic misconduct’ Category

Marc Hauser actively manipulated data

May 30, 2014

Marc Hauser – and his supporters – have generally maintained that his misconduct was – at worst – negligence and certainly inadvertent. But the Boston Globe today reports on an internal Harvard report (obtained under FoI) which details wrongdoing rather more deliberate and sinister than Hauser and his friends have ever acknowledged.

The report is fairly damning.

Boston Globe:

But a copy of an internal Harvard report released to the Globe under the Freedom of Information Act now paints a vivid picture of what actually happened in the Hauser lab and suggests it was not mere negligence that led to the problems. 

The 85-page report details instances in which Hauser changed data so that it would show a desired effect. It shows that he more than once rebuffed or downplayed questions and concerns from people in his laboratory about how a result was obtained. The report also describes “a disturbing pattern of misrepresentation of results and shading of truth” and a “reckless disregard for basic scientific standards.”

A three-member Harvard committee reviewed 40 internal and external hard drives, interviewed 10 people, and examined original video and paper files that led them to conclude that Hauser had manipulated and falsified data.

Their report was sent to the federal Office of Research Integrity in 2010, but it was not released to the Globe by the agency until this week. ……… Much has been redacted from the report, including the identities of those who did the painstaking investigation and those who brought the problems to light.

Hauser, reached by phone Thursday, said he is focused on his work with at-risk youth on Cape Cod and declined to comment on the report.

The manipulation reported dates back at least to 2002, when he reported (presumably manufactured) data from a videotape of monkey responses that did not exist. In 2005 he altered data to make what was statistically insignificant become significant. Also in 2005, he discarded data after a subordinate found it to be inconsistent (presumably manipulated). Later, he tried to present his e-mail ordering that the data be discarded as evidence of his innocence:

“These may not be the words of someone trying to alter data, but they could certainly be the words of someone who had previously altered data: having been confronted with a red highlighted spreadsheet showing previous alterations, it made more sense to proclaim disappointment about ‘errors’ and suggest recoding everything than, for example, sitting down to compare data sets to see how the ‘errors’ occurred,”

In 2007,

 a member of the laboratory wanted to recode an experiment involving rhesus monkey behavior, due to “inconsistencies” in the coding. “I am getting a bit pissed here. There were no inconsistencies!” Hauser responded, explaining how an analysis was done. 

Later that day, the person resigned from the lab. 

Conclusion that Förster manipulated data is “unavoidable”

May 8, 2014

Retraction Watch has now obtained and translated the report of the investigation by the Dutch National Board for Scientific Integrity (LOWI) into the suspicions about Jens Förster’s research. The conclusion is unavoidable: data manipulation must have taken place and could not have been the result of “sloppy science”.

Here are some of the highlights from the document, which we’ve had translated by a Dutch speaker:

“According to the LOWI, the conclusion that manipulation of the research data has taken place is unavoidable […] intervention must have taken place in the presentation of the results of the … experiment”

“The analyses of the expert … did add to [the Complainant’s report] that these linear patterns were specific for the primary analyses in the … article and did not show in subsets of the data, such as gender groups. [The expert stated that] only goal-oriented intervention in the entire dataset could have led this result.”

“There is an “absence of any form of accountability of the raw data files, as these were collected with questionnaires, and [there is an] absence of a convincing account by the Accused of the way in which the data of the experiments in the previous period were collected and processed.”

“[T]he assistants were not aware of the goal and hypotheses of the experiments [and] the LOWI excludes the possibility that the many research assistants, who were involved in the data collection in these experiments, could have created such exceptional statistical relations.”

What is particularly intriguing is the method of statistical investigation that was applied. Suspicions arose not only because the data showed a remarkable linearity but also because sub-sets of the data did not. The first suggests confirmation bias (cherry picking), but the second brings data manipulation into play. Non-linearity in sub-sets of data cannot just neatly cancel itself out to give – fortuitously for the hypothesis – linearity in the complete data set. The investigation methods are of more lasting value than the Förster paper now to be retracted.
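The screen the investigators applied can be illustrated with a minimal sketch (my own reconstruction from the public description of the method, not the investigators’ actual code): for a three-level design, compute the contrast that measures deviation from a straight line through the condition means, using only descriptive statistics, and treat an unusually small left-tailed p-value – means that are *too* linear to be plausible under ordinary sampling noise – as the red flag.

```python
from scipy import stats

def linearity_left_tail_p(means, sds, ns):
    """Left-tailed p-value of the deviation-from-linearity contrast
    (1, -2, 1) across three condition means, computed purely from
    descriptive statistics (means, SDs, group sizes). A tiny
    left-tail p flags means that are 'too linear' for sampling noise."""
    c = (1.0, -2.0, 1.0)
    deviation = sum(w * m for w, m in zip(c, means))
    df_err = sum(n - 1 for n in ns)
    # pooled within-group error variance
    mse = sum((n - 1) * s**2 for s, n in zip(sds, ns)) / df_err
    var_contrast = mse * sum(w**2 / n for w, n in zip(c, ns))
    F = deviation**2 / var_contrast  # 1 numerator degree of freedom
    return stats.f.cdf(F, 1, df_err)

# Example with hypothetical numbers: means falling almost exactly
# on a straight line give a small left-tail p; one such sample proves
# nothing, but small p-values across dozens of samples would.
p = linearity_left_tail_p((2.0, 3.0, 4.05), (1.0, 1.0, 1.0), (20, 20, 20))
```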

I have an aversion to “science” based on questionnaires and “primed” subjects. They are almost as bad as the social psychology studies carried out based on Facebook or Twitter responses. They give results which can rarely be replicated. (I have an inherent suspicion of questionnaires due to my own “nasty” habit of “messing around” with my responses to questionnaires – especially when I am bored or if the questionnaire is a marketing or a political study).

Psychology Today:

Of course priming works— it couldn’t not work. But the lack of control over the information contained in social priming experiments guarantees unreliable outcomes for specific examples.  ..  

This gets worse because social priming studies are typically between-subject designs, and (shock!) different people are even more different from each other than the same people at different times! 

Then there’s also the issue of whether the social primes used across replications are, in fact, the same. It is currently impossible to be sure, because there is no strong theory of what the information is for these primes. In more straight forward perceptual priming (see below) if I present the same stimulus twice I know I’ve presented the same stimulus twice. But the meaning of social information depends not only on what the stimulus is but also who’s giving it and their relationship to the person receiving it, not to mention the state that person is in.

… In social priming, therefore, replicating the design and the stimuli doesn’t actually mean you’ve run the same study. The people are different and there’s just no way to make sure they are all experiencing the same social stimulus, the same information

And results from such studies, if they cannot be replicated – even when they are the honest results of the study – have no applicability to anything wider than that study.

Förster (continued) – Linearity of data had a 1 in 508×10^18 probability of not being manipulated

May 1, 2014

The report from 2012 detailing the suspicions of manufactured data in 3 of Jens Förster’s papers has now become available. förster 2012 report – eng

The Abstract reads:

Here we analyze results from three recent papers (2009, 2011, 2012) by Dr. Jens Förster from the Psychology Department of the University of Amsterdam. These papers report 40 experiments involving a total of 2284 participants (2242 of which were undergraduates). We apply an F test based on descriptive statistics to test for linearity of means across three levels of the experimental design. Results show that in the vast majority of the 42 independent samples so analyzed, means are unusually close to a linear trend. Combined left-tailed probabilities are 0.000000008, 0.0000004, and 0.000000006, for the three papers, respectively. The combined left-tailed p-value of the entire set is p = 1.96 * 10^-21, which corresponds to finding such consistent results (or more consistent results) in one out of 508 quintillion (508,000,000,000,000,000,000). Such a level of linearity is extremely unlikely to have arisen from standard sampling. We also found overly consistent results across independent replications in two of the papers. As a control group, we analyze the linearity of results in 10 papers by other authors in the same area. These papers differ strongly from those by Dr. Förster in terms of linearity of effects and the effect sizes. We also note that none of the 2284 participants showed any missing data, dropped out during data collection, or expressed awareness of the deceit used in the experiment, which is atypical for psychological experiments. Combined, these results cast serious doubt on the nature of the results reported by Dr. Förster and warrant an investigation of the source and nature of the data he presented in these and other papers.
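The “combined left-tailed probabilities” in the abstract can be pooled across papers in more than one way; Fisher’s method is one standard choice. The sketch below assumes independent tests – the report’s own pooling procedure may well differ, so the result is of the same flavour as, but not identical to, the reported overall p-value.

```python
import math
from scipy import stats

# Per-paper left-tailed linearity probabilities quoted in the abstract
ps = [0.000000008, 0.0000004, 0.000000006]

# Fisher's method: -2 * sum(ln p) follows a chi-squared distribution
# with 2k degrees of freedom under the null hypothesis
chi2_stat = -2.0 * sum(math.log(p) for p in ps)
combined_p = stats.chi2.sf(chi2_stat, df=2 * len(ps))
# combined_p comes out astronomically small, in the same spirit as
# the report's overall p = 1.96 * 10^-21 (their exact method differs)
```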

Förster’s primary thesis in the 3 papers under suspicion is that the global versus local models for perception and processing of data, which have been studied and applied for vision, are also valid and apply to the other senses.

1. Förster, J. (2009). Relations Between Perceptual and Conceptual Scope: How Global Versus Local Processing Fits a Focus on Similarity Versus Dissimilarity. Journal of Experimental Psychology: General, 138, 88-111.

2. Förster, J. (2011). Local and Global Cross-Modal Influences Between Vision and Hearing, Tasting, Smelling, or Touching. Journal of Experimental Psychology: General, 140, 364-389.

The University of Amsterdam investigation has called for the third paper to be retracted:

3. Förster, J. & Denzler, M. (2012). Sense Creative! The Impact of Global and Local Vision, Hearing, Touching, Tasting and Smelling on Creative and Analytic Thought.  Social Psychological and Personality Science, 3, 108-117 (The full paper is here: Social Psychological and Personality Science-2012-Förster-108-17 )

Abstract: Holistic (global) versus elemental (local) perception reflects a prominent distinction in psychology; however, so far it has almost entirely been examined in the domain of vision. Current work suggests that global/local processing styles operate across sensory modalities. As for vision, it is assumed that global processing broadens mental categories in memory, enhancing creativity. Furthermore, local processing should support performance in analytic tasks. Throughout 12 separate studies, participants were asked to look at, listen to, touch, taste or smell details of objects, or to perceive them as wholes. Global processing increased category breadth and creative relative to analytic performance, whereas for local processing the opposite was true. Results suggest that the way we taste, smell, touch, listen to, or look at events affects complex cognition, reflecting procedural embodiment effects.

My assumption is that if the data have been manipulated it is probably a case of “confirmation bias”.  Global versus local perception is not that easy to define or study for the senses other than vision – which is probably why they have not been studied. Therefore the data may have been “manufactured” to conform with the hypothesis that “the way we taste, smell, touch, listen to, or look at events does affect complex cognition and global processing increases category breadth and creativity relative to analytic performance, whereas local processing decreases them”. The hypothesis becomes the result.

Distinctions between global and local perceptions of hearing are not improbable. But for taste? and smell and touch?? My perception of the field of social psychology (which is still a long way from being a science) is that far too often improbable hypotheses are dreamed up for the effect they have (not least in the media). Data – nearly always by sampling groups of individuals – are then found/manipulated/created to “prove” the hypotheses rather than to disprove them.

My perceptions are not altered when I see results from paper 3 like these:

Our findings may have implications for our daily behaviors. Some objects or people in the real world may unconsciously affect our cognition by triggering global or local processing styles; while some may naturally guide our attention to salient details (e.g., a spot on a jacket, a strong scent of coriander in a soup), others may motivate us to focus on the gestalt (e.g., because they are balanced and no special features stand out). It might be the case then that differences in the composition of dishes, aromas, and other mundane events influence our behavior. We might for example attend more to the local details of the answers by an interview candidate if he wears a bright pink tie, or we may start to become more creative upon tasting a balanced wine. This is because our attention to details versus gestalts triggers different systems that process information in different ways.

The description of the methods used in the paper gives me no sense of any scientific rigour – especially the methods regarding smell – and I find the entire “experimental method” quite unconvincing.

Participants were seated in individual booths and were instructed to recognize materials by smelling them. A pretest reported in Förster (2011) led to the choice (of) tangerines, fresh soil, and chocolate, which were rated as easily recognizable and neutral to positive in valence (both when given as a mixture but also when given alone). After each trial, participants were asked to wait 1 minute before smelling the next sample. In Study 10a, in the global condition, participants were presented with three small bowls containing a mixture of all three components; whereas in the local condition, the participants were presented with three small bowls, each containing one of the three different ingredients. In the control condition, they had to smell two bowls of mixes and two bowls with pure ingredients (tangerines and soil) in random order.

A science it is certainly not.

Another case of data manipulation, another Dutch psychology scandal

April 30, 2014

UPDATE!

Jens Förster denies the claims of misconduct and has sent an email defending himself to Retraction Watch.

============================

One would have thought the credentials of social psychology as a science – after Diederik Stapel, Dirk Smeesters and Marc Hauser – could not fall much lower. But data manipulation in social psychology would seem to be a bottomless pit.

Another case of data manipulation by social psychologists has erupted at the University of Amsterdam – this time by Jens Förster, professor of social psychology at the university, and his colleague Markus Denzler.

Retraction Watch: 

The University of Amsterdam has called for the retraction of a 2011 paper by two psychology researchers after a school investigation concluded that the article contained bogus data, the Dutch press are reporting.

The paper, “Sense Creative! The Impact of Global and Local Vision, Hearing, Touching, Tasting and Smelling on Creative and Analytic Thought,” was written by Jens Förster and Markus Denzler  and published in Social Psychological & Personality Science. ….

Professor Jens Förster

Jens Förster is apparently no lightweight. He is supposed to have research interests in the principles of motivation. Throughout my own career the practice of motivation in the workplace has been a special interest of mine, and I have read some of his papers. Now I feel let down. I have a theory that one of the primary motivators of social psychologists in academia is a narcissistic urge for media attention. No shortage of ego. And I note that on the webpage detailing his academic accomplishments he also feels it necessary to highlight his TV appearances!!!!

Television Appearances (Selection) 

Nachtcafé (SWR), Buten & Binnen (RB), Hermann & Tietjen (NDR), Euroland (SWF), Menschen der Woche (SWF), Die große Show der Naturwunder (ARD), Quarks & Co (WDR), Plasberg persönlich (WDR), Im Palais (RBB), Westart (WDR)

They love being Darlings of the media and the media oblige!

As a commenter on Retraction Watch points out, Förster also doubles as a cabaret artist! Perhaps he sees his academic endeavours also as a form of entertaining the public.

Rolf Degen: I hope that this will not escalate, as this could get ugly for the field of psychology. Jens Förster, a German, is a bigger name than Stapel ever was. He was repeatedly portrayed in the German media, not the least because of his second calling as a singer and a cabaret artist, and he has published an enormous amount of books, studies and review papers, all high quality stuff

This revelation occurs at a bad time for Förster, write the Dutch media. He is supposed to start as “Humboldt professor” on June 1, having been awarded five million euros to do research at a German university over the next five years. He is also supposed to cooperate with Jürgen Margraf – the President of the “German Society for Psychology” and as such the highest-ranking German psychologist.

More stem cell fakery as a quick way to publication and fame?

March 11, 2014

Dr Haruko Obokata shot to fame with her stem cell papers (photo: BBC)

Another young researcher, Dr Haruko Obokata, apparently made sensational claims about her stem cell research, shot to fame as lead author of two papers published in Nature, and is now in the dock for dodgy images and irreproducible (perhaps faked) results.

WSJ: Her co-author, Teruhiko Wakayama of Yamanashi University in Japan, called Monday for the retraction of the findings, published in late January in a pair of papers in the journal Nature.

The papers drew international attention because they held out a safer, easier and more ethical technique for creating master stem cells. These cells, which can be turned into all other body tissues, promise one day to transform the treatment of various ailments, from heart disease to Alzheimer’s. 

But shortly after the papers appeared, Japan’s Riken Center for Developmental Biology, where the work took place, began to investigate alleged irregularities in images used in the papers. Separately, many labs said they couldn’t replicate the results.

A spokesman for Riken said Tuesday that the institution was considering a retraction and that the article’s authors were discussing what to do.

Dr. Wakayama said he has asked the lead author, Haruko Obokata, to retract the studies. “There is no more credibility when there are such crucial mistakes,” he said in an email to The Wall Street Journal.

Dr. Wakayama said he learned Sunday that an image used in Dr. Obokata’s 2011 doctoral thesis had also been used in the Nature papers. “It’s unlikely that it was a careless mistake since it’s from a different experiment from a different time,” he said.

Like several other researchers, Dr. Wakayama said he hasn’t yet been able to reproduce the results. “There is no value in it if the technique cannot be replicated,” he said. 

But another co-author of the papers, Charles Vacanti, a tissue engineer at Harvard Medical School and Brigham and Women’s Hospital in Boston, defended the work. “Some mistakes were made, but they don’t affect the conclusions,” he said in an interview Monday. “Based on the information I have, I see no reason why these papers should be retracted.”

Dr. Vacanti—whose early work some 15 years ago spurred the novel experiments—said he was surprised to hear that one of his co-authors asked for the retraction.

Dr. Vacanti said he had spoken to Dr. Obokata on Monday and that she also stood by the research. “It would be very sad to have such an important paper retracted as a result of peer pressure, when indeed the data and conclusions are honest and valid,” said Dr. Vacanti. …..

The papers created a stir because they reported a process by which mouse cells could be returned to an embryonic-like state simply by dipping them in a mild acid solution, creating what they called STAP cells, for stimulus-triggered acquisition of pluripotency. ….

There seems to be a hint of some “academic rivalry” here as well.

Retraction Watch has more:

Nature told the WSJ that it was still investigating the matter. As Nature‘s news section reported last month, lead author

…biologist Haruko Obokata, who is based at the institution…shot to fame as the lead author of two papers published in Nature last month that demonstrated a way to reprogram mature mouse cells into an embryonic state by simply applying stress, such as exposure to acid conditions or physical pressure on cell membranes.

But the studies, published online on January 29, soon came under fire. Paul Knoepfler has had a number of detailed posts on the matter, as has PubPeer.

Stem cell research seems to have more than its fair share of dodgy papers – presumably because sensational results are easier to come by and very much easier to get published.

Idiot paper of the day: “Math Anxiety and Exposure to Statistics in Messages About Genetically Modified Foods”

February 28, 2014

Roxanne L. Parrott is the Distinguished Professor of Communication Arts and Sciences at Penn State. Reading about this paper is not going to get me to read the whole paper anytime soon. The study the paper is based on is – to my mind – to the discredit of both Penn State and the state of being “Distinguished”.

I am not sure what it is but it is not Science.

Kami J. Silk, Roxanne L. Parrott. Math Anxiety and Exposure to Statistics in Messages About Genetically Modified Foods: Effects of Numeracy, Math Self-Efficacy, and Form of Presentation. Journal of Health Communication, 2014; 1. DOI: 10.1080/10810730.2013.837549

From the Abstract:

… To advance theoretical and applied understanding regarding health message processing, the authors consider the role of math anxiety, including the effects of math self-efficacy, numeracy, and form of presenting statistics on math anxiety, and the potential effects for comprehension, yielding, and behavioral intentions. The authors also examine math anxiety in a health risk context through an evaluation of the effects of exposure to a message about genetically modified foods on levels of math anxiety. Participants (N = 323) were randomly assigned to read a message that varied the presentation of statistical evidence about potential risks associated with genetically modified foods. Findings reveal that exposure increased levels of math anxiety, with increases in math anxiety limiting yielding. Moreover, math anxiety impaired comprehension but was mediated by perceivers’ math confidence and skills. Last, math anxiety facilitated behavioral intentions. Participants who received a text-based message with percentages were more likely to yield than participants who received either a bar graph with percentages or a combined form. … 

Penn State has put out a press release:

The researchers, who reported their findings in the online issue of the Journal of Health Communication, recruited 323 university students for the study. The participants were randomly assigned a message that was altered to contain one of three different ways of presenting the statistics: a text with percentages, bar graph and both text and graphs. The statistics were related to three different messages on genetically modified foods, including the results of an animal study, a Brazil nut study and a food recall announcement.

Wow! The effort involved in getting all of 323 students to participate boggles. And taking Math Anxiety as a critical behavioural factor stretches the bounds of rational thought. Could they find nothing better to do? This study is at the edges of academic misconduct.

“This is the first study that we know of to take math anxiety to a health and risk setting,” said Parrott.

It ought also to be the last such idiot study – but I have no great hopes.

Academic backstabbing, misconduct, conspiracy and much, much more at Purdue University

February 7, 2014

The Purdue University School of Nuclear Engineering is the unlikely location for a tangled, sordid tale which I cannot make much sense of.

Professor Rusi Taleyarkhan is either a somewhat naive victim of a nasty conspiracy or he is guilty of academic misconduct and has received his just deserts. His primary antagonist was Professor Lefteri Tsoukalas, once the head of the School of Nuclear Engineering, who was forced by Purdue to resign as head. Purdue removed Taleyarkhan’s endowed professorship, reduced his salary, and limited his duties with students. There is a murky connection between a journalist, Eugenie Reich, and Tsoukalas while Reich was promoting her book about scientific misconduct, and there was some form of cooperation between them and a number of others to accuse Taleyarkhan.

Once I got this far I gave up.

There is quite obviously a great deal of muck in the Purdue University School of Nuclear Engineering. The University is probably vacillating in its support between the warring academics. The role of the journalist is what adds to the possibility of a nasty conspiracy.

It seems too tawdry to waste much time on though, of course, some careers are being destroyed and one side is – or both are – indulging in defamation.

The New Energy Times has a whole series of articles on the subject. They seem to feel that Taleyarkhan has been badly wronged. This article in TwoCircles also takes that position. Tsoukalas puts his position in a letter provided to the New York Times (in 2007).

Oh what a tangled web they weave. It’s all about low-energy, table-top fusion — so it is all probably a hurricane in a thimble. I cannot help observing that cold fusion and claims of misconduct generally seem to go hand-in-hand!

Is PwC plagiarising Andreff’s Sochi Olympic result predictions?

February 7, 2014

In November last year I posted about this paper, which used economic factors to develop a model of Olympic medal results and then used the model to predict the medals to be won at the Sochi Winter Olympics starting today. Today PricewaterhouseCoopers (PwC) have, with great fanfare, made their own predictions for the Winter Olympics. In their press release they make no mention of this earlier paper:

W. Andreff, Economic development as major determinant of Olympic medal wins: predicting performances of Russian and Chinese teams at Sochi Games, Int. J. Economic Policy in Emerging Economies, 2013, 6, 314-340.

The PwC predictions are slightly different from, but remarkably similar to, the results published by Andreff. They claim to have looked at the same factors as Andreff did. They make the same prediction of home advantage for Russia as Andreff did. I don’t have access to their full report, but their press release makes absolutely no reference to the earlier paper and seeks to take credit for the analysis. If their report makes no acknowledgement of the work by Andreff then it does look very much like plagiarism by PwC. Even if their “econometric” model was developed independently, it is still plagiarism of ideas if no acknowledgement of Andreff’s analysis has been made.

Andreff Result Predictions:

Medal predictions Sochi 2014 – M Andreff

PWC Medal Predictions

PWC Sochi predictions

Press Release via ConsultantNews:

London, 31 Jan 2014 – As with the Summer Olympics, home advantage could play a key part in how the Winter Olympics medals are shared out next month – with hosts Russia looking set to capture a record haul.

But the hosts – along with close rivals Germany, Canada, Austria and Norway – will have their work cut out to catch the US team. Further down the table, after their London 2012 Olympics success, the GB team may have to settle for just a couple of medals. And unfortunately the cool Jamaican bobsled team don’t even make it into the running. 

Once again, economists at PwC have used their skills to project the likely medal tally – this time for the Olympic Winter Games at Sochi starting on 7 February. Their analysis is based on econometric modelling, testing the historic correlation between a range of socio-economic metrics and historic medal success.

The modelling results show that the size of the economy is significant in determining success, with total GDP appearing as a significant variable. However, a large economy is not sufficient on its own for a strong performance. Climate is an important factor, with snow coverage and the number of ski resorts per head having a significant and positive impact on medal shares.

Larger, developed countries with the right climate dominate the top of the projected medals table; but Austria and Norway demonstrate that a smaller economy is not a barrier to success, with a greater estimated medal haul than countries such as China and France.

William Zimmern, PwC economist, said: “While this is a light-hearted analysis, it makes an important point of how organisations can use economic techniques to help make better business decisions. The purpose of our model is not to forecast medal totals with complete accuracy, but rather to increase the predictive power of medal projections over and above using historic medal results alone.

The model allows us to make better, more confident and more informed forecasts. Businesses can use similar techniques to do the same.”

Home advantage – PwC

We used regression analysis to produce the results in Table 1, employing a Tobit model to estimate medal share for the 28 countries which have won at least one medal in the last three Winter Olympics. The variables used were total GDP, ski resorts per head, level of snow coverage, medal shares in the previous two Winter Olympics, and dummies for countries with a “tradition” of winter sports and for host countries.
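For readers unfamiliar with the technique PwC describes: a Tobit model is a regression for outcomes censored at a bound – a medal share cannot fall below zero, and most countries win nothing. A minimal sketch with synthetic data follows; the variables, coefficients, and data are illustrative assumptions of mine, not PwC’s model or data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_nll(params, X, y):
    """Negative log-likelihood of a Tobit model left-censored at 0:
    y* = X @ b + e, e ~ N(0, s^2), observed y = max(0, y*)."""
    b, log_s = params[:-1], params[-1]
    s = np.exp(log_s)  # parameterize s > 0 via its log
    xb = X @ b
    ll = np.where(y <= 0,
                  norm.logcdf(-xb / s),                   # censored obs
                  norm.logpdf((y - xb) / s) - np.log(s))  # uncensored obs
    return -ll.sum()

# Synthetic "medal share" data: an intercept and one predictor,
# with true coefficients (0.5, 1.0) and censoring at zero
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = np.maximum(X @ np.array([0.5, 1.0]) + rng.normal(size=n), 0.0)

res = minimize(tobit_nll, x0=np.zeros(X.shape[1] + 1),
               args=(X, y), method="BFGS")
b_hat = res.x[:2]  # estimates should land near the true (0.5, 1.0)
```

Running OLS on the same censored data would bias the coefficients toward zero, which is why a censored-likelihood model is the natural choice for medal shares.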

I have worked with PwC many times during my career. They are very effective, but they are not slow in trying to take credit wherever they can – even if it is undeserved. And their ethics are generally as lacking as is endemic in their industry (audit/consultancy). A little bit of plagiarism by PwC – and not for the first time – would not be a great surprise.

Give all your students an A+ and lose your job

February 4, 2014
Denis Rancourt

Denis Rancourt is a recognised researcher and he may even be an excellent physics teacher for all I know. But he either suffers from wanting to be a “martyr” for the cause of “no competition in education” or is just plain lazy and couldn’t be bothered to go to the trouble of grading his students.

I am inclined to believe it is the latter together with a desire for the limelight.

Ottawa Citizen:

Denis Rancourt has lost his bid to reclaim his job as a professor at the University of Ottawa. ….  arbitrator Claude Foisy concluded he had no reason to intervene in the university’s 2009 decision to fire Rancourt, then a tenured physics professor, for defying its orders to grade his students objectively.

The university dismissed Rancourt after he awarded A+ marks to all 23 students who completed an advanced physics course he taught in the winter of 2008. The university had earlier issued him a letter of reprimand for awarding A+ marks to virtually everyone in a first-year physics course he gave in 2007. As well, Rancourt’s dean, the late André Lalonde, gave him “clear and unequivocal direction” in March 2008 that he must not grant every student in his courses a grade of A+ based only on their attendance at class.

Rancourt, ……  testified that he’d come to believe that traditional methods of teaching and evaluating physics students were ineffective. Learning physics, he said, must be anchored in self-motivation. Key to his “student-centred” pedagogical method, he testified, was allowing students to learn free of the stress produced by the traditional method of grading and ranking.

…… Foisy noted that the university had opposed Rancourt’s evaluation method “every step of the way.” Lalonde testified that it would undermine the university’s reputation, causing it to be seen as a “Mickey Mouse university,” and was prejudicial to students graded in the traditional manner.

Foisy agreed, even accepting Lalonde’s characterization of Rancourt’s failure to objectively evaluate his students as “a form of academic fraud. This, in my opinion, is a very serious breach of his obligations as a university professor.”

Even though Rancourt was “well aware” of the university’s opposition to his method of evaluation, “he continued to defy the administration,” Foisy said. “Is the dismissal the appropriate remedy in the circumstances? The short answer is yes.”

The arbitrator rejected arguments that Rancourt’s teaching methods were protected by academic freedom, freedom of expression and his tenure status.

“Academic freedom is not so wide as to shield a professor from actions or behaviour that cannot be construed as a reasonable exercise of his responsibilities,” he said.

Foisy did uphold one of Rancourt’s grievances, ordering the university to remove any mention of a disciplinary letter dated Nov. 20, 2007, from Rancourt’s record. The letter blamed Rancourt for not having taught the content of an approved course.

But he rejected Rancourt’s demands for a written apology, monetary reparations and help in partially recovering lost time and career advancement.

Joanne St. Lewis

Rancourt is no stranger to controversy. He claims that he was fired for political reasons rather than for his teaching methods, but that does not seem to have cut much ice with the arbitrator. His own blog is here, and there does seem to be a touch of paranoia; reading it suggests he is more interested in publicity through controversy than in actually teaching. He is also being sued for libel by Joanne St. Lewis, a law professor at the University of Ottawa, after he referred to her on his blog as university president Allan Rock’s “House Negro”.

From my own experience – though not in a classroom – it was always very difficult to get my managers to grade the performance of their subordinates objectively. Poor performance ratings in a department reflected on the manager’s own competence as well. And in my experience it was the quality of the manager that determined whether he got the balance right in applying stress to improve his subordinates’ performance, or went over the top and caused burn-out. But I have no doubt that performance ratings – done right – improved performance in the workplace.

And I am quite sure that much of the poor performance of students reflects on the teacher’s ability to teach. An element of competition and stress is – in my experience – necessary for learning.

Which leaves me without much sympathy for Denis Rancourt.

Pfizer writes off $725 million – were they a victim of scientific fraud?

January 20, 2014

Whether it was error, bad judgement or fraud we will never know. Perhaps all three.

NewsObserver:

In 2008, Pfizer paid $725 million for the rights to a Russian cold medicine called Dimebon. The pharmaceutical giant thought the drug could help ameliorate the symptoms of Alzheimer’s. Several clinical trials showed the medicine had no more impact than a placebo. Pfizer has largely abandoned the project.

Earlier this week came the news that Pfizer has now written off the entire $725 million.

The last flickering hope that Medivation’s Dimebon could help Alzheimer’s disease patients has just been extinguished. The biotech announced this morning that a 12-month study of the drug failed to register significant improvements for patients, mirroring two shorter Phase III studies in which Dimebon failed to outperform a sugar pill. Pfizer took the opportunity to bow out of its partnership, writing off its $225 million upfront and $500 million milestone program for what proved to be another embarrassing pipeline failure.

In 2008, Dimebon looked like an odds-on success, with positive data from a Russian study and 10 years of sales experience to underscore its safety. But Medivation was shaken to the core when its first late-stage study ended in failure, with an additional pratfall for Huntington’s disease to cap the disaster.

In the end, Dimebon’s failure helped tarnish the reputation of Russian drug studies while raising severe doubts about Medivation.

It is not only Pfizer which has been forced to make costly write-offs. Not long ago, GlaxoSmithKline was forced to shut down Sirtris Pharmaceuticals, which it had acquired in 2008 for $720 million:

Xconomy:

Glaxo paid $720 million to acquire Sirtris in April 2008, to get ahold of technology that generated lots of breathless media coverage as a modern-day fountain of youth. The company sought to make drugs that act on sirtuins, a class of proteins that scientists believe play a role in aging, programmed cell death, and other key cell processes.

Even though the company is closing the Sirtris site, Stubbee says Glaxo remains confident in the drug candidates it got from that acquisition. ….

…. Sirtuins are known to be active when the body is in a calorie-restricted state, which scientists have shown contributes to longer lifespan. The idea at Sirtris was to make small-molecule chemical compounds that activated sirtuins as a way of fighting diseases that develop as people age—including Type 2 diabetes and cancer. ….. The research into the biological role of sirtuins, from Sirtris co-founder David Sinclair, has attracted its share of skeptics. Just last week, Sinclair, a researcher at Harvard Medical School, sought to buttress his early work with a new article in Science that says resveratrol and related compounds can activate sirtuins. One critic, quoted by the Boston Globe last week, said the role of one sirtuin called SIRT1 in aging, “is still as clear as mud.”

GSK is putting a brave face on all of this.

But for many medical and biotech researchers, the path to fame and fortune is to found a start-up around some new compound or technique and “leverage the promise”. For pharma and biotechnology start-ups the objective is to be bought up by one of the majors for as exorbitant an amount as can be managed. And it seems that one way to inflate the value is to make preliminary data and trials show very optimistic results. Negative results never see the light of day, while the positive aspects are exaggerated or, at worst, manipulated.

The ten-fold growth of retractions in medicine-related fields since 1975 is mainly due to misconduct, according to this report in Nature, where “fraud or suspected fraud was responsible for 43% of the retractions”.

For Big Pharma this is simply a consequence of having “outsourced” part of their research. They can afford a few failures, it could be thought. Of course the final cost is eventually borne by the consumers, but some start-ups make a killing along the way.