Conclusion that Förster manipulated data is “unavoidable”

Retraction Watch has now obtained and translated the report of the investigation by the Dutch National Board for Scientific Integrity (LOWI) into the suspicions about Jens Förster’s research. The report finds the conclusion “unavoidable” that data manipulation took place and could not have been the result of “sloppy science”.

Here are some of the highlights from the document, which we’ve had translated by a Dutch speaker:

“According to the LOWI, the conclusion that manipulation of the research data has taken place is unavoidable […] intervention must have taken place in the presentation of the results of the … experiment”

“The analyses of the expert … did add to [the Complainant’s report] that these linear patterns were specific for the primary analyses in the … article and did not show in subsets of the data, such as gender groups. [The expert stated that] only goal-oriented intervention in the entire dataset could have led [to] this result.”

“There is an absence of any form of accountability of the raw data files, as these were collected with questionnaires, and [there is an] absence of a convincing account by the Accused of the way in which the data of the experiments in the previous period were collected and processed.”

“[T]he assistants were not aware of the goal and hypotheses of the experiments [and] the LOWI excludes the possibility that the many research assistants, who were involved in the data collection in these experiments, could have created such exceptional statistical relations.”

What is particularly intriguing is the method of statistical investigation that was applied. Suspicion arose not only because the data showed a remarkable linearity, but because sub-sets of the data did not. The first, on its own, might suggest confirmation bias (cherry-picking); the second brings deliberate data manipulation into play. Non-linearity in subsets of the data cannot just neatly cancel itself out to give, fortuitously for the hypothesis, near-perfect linearity in the complete data set. The investigative methods may well prove of more lasting value than the Förster paper that is to be retracted.
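The logic is simple enough to demonstrate in a simulation. The Python sketch below is not the investigators’ actual test (the LOWI expert’s analysis was a formal comparison of condition means against linear contrasts across many studies); it is a minimal illustration, with invented group sizes and means, of the signature described above: when only the aggregate has been tuned, the full-sample means lie on a suspiciously perfect line while every subset retains the ordinary non-linearity of real data.

```python
# Minimal sketch (assumed numbers, not the LOWI expert's actual analysis) of the
# tell-tale pattern: aggregate linearity that the subsets of the data do not share.
import numpy as np

rng = np.random.default_rng(42)

def nonlinearity(means):
    """Distance of the middle condition mean from the straight line
    joining the two outer condition means (0 = perfectly linear)."""
    return abs(means[1] - (means[0] + means[2]) / 2.0)

n = 40                          # hypothetical participants per cell per subgroup
true_means = [4.0, 4.6, 6.0]    # honestly *non-linear* population means (made up)

# Simulate a three-condition experiment split into two subgroups (e.g. by gender).
groups = {g: [rng.normal(m, 1.5, n) for m in true_means] for g in ("men", "women")}

# Honest aggregate: full-sample condition means are just the pooled subgroups,
# so any non-linearity in the subgroups carries through to the full sample.
honest_full = [np.concatenate([groups["men"][i], groups["women"][i]]).mean()
               for i in range(3)]

# "Manipulated" aggregate: full-sample means forced onto a perfect line while
# the subgroup data are left untouched -- the pattern flagged by the expert.
manipulated_full = [4.0, 5.0, 6.0]

for label, full in (("honest", honest_full), ("manipulated", manipulated_full)):
    print(f"{label:12s} full-sample non-linearity: {nonlinearity(full):.3f}")
for g, cells in groups.items():
    sub_means = [c.mean() for c in cells]
    print(f"{g:12s} subgroup non-linearity:    {nonlinearity(sub_means):.3f}")
```

Run it and the honest aggregate inherits roughly the same deviation from linearity as its subgroups, while the manipulated aggregate shows essentially none even though its own subsets still do: exactly the pattern that, in the expert’s words, only goal-oriented intervention in the entire dataset could produce.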

I have an aversion to “science” based on questionnaires and “primed” subjects. They are almost as bad as the social psychology studies based on Facebook or Twitter responses: they give results which can rarely be replicated. (I have an inherent suspicion of questionnaires due to my own “nasty” habit of “messing around” with my responses, especially when I am bored or when the questionnaire is a marketing or political study.)

Psychology Today:

Of course priming works—it couldn’t not work. But the lack of control over the information contained in social priming experiments guarantees unreliable outcomes for specific examples. …

This gets worse because social priming studies are typically between-subject designs, and (shock!) different people are even more different from each other than the same people at different times! 

Then there’s also the issue of whether the social primes used across replications are, in fact, the same. It is currently impossible to be sure, because there is no strong theory of what the information is for these primes. In more straightforward perceptual priming (see below) if I present the same stimulus twice I know I’ve presented the same stimulus twice. But the meaning of social information depends not only on what the stimulus is but also who’s giving it and their relationship to the person receiving it, not to mention the state that person is in.

… In social priming, therefore, replicating the design and the stimuli doesn’t actually mean you’ve run the same study. The people are different and there’s just no way to make sure they are all experiencing the same social stimulus, the same information.

And if the results of such studies cannot be replicated, then even when they are honestly obtained they have no applicability to anything wider than the study itself.
