
Conclusion that Förster manipulated data is “unavoidable”

May 8, 2014

Retraction Watch has now obtained and translated the report of the investigation by the Dutch National Board for Scientific Integrity (LOWI) into the suspicions about Jens Förster’s research. The conclusion is unavoidable: data manipulation must have taken place and could not have been the result of “sloppy science”.

Here are some of the highlights from the document, which we’ve had translated by a Dutch speaker:

“According to the LOWI, the conclusion that manipulation of the research data has taken place is unavoidable […] intervention must have taken place in the presentation of the results of the … experiment”

“The analyses of the expert … did add to [the Complainant’s report] that these linear patterns were specific for the primary analyses in the … article and did not show in subsets of the data, such as gender groups. [The expert stated that] only goal-oriented intervention in the entire dataset could have led this result.”

“There is an “absence of any form of accountability of the raw data files, as these were collected with questionnaires, and [there is an] absence of a convincing account by the Accused of the way in which the data of the experiments in the previous period were collected and processed.”

“[T]he assistants were not aware of the goal and hypotheses of the experiments [and] the LOWI excludes the possibility that the many research assistants, who were involved in the data collection in these experiments, could have created such exceptional statistical relations.”

What is particularly intriguing is the statistical method of investigation that was applied. Suspicion arose not only because the full data set showed a remarkable linearity, but also because sub-sets of the data did not. The first could suggest mere confirmation bias (cherry-picking), but the second brings deliberate data manipulation into play: non-linearities in sub-sets of the data cannot just neatly cancel each other out to give, fortuitously for the hypothesis, a clean linear trend in the complete data set. The investigation’s methods are of more lasting value than the Förster paper to be retracted.
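The linearity check described above can be run from descriptive statistics alone (condition means, SDs and sample sizes), without the raw data. The sketch below shows one way to do this for three ordered conditions: the quadratic contrast measures departure from linearity, and a left-tailed p-value close to zero means the means sit suspiciously close to a straight line. The function name and the exact contrast weighting are my own assumptions for illustration, not necessarily the investigators’ exact procedure.

```python
import numpy as np
from scipy import stats

def nonlinearity_p_left(means, sds, ns):
    """Left-tailed p-value of the quadratic (nonlinearity) contrast
    across three ordered conditions, from descriptive statistics only.
    A very small left-tailed p means the three condition means are
    suspiciously close to a straight line."""
    m = np.asarray(means, dtype=float)
    s = np.asarray(sds, dtype=float)
    n = np.asarray(ns, dtype=float)
    # quadratic contrast weights for three equally spaced levels
    w = np.array([1.0, -2.0, 1.0])
    contrast = np.dot(w, m)
    # pooled within-group variance (error mean square)
    df_err = n.sum() - 3
    ms_err = np.sum((n - 1) * s**2) / df_err
    # F statistic for a single-df contrast
    f = contrast**2 / (ms_err * np.sum(w**2 / n))
    # left tail: small F = too little curvature = too linear
    return stats.f.cdf(f, 1, df_err)
```

Applied to many independent samples, as in the report, per-sample left-tailed p-values that are consistently tiny are what flag the data as too linear to have arisen from normal sampling.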

I have an aversion to “science” based on questionnaires and “primed” subjects. They are almost as bad as the social psychology studies carried out based on Facebook or Twitter responses. They give results which can rarely be replicated. (I have an inherent suspicion of questionnaires due to my own “nasty” habit of “messing around” with my responses to questionnaires – especially when I am bored or if the questionnaire is a marketing or a political study).

Psychology Today:

Of course priming works — it couldn’t not work. But the lack of control over the information contained in social priming experiments guarantees unreliable outcomes for specific examples. …

This gets worse because social priming studies are typically between-subject designs, and (shock!) different people are even more different from each other than the same people at different times! 

Then there’s also the issue of whether the social primes used across replications are, in fact, the same. It is currently impossible to be sure, because there is no strong theory of what the information is for these primes. In more straight forward perceptual priming (see below) if I present the same stimulus twice I know I’ve presented the same stimulus twice. But the meaning of social information depends not only on what the stimulus is but also who’s giving it and their relationship to the person receiving it, not to mention the state that person is in.

… In social priming, therefore, replicating the design and the stimuli doesn’t actually mean you’ve run the same study. The people are different and there’s just no way to make sure they are all experiencing the same social stimulus, the same information

And results from such studies, even if they are the honest results of the study, have no applicability to anything wider than that study if they cannot be replicated.

Förster (continued) – Linearity of data had a 1 in 508×10^18 probability of arising by chance

May 1, 2014

The report from 2012 detailing the suspicions of manufactured data in 3 of Jens Förster’s papers has now become available. förster 2012 report – eng

The Abstract reads:

Here we analyze results from three recent papers (2009, 2011, 2012) by Dr. Jens Förster from the Psychology Department of the University of Amsterdam. These papers report 40 experiments involving a total of 2284 participants (2242 of which were undergraduates). We apply an F test based on descriptive statistics to test for linearity of means across three levels of the experimental design. Results show that in the vast majority of the 42 independent samples so analyzed, means are unusually close to a linear trend. Combined left-tailed probabilities are 0.000000008, 0.0000004, and 0.000000006, for the three papers, respectively. The combined left-tailed p-value of the entire set is p = 1.96 × 10^-21, which corresponds to finding such consistent results (or more consistent results) in one out of 508 quintillion (508,000,000,000,000,000,000). Such a level of linearity is extremely unlikely to have arisen from standard sampling. We also found overly consistent results across independent replications in two of the papers. As a control group, we analyze the linearity of results in 10 papers by other authors in the same area. These papers differ strongly from those by Dr. Förster in terms of linearity of effects and the effect sizes. We also note that none of the 2284 participants showed any missing data, dropped out during data collection, or expressed awareness of the deceit used in the experiment, which is atypical for psychological experiments. Combined, these results cast serious doubt on the nature of the results reported by Dr. Förster and warrant an investigation of the source and nature of the data he presented in these and other papers.
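The abstract combines per-sample left-tailed p-values into one overall figure. The report does not say here which combination rule was used; Fisher’s method is the standard choice, and the sketch below assumes it, with made-up illustrative p-values rather than the report’s actual per-sample numbers. It also shows the arithmetic linking the overall p-value to the headline “1 in 508×10^18” odds.

```python
from scipy.stats import combine_pvalues

# Hypothetical left-tailed p-values for a handful of independent samples
# (illustrative numbers only, not the report's actual per-sample values).
ps = [0.01, 0.002, 0.03, 0.005]

# Fisher's method: -2 * sum(ln p_i) is chi-square with 2k degrees of freedom
stat, p_combined = combine_pvalues(ps, method="fisher")

# The report's overall p = 1.96e-21 corresponds to odds of roughly
# 1 in 5.1e20, i.e. about 1 in 508 quintillion (508 x 10^18).
odds = 1.0 / 1.96e-21
```

The key property is that many individually modest left-tailed p-values compound into an astronomically small combined probability, which is how 42 samples yield odds on the order of 10^20 to 1.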

Förster’s primary thesis in the 3 papers under suspicion is that the global-versus-local models for perception and processing, which have been studied and applied for vision, are also valid for the other senses.

1. Förster, J. (2009). Relations Between Perceptual and Conceptual Scope: How Global Versus Local Processing Fits a Focus on Similarity Versus Dissimilarity. Journal of Experimental Psychology: General, 138, 88-111.

2. Förster, J. (2011). Local and Global Cross-Modal Influences Between Vision and Hearing, Tasting, Smelling, or Touching. Journal of Experimental Psychology: General, 140, 364-389.

The University of Amsterdam investigation has called for the third paper to be retracted:

3. Förster, J. & Denzler, M. (2012). Sense Creative! The Impact of Global and Local Vision, Hearing, Touching, Tasting and Smelling on Creative and Analytic Thought.  Social Psychological and Personality Science, 3, 108-117 (The full paper is here: Social Psychological and Personality Science-2012-Förster-108-17 )

Abstract: Holistic (global) versus elemental (local) perception reflects a prominent distinction in psychology; however, so far it has almost entirely been examined in the domain of vision. Current work suggests that global/local processing styles operate across sensory modalities. As for vision, it is assumed that global processing broadens mental categories in memory, enhancing creativity. Furthermore, local processing should support performance in analytic tasks. Across 12 separate studies, participants were asked to look at, listen to, touch, taste or smell details of objects, or to perceive them as wholes. Global processing increased category breadth and creative relative to analytic performance, whereas for local processing the opposite was true. Results suggest that the way we taste, smell, touch, listen to, or look at events affects complex cognition, reflecting procedural embodiment effects.

My assumption is that if the data have been manipulated, it is probably a case of “confirmation bias”. Global versus local perception is not that easy to define or study for the senses other than vision – which is probably why it has not been studied. The data may therefore have been “manufactured” to conform with the hypothesis that “the way we taste, smell, touch, listen to, or look at events does affect complex cognition, and global processing increases category breadth and creativity relative to analytic performance, whereas local processing decreases them”. The hypothesis becomes the result.

Distinctions between global and local perception in hearing are not improbable. But for taste? And for smell and touch?? My perception of the field of social psychology (which is still a long way from being a science) is that far too often improbable hypotheses are dreamed up for the effect they create (not least in the media). Data – nearly always from sampling groups of individuals – are then found/manipulated/created to “prove” the hypotheses rather than to disprove them.

My perceptions are not altered when I see results from paper 3 like these:

Our findings may have implications for our daily behaviors. Some objects or people in the real world may unconsciously affect our cognition by triggering global or local processing styles; while some may naturally guide our attention to salient details (e.g., a spot on a jacket, a strong scent of coriander in a soup), others may motivate us to focus on the gestalt (e.g., because they are balanced and no special features stand out). It might be the case then that differences in the composition of dishes, aromas, and other mundane events influence our behavior. We might for example attend more to the local details of the answers by an interview candidate if he wears a bright pink tie, or we may start to become more creative upon tasting a balanced wine. This is because our attention to details versus gestalts triggers different systems that process information in different ways.

The description of the methods used in the paper gives me no sense of any scientific rigour – especially those regarding smell – and I find the entire “experimental method” quite unconvincing.

Participants were seated in individual booths and were instructed to recognize materials by smelling them. A pretest reported in Förster (2011) led to the choice (of) tangerines, fresh soil, and chocolate, which were rated as easily recognizable and neutral to positive in valence (both when given as a mixture but also when given alone). After each trial, participants were asked to wait 1 minute before smelling the next sample. In Study 10a, in the global condition, participants were presented with three small bowls containing a mixture of all three components; whereas in the local condition, the participants were presented with three small bowls, each containing one of the three different ingredients. In the control condition, they had to smell two bowls of mixes and two bowls with pure ingredients (tangerines and soil) in random order.

A science it is certainly not.

Another case of data manipulation, another Dutch psychology scandal

April 30, 2014

UPDATE!

Jens Förster denies the claims of misconduct and has sent an email defending himself to Retraction Watch.

============================

One would have thought the credentials of social psychology as a science – after Diederik Stapel, Dirk Smeesters and Marc Hauser – could not fall much lower. But data manipulation in social psychology would seem to be a bottomless pit.

Another case of data manipulation by social psychologists has erupted at the University of Amsterdam – this time involving Jens Förster, professor of social psychology there, and his colleague Markus Denzler.

Retraction Watch: 

The University of Amsterdam has called for the retraction of a 2011 paper by two psychology researchers after a school investigation concluded that the article contained bogus data, the Dutch press are reporting.

The paper, “Sense Creative! The Impact of Global and Local Vision, Hearing, Touching, Tasting and Smelling on Creative and Analytic Thought,” was written by Jens Förster and Markus Denzler  and published in Social Psychological & Personality Science. ….

Professor Jens Förster

Jens Förster is no lightweight, apparently. His research interests are said to lie in the principles of motivation. Throughout my own career the practice of motivation in the workplace has been a special interest, and I have read some of his papers. Now I feel let down. I have a theory that one of the primary motivators of social psychologists in academia is a narcissistic urge for media attention. No shortage of ego. And I note that on the webpage detailing his academic accomplishments he also feels it necessary to highlight his TV appearances!

Television Appearances (Selection) 

Nachtcafé (SWR), Buten & Binnen (RB), Hermann & Tietjen (NDR), Euroland (SWF), Menschen der Woche (SWF), Die große Show der Naturwunder (ARD), Quarks & Co (WDR), Plasberg persönlich (WDR), Im Palais (RBB), Westart (WDR)

They love being Darlings of the media and the media oblige!

As a commenter on Retraction Watch points out, Förster also doubles as a cabaret artist! Perhaps he sees his academic endeavours also as a form of entertaining the public.

Rolf Degen: I hope that this will not escalate, as this could get ugly for the field of psychology. Jens Förster, a German, is a bigger name than Stapel ever was. He was repeatedly portrayed in the German media, not the least because of his second calling as a singer and a cabaret artist, and he has published an enormous amount of books, studies and review papers, all high quality stuff

This revelation occurs at a bad time for Förster, write the Dutch media. He is supposed to take up a Humboldt Professorship from June 1, with an award of five million Euros to do research at a German university over the next five years. He is also supposed to cooperate with Jürgen Margraf, who as President of the German Society for Psychology is the highest-ranking German psychologist.
