Posts Tagged ‘Social psychology’

Diederik Stapel markets himself (anonymously) on Retraction Watch

October 13, 2014

Diederik Stapel

In June last year it disturbed me that the New York Times was complicit in helping Diederik Stapel market his “diary” about his transgressions. There is something very unsatisfactory and distasteful about allowing wrong-doers to cash in on their wrong-doing or their notoriety. I had a similar sense of distaste when I read that the Fontys Academy for Creative Industries had offered him a job to teach social psychology – almost as a reward for being a failed, but notorious, social psychologist.

Retraction Watch carried a post about the new job. And Diederik Stapel was shameless enough to show up in the comments, first anonymously and finally under his own name when he was exposed by Retraction Watch. The comments were all gratuitously self-serving. Perhaps he was carrying out a social experiment?

But this was also noticed by Professor Janet Stemwedel, writing in Scientific American:

You’re not rehabilitated if you keep deceiving

…… But I think a non-negotiable prerequisite for rehabilitation is demonstrating that you really understand how what you did was wrong. This understanding needs to be more than simply recognizing that what you did was technically against the rules. Rather, you need to grasp the harms that your actions did, the harms that may continue as a result of those actions, the harms that may not be quickly or easily repaired. You need to acknowledge those harms, not minimize them or make excuses for your actions that caused the harms. ….

….. Now, there’s no prima facie reason Diederik Stapel might not be able to make a productive contribution to a discussion about Diederik Stapel. However, Diederik Stapel was posting his comments not as Diederik Stapel but as “Paul”.

I hope it is obvious why posting comments that are supportive of yourself while making it appear that this support is coming from someone else is deceptive. Moreover, the comments seem to suggest that Stapel is not really fully responsible for the frauds he committed.

“Paul” writes:

Help! Let’s not change anything. Science is a flawless institution. Yes. And only the past two days I read about medical scientists who tampered with data to please the firm that sponsored their work and about the start of a new investigation into the work of a psychologist who produced data “too good to be true.” Mistakes abound. On a daily basis. Sure, there is nothing to reform here. Science works just fine. I think it is time for the “Men in Black” to move in to start an outside-invesigation of science and academia. The Stapel case and other, similar cases teach us that scientists themselves are able to clean-up their act.

Later, he writes (sic throughout):

Stapel was punished, he did his community service (as he writes in his latest book), he is not on welfare, he is trying to make money with being a writer, a cab driver, a motivational speaker, but not very successfully, and .. it is totally unclear whether he gets paid for his teaching (no research) an extra-curricular hobby course (2 hours a week, not more, not less) and if he gets paid, how much.

Moreover and more importantly, we do not know WHAT he teaches exactly, we have not seen his syllabus. How can people write things like “this will only inspire kids to not get caught”, without knowing what the guy is teaching his students? Will he reach his students how to become fraudsters? Really? When you have read the two books he wrote after his demise, you cannot be conclude that this is very unlikely? Will he teach his students about all the other fakes and frauds and terrible things that happen in science? Perhaps. Is that bad? Perhaps. I think it is better to postpone our judgment about the CONTENT of all this as long as we do not know WHAT he is actually teaching. That would be a Popper-like, open-minded, rationalistic, democratic, scientific attitude. Suppose a terrible criminal comes up with a great insight, an interesting analysis, a new perspective, an amazing discovery, suppose (think Genet, think Gramsci, think Feyerabend).

Is it smart to look away from potentially interesting information, because the messenger of that information stinks?

Perhaps, God forbid, Stapel is able to teach his students valuable lessons and insights no one else is willing to teach them for a 2-hour-a-week temporary, adjunct position that probably doesn’t pay much and perhaps doesn’t pay at all. The man is a failure, yes, but he is one of the few people out there who admitted to his fraud, who helped the investigation into his fraud (no computer crashes…., no questionnaires that suddenly disappeared, no data files that were “lost while moving office”, see Sanna, Smeesters, and …. Foerster). Nowhere it is written that failures cannot be great teachers. Perhaps he points his students to other frauds, failures, and ridiculous mistakes in psychological science we do not know of yet. That would be cool (and not unlikely).

Is it possible? Is it possible that Stapel has something interesting to say, to teach, to comment on?

To my eye, these comments read as saying that Stapel has paid his debt to society and thus ought not to be subject to heightened scrutiny. They seem to assert that Stapel is reformable. …. …… behind the scenes, the Retraction Watch editors accumulated clues that “Paul” was not an uninvolved party but rather Diederik Stapel portraying himself as an uninvolved party. After they contacted him to let him know that such behavior did not comport with their comment policy, Diederik Stapel posted under his real name:

Hello, my name is Diederik Stapel. I thought that in an internet environment where many people are writing about me (a real person) using nicknames it is okay to also write about me (a real person) using a nickname. I have learned that apparently that was —in this particular case— a misjudgment. I think did not dare to use my real name (and I still wonder why). I feel that when it concerns person-to-person communication, the “in vivo” format is to be preferred over and above a blog where some people use their real name and some do not. In the future, I will use my real name. I have learned that and I understand that I –for one– am not somebody who can use a nickname where others can. Sincerely, Diederik Stapel.

He portrays this as a misunderstanding about how online communication works — other people are posting without using their real names, so I thought it was OK for me to do the same. However, to my eye it conveys that he also misunderstands how rebuilding trust works. Posting to support the person at the center of the discussion without first acknowledging that you are that person is deceptive. Arguing that that person ought to be granted more trust while dishonestly portraying yourself as someone other than that person is a really bad strategy. When you’re caught doing it, those arguments for more trust are undermined by the fact that they are themselves further instances of the deceptive behavior that broke trust in the first place.

Stapel will surely become a case study for future social psychologists. If he truly wishes rehabilitation he needs to move into a different field. Self-serving, anonymous comments in his own favour will not rebuild the trust with his peers and his surroundings that he needs. Just as his diary is “tainted goods”, anything he now does in the field of social psychology starts out tainted, with the onus of proof on him to show that it is not.


Conclusion that Förster manipulated data is “unavoidable”

May 8, 2014

Retraction Watch has now obtained and translated the report of the investigation by the Dutch National Board for Scientific Integrity (LOWI) into the suspicions about Jens Förster’s research. The conclusion is unavoidable: data manipulation must have taken place and could not have been the result of “sloppy science”.

Here are some of the highlights from the document, which we’ve had translated by a Dutch speaker:

“According to the LOWI, the conclusion that manipulation of the research data has taken place is unavoidable […] intervention must have taken place in the presentation of the results of the … experiment”

“The analyses of the expert … did add to [the Complainant’s report] that these linear patterns were specific for the primary analyses in the … article and did not show in subsets of the data, such as gender groups. [The expert stated that] only goal-oriented intervention in the entire dataset could have led this result.”

“There is an “absence of any form of accountability of the raw data files, as these were collected with questionnaires, and [there is an] absence of a convincing account by the Accused of the way in which the data of the experiments in the previous period were collected and processed.”

“[T]he assistants were not aware of the goal and hypotheses of the experiments [and] the LOWI excludes the possibility that the many research assistants, who were involved in the data collection in these experiments, could have created such exceptional statistical relations.”

What is particularly intriguing is the method of statistical investigation that was applied. Suspicions arose not only because the data showed a remarkable linearity, but also because sub-sets of the data did not. The first suggests confirmation bias (cherry picking); the second brings data manipulation into play. Non-linearities in sub-sets of the data cannot just neatly cancel each other out to give – fortuitously for the hypothesis – linearity in the complete data set. The investigation’s methods are of more value than the Förster paper that is to be retracted.
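For the statistically minded, the mechanics are simple enough to sketch. For a three-level design, the quadratic contrast (1, -2, 1) measures how far the middle group mean departs from the straight line through the two outer means, and a left-tailed F-test asks whether the observed curvature is suspiciously small. The snippet below is a minimal sketch of this kind of test, reconstructed from published descriptions of the method; it is not the complainants' or LOWI's actual code.

```python
# A minimal sketch of a left-tailed linearity test for one 3-level sample,
# reconstructed from published descriptions; NOT the investigators' code.
import numpy as np
from scipy import stats

def left_tail_linearity_p(means, sds, ns):
    """Left-tailed p-value for the nonlinearity (quadratic) contrast.

    A tiny left-tailed p says the three group means lie closer to a
    straight line than honest sampling noise should normally allow.
    """
    means, sds, ns = (np.asarray(x, dtype=float) for x in (means, sds, ns))
    c = np.array([1.0, -2.0, 1.0])              # quadratic contrast
    N = ns.sum()
    mse = ((ns - 1) * sds**2).sum() / (N - 3)   # pooled within-group variance
    f_obs = (c @ means) ** 2 / (mse * (c**2 / ns).sum())
    return stats.f.cdf(f_obs, 1, N - 3)         # P(F <= f_obs) under H0

# Hypothetical toy numbers, not Förster's data: an almost perfect line.
print(left_tail_linearity_p([2.0, 3.02, 4.0], [1.2, 1.1, 1.3], [20, 20, 20]))
```

With these toy numbers the left-tailed p is already about 0.05 for a single sample; it is the accumulation of many such improbabilities across dozens of samples that produces the astronomical combined figures reported by the investigators.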

I have an aversion to “science” based on questionnaires and “primed” subjects. They are almost as bad as the social psychology studies carried out based on Facebook or Twitter responses. They give results which can rarely be replicated. (I have an inherent suspicion of questionnaires due to my own “nasty” habit of “messing around” with my responses – especially when I am bored or when the questionnaire is a marketing or political study.)

Psychology Today:

Of course priming works — it couldn’t not work. But the lack of control over the information contained in social priming experiments guarantees unreliable outcomes for specific examples. …

This gets worse because social priming studies are typically between-subject designs, and (shock!) different people are even more different from each other than the same people at different times! 

Then there’s also the issue of whether the social primes used across replications are, in fact, the same. It is currently impossible to be sure, because there is no strong theory of what the information is for these primes. In more straight forward perceptual priming (see below) if I present the same stimulus twice I know I’ve presented the same stimulus twice. But the meaning of social information depends not only on what the stimulus is but also who’s giving it and their relationship to the person receiving it, not to mention the state that person is in.

… In social priming, therefore, replicating the design and the stimuli doesn’t actually mean you’ve run the same study. The people are different and there’s just no way to make sure they are all experiencing the same social stimulus, the same information

And results from such studies, if they cannot be replicated, have no applicability to anything wider than that study – even if they are the honest results of the study.

Förster (continued) – Linearity of data had a 1 in 508×10^18 probability of arising by chance

May 1, 2014

The 2012 report detailing the suspicions of manufactured data in three of Jens Förster’s papers has now become available: förster 2012 report – eng

The Abstract reads:

Here we analyze results from three recent papers (2009, 2011, 2012) by Dr. Jens Förster from the Psychology Department of the University of Amsterdam. These papers report 40 experiments involving a total of 2284 participants (2242 of which were undergraduates). We apply an F test based on descriptive statistics to test for linearity of means across three levels of the experimental design. Results show that in the vast majority of the 42 independent samples so analyzed, means are unusually close to a linear trend. Combined left-tailed probabilities are 0.000000008, 0.0000004, and 0.000000006, for the three papers, respectively. The combined left-tailed p-value of the entire set is p = 1.96 × 10^-21, which corresponds to finding such consistent results (or more consistent results) in one out of 508 trillion (508,000,000,000,000,000,000). Such a level of linearity is extremely unlikely to have arisen from standard sampling. We also found overly consistent results across independent replications in two of the papers. As a control group, we analyze the linearity of results in 10 papers by other authors in the same area. These papers differ strongly from those by Dr. Förster in terms of linearity of effects and the effect sizes. We also note that none of the 2284 participants showed any missing data, dropped out during data collection, or expressed awareness of the deceit used in the experiment, which is atypical for psychological experiments. Combined these results cast serious doubt on the nature of the results reported by Dr. Förster and warrant an investigation of the source and nature of the data he presented in these and other papers.
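As for the headline arithmetic: a combined p of 1.96 × 10^-21 corresponds to odds of about 1 in 5.08 × 10^20, the “508×10^18” of the title (the abstract’s “trillion” is the long-scale usage, 10^18, as the digits spelled out show). A standard textbook way of pooling independent left-tailed p-values is Fisher’s method. The sketch below is purely illustrative: it combines only the three per-paper figures quoted above, whereas the report pooled all 42 samples, so it will not reproduce 1.96 × 10^-21 exactly.

```python
# An illustrative pooling of independent left-tailed p-values with Fisher's
# method; the report's exact procedure over all 42 samples may differ.
import numpy as np
from scipy import stats

def fisher_combined_p(pvals):
    """Fisher's method: -2 * sum(ln p_i) ~ chi-square with 2k df under H0."""
    pvals = np.asarray(pvals, dtype=float)
    statistic = -2.0 * np.log(pvals).sum()
    return stats.chi2.sf(statistic, df=2 * len(pvals))

per_paper = [8e-9, 4e-7, 6e-9]   # the three per-paper probabilities quoted above
p = fisher_combined_p(per_paper)
print(f"combined p = {p:.1e}, i.e. about 1 in {1 / p:.2e}")
```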

Förster’s primary thesis in the three papers under suspicion is that the global versus local models for perception and processing of data, which have been studied and applied for vision, are also valid for the other senses.

1. Förster, J. (2009). Relations Between Perceptual and Conceptual Scope: How Global Versus Local Processing Fits a Focus on Similarity Versus Dissimilarity. Journal of Experimental Psychology: General, 138, 88-111.

2. Förster, J. (2011). Local and Global Cross-Modal Influences Between Vision and Hearing, Tasting, Smelling, or Touching. Journal of Experimental Psychology: General, 140, 364-389.

The University of Amsterdam investigation has called for the third paper to be retracted:

3. Förster, J. & Denzler, M. (2012). Sense Creative! The Impact of Global and Local Vision, Hearing, Touching, Tasting and Smelling on Creative and Analytic Thought.  Social Psychological and Personality Science, 3, 108-117 (The full paper is here: Social Psychological and Personality Science-2012-Förster-108-17 )

Abstract: Holistic (global) versus elemental (local) perception reflects a prominent distinction in psychology; however, so far it has almost entirely been examined in the domain of vision. Current work suggests that global/local processing styles operate across sensory modalities. As for vision, it is assumed that global processing broadens mental categories in memory, enhancing creativity. Furthermore, local processing should support performance in analytic tasks. Throughout 12 separate studies, participants were asked to look at, listen to, touch, taste or smell details of objects, or to perceive them as wholes. Global processing increased category breadth and creative relative to analytic performance, whereas for local processing the opposite was true. Results suggest that the way we taste, smell, touch, listen to, or look at events affects complex cognition, reflecting procedural embodiment effects.

My assumption is that if the data have been manipulated it is probably a case of “confirmation bias”. Global versus local perception is not that easy to define or study for senses other than vision – which is probably why it has not been studied for them. Therefore the data may have been “manufactured” to conform with the hypothesis that “the way we taste, smell, touch, listen to, or look at events does affect complex cognition and global processing increases category breadth and creativity relative to analytic performance, whereas local processing decreases them”. The hypothesis becomes the result.

Distinctions between global and local perceptions of hearing are not improbable. But for taste? And for smell and touch? My perception of the field of social psychology (which is still a long way from being a science) is that far too often improbable hypotheses are dreamed up for the effect they have (not least in the media). Data – nearly always from sampling groups of individuals – are then found/manipulated/created to “prove” the hypotheses rather than to disprove them.

My perceptions are not altered when I see results from paper 3 like these:

Our findings may have implications for our daily behaviors. Some objects or people in the real world may unconsciously affect our cognition by triggering global or local processing styles; while some may naturally guide our attention to salient details (e.g., a spot on a jacket, a strong scent of coriander in a soup), others may motivate us to focus on the gestalt (e.g., because they are balanced and no special features stand out). It might be the case then that differences in the composition of dishes, aromas, and other mundane events influence our behavior. We might for example attend more to the local details of the answers by an interview candidate if he wears a bright pink tie, or we may start to become more creative upon tasting a balanced wine. This is because our attention to details versus gestalts triggers different systems that process information in different ways.

The descriptions of the methods used in the paper give me no sense of any scientific rigour – especially those regarding smell – and I find the entire “experimental method” quite unconvincing.

Participants were seated in individual booths and were instructed to recognize materials by smelling them. A pretest reported in Förster (2011) led to the choice (of) tangerines, fresh soil, and chocolate, which were rated as easily recognizable and neutral to positive in valence (both when given as a mixture but also when given alone). After each trial, participants were asked to wait 1 minute before smelling the next sample. In Study 10a, in the global condition, participants were presented with three small bowls containing a mixture of all three components; whereas in the local condition, the participants were presented with three small bowls, each containing one of the three different ingredients. In the control condition, they had to smell two bowls of mixes and two bowls with pure ingredients (tangerines and soil) in random order.

A science it is certainly not.

Another case of data manipulation, another Dutch psychology scandal

April 30, 2014

UPDATE!

Jens Förster denies the claims of misconduct and has sent an email defending himself to Retraction Watch.

============================

One would have thought the credentials of social psychology as a science – after Diederik Stapel, Dirk Smeesters and Marc Hauser – could not fall much lower. But data manipulation in social psychology would seem to be a bottomless pit.

Another case of data manipulation by social psychologists has erupted at the University of Amsterdam, this time involving Jens Förster, professor of social psychology at the university, and his colleague Markus Denzler.

Retraction Watch: 

The University of Amsterdam has called for the retraction of a 2011 paper by two psychology researchers after a school investigation concluded that the article contained bogus data, the Dutch press are reporting.

The paper, “Sense Creative! The Impact of Global and Local Vision, Hearing, Touching, Tasting and Smelling on Creative and Analytic Thought,” was written by Jens Förster and Markus Denzler  and published in Social Psychological & Personality Science. ….

Professor Jens Förster

Jens Förster is apparently no lightweight. He is supposed to have research interests in the principles of motivation. Throughout my own career the practice of motivation in the workplace has been a special interest and I have read some of his papers. Now I feel let down. I have a theory that one of the primary motivators of social psychologists in academia is a narcissistic urge for media attention. No shortage of ego. And I note that as part of his webpage detailing his academic accomplishments he also feels it necessary to highlight his TV appearances!

Television Appearances (Selection) 

Nachtcafé (SWR), Buten & Binnen (RB), Hermann & Tietjen (NDR), Euroland (SWF), Menschen der Woche (SWF), Die große Show der Naturwunder (ARD), Quarks & Co (WDR), Plasberg persönlich (WDR), Im Palais (RBB), Westart (WDR)

They love being darlings of the media and the media oblige!

As a commenter on Retraction Watch points out, Förster also doubles as a cabaret artist! Perhaps he sees his academic endeavours as another form of entertaining the public.

Rolf Degen: I hope that this will not escalate, as this could get ugly for the field of psychology. Jens Förster, a German, is a bigger name than Stapel ever was. He was repeatedly portrayed in the German media, not the least because of his second calling as a singer and a cabaret artist, and he has published an enormous amount of books, studies and review papers, all high quality stuff

This revelation occurs at a bad time for Förster, write the Dutch media. He is supposed to start as a “Humboldt professor” on June 1, having been awarded five million euros to do research at a German university over the next five years. He is also supposed to cooperate with Jürgen Margraf, the President of the German Society for Psychology and as such the highest-ranking German psychologist.

Nature: “US behavioural researchers exaggerate findings”

August 27, 2013

The field of behaviour within social psychology has not covered itself with glory in recent times. The cases of Diederik Stapel, Dirk Smeesters and Marc Hauser are all too recent. But I have the perception that the entire field – globally – has been subject to exaggerations and the actions of narcissists. I had not perceived it as being a particular issue just for the US. But I would not be surprised if the “publish or perish” pressure is stronger in the US than in many other countries.

But a new study published in PNAS “found that primary studies whose outcome included behavioral parameters were generally more likely to report extreme effects, and those with a corresponding author based in the US were more likely to deviate in the direction predicted by their experimental hypotheses, particularly when their outcome did not include additional biological parameters. Nonbehavioral studies showed no such ‘US effect’ …”

Fanelli, D. & Ioannidis, J. P. A., US studies may overestimate effect sizes in softer research, Proc. Natl Acad. Sci. USA (2013), doi.org/10.1073/pnas.1302997110

Nature reports:

US behavioural researchers have been handed a dubious distinction — they are more likely than their colleagues in other parts of the world to exaggerate findings, according to a study published today.

The research highlights the importance of unconscious biases that might affect research integrity, says Brian Martinson, a social scientist at the HealthPartners Institute for Education and Research in Minneapolis, Minnesota, who was not involved with the study. “The take-home here is that the ‘bad guy/good guy’ narrative — the idea that we only need to worry about the monsters out there who are making up data — is naive,” Martinson says.



The study, published in Proceedings of the National Academy of Sciences, was conducted by John Ioannidis, a physician at Stanford University in California, and Daniele Fanelli, an evolutionary biologist at the University of Edinburgh, UK. The pair examined 82 meta-analyses in genetics and psychiatry that collectively combined results from 1,174 individual studies. The researchers compared meta-analyses of studies based on non-behavioural parameters, such as physiological measurements, to those based on behavioural parameters, such as progression of dementia or depression.



The researchers then determined how well the strength of an observed result or effect reported in a given study agreed with that of the meta-analysis in which the study was included. They found that, worldwide, behavioural studies were more likely than non-behavioural studies to report ‘extreme effects’ — findings that deviated from the overall effects reported by the meta-analyses. And US-based behavioural researchers were more likely than behavioural researchers elsewhere to report extreme effects that deviated in favour of their starting hypotheses.



“We might call this a ‘US effect,’” Fanelli says. “Researchers in the United States tend to report, on average, slightly stronger results than researchers based elsewhere.”

This ‘US effect’ did not occur in non-behavioral research, and studies with both behavioural and non-behavioural components exhibited slightly less of the effect than purely behavioural research. Fanelli and Ioannidis interpret this finding to mean that US researchers are more likely to report strong effects, and that this tendency is more likely to show up in behavioural research, because researchers in these fields have more flexibility to make different methodological choices that produce more diverse results.

The study looked at a larger volume of research than has been examined in previous studies on bias in behavioural research, says Brian Nosek, a psychologist at the University of Virginia in Charlottesville. ….. 

Abstract

Many biases affect scientific research, causing a waste of resources, posing a threat to human health, and hampering scientific progress. These problems are hypothesized to be worsened by lack of consensus on theories and methods, by selective publication processes, and by career systems too heavily oriented toward productivity, such as those adopted in the United States (US). Here, we extracted 1,174 primary outcomes appearing in 82 meta-analyses published in health-related biological and behavioral research sampled from the Web of Science categories Genetics & Heredity and Psychiatry and measured how individual results deviated from the overall summary effect size within their respective meta-analysis. We found that primary studies whose outcome included behavioral parameters were generally more likely to report extreme effects, and those with a corresponding author based in the US were more likely to deviate in the direction predicted by their experimental hypotheses, particularly when their outcome did not include additional biological parameters. Nonbehavioral studies showed no such “US effect” and were subject mainly to sampling variance and small-study effects, which were stronger for non-US countries. Although this latter finding could be interpreted as a publication bias against non-US authors, the US effect observed in behavioral research is unlikely to be generated by editorial biases. Behavioral studies have lower methodological consensus and higher noise, making US researchers potentially more likely to express an underlying propensity to report strong and significant findings.
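The core measurement is easy to picture: pool each meta-analysis into a summary effect, then ask how far each primary study sits from that summary in units of its own standard error. The sketch below is my minimal illustration of that idea with made-up numbers; the authors’ actual pipeline across 82 meta-analyses is more elaborate.

```python
# A minimal illustration of 'deviation from the summary effect': fixed-effect
# pooling, then each study's standardized distance from the pooled estimate.
# My sketch of the idea, not Fanelli & Ioannidis's actual code or data.
import numpy as np

def summary_and_deviations(effects, ses):
    effects, ses = (np.asarray(x, dtype=float) for x in (effects, ses))
    weights = 1.0 / ses**2                        # inverse-variance weights
    summary = (weights * effects).sum() / weights.sum()
    z = (effects - summary) / ses                 # signed, standardized deviation
    return summary, z

# Hypothetical toy meta-analysis of four primary studies.
effects = np.array([0.12, 0.55, 0.08, 0.90])      # e.g. standardized mean differences
ses     = np.array([0.10, 0.15, 0.08, 0.25])
summary, z = summary_and_deviations(effects, ses)
print(summary, z)  # a large positive z marks an 'extreme effect' in the predicted direction
```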

Related: Retraction Watch

Made-up science: “Liking” or “disliking” in general is a personality trait!

August 27, 2013

This comes into the category not of bad science but of what I would call “made-up science”, where something fairly trivial and obvious is made sufficiently complicated to be addressed by “scientific method”.

It is apparently called “dispositional attitude” and it comes with a 16-item scale to measure an individual’s propensity to generally like or dislike any stimuli! “This surprising and novel discovery expands attitude theory by demonstrating that an attitude is not simply a function of an object’s properties, but it is also a function of the properties of the individual who evaluates the object,” say the authors. So a “liker” likes everything and a “hater” hates everything!

“Dispositional Attitude” seems neither surprising nor so very novel. It is not so very different from what has been called the “Observer Effect” in physics or the “actor-observer asymmetry” in attribution theory. It unnecessarily complicates what is little more than a cliché. Beauty – or liking or hating – lies in the eye of the beholder, and if your personality wears rose-coloured glasses – surprise, surprise – everything appears red.

Justin Hepler & Dolores Albarracín, “Attitudes without objects: Evidence for a dispositional attitude, its measurement, and its consequences,” J Pers Soc Psychol. 2013 Jun;104(6):1060-76. doi: 10.1037/a0032282. Epub 2013 Apr 15.

The Annenberg School for Communication at the University of Pennsylvania has come out with this press release:

New research has uncovered the reason why some people seem to dislike everything while others seem to like everything. Apparently, it’s all part of our individual personality – a dimension that researchers have coined “dispositional attitude.”

People with a positive dispositional attitude have a strong tendency to like things, whereas people with a negative dispositional attitude have a strong tendency to dislike things, according to research published in the Journal of Personality and Social Psychology. The journal article, “Attitudes without objects: Evidence for a dispositional attitude, its measurement, and its consequences,” was written by Justin Hepler, University of Illinois at Urbana-Champaign, and Dolores Albarracín, Ph.D., the Martin Fishbein Chair of Communication and Professor of Psychology at Penn.

“The dispositional attitude construct represents a new perspective in which attitudes are not simply a function of the properties of the stimuli under consideration, but are also a function of the properties of the evaluator,” wrote the authors. “[For example], at first glance, it may not seem useful to know someone’s feelings about architecture when assessing their feelings about health care. After all, health care and architecture are independent stimuli with unique sets of properties, so attitudes toward these objects should also be independent.”

However, they note, there is still one critical factor that an individual’s attitudes will have in common: the individual who formed the attitudes. “Some people may simply be more prone to focusing on positive features and others on negative features,” Hepler said. …

“This surprising and novel discovery expands attitude theory by demonstrating that an attitude is not simply a function of an object’s properties, but it is also a function of the properties of the individual who evaluates the object,” concluded Hepler and Albarracín. “Overall, the present research provides clear support for the dispositional attitude as a meaningful construct that has important implications for attitude theory and research.”
Abstract:
We hypothesized that individuals may differ in the dispositional tendency to have positive versus negative attitudes, a trait termed the Dispositional Attitude. Across four studies, we developed a 16-item Dispositional Attitude Measure (DAM) and investigated its internal consistency, test-retest reliability, factor structure, convergent validity, discriminant validity, and predictive validity. DAM scores were (a) positively correlated with positive affect traits, curiosity-related traits, and individual pre-existing attitudes, (b) negatively correlated with negative affect traits, and (c) uncorrelated with theoretically unrelated traits. Dispositional attitudes also significantly predicted the valence of novel attitudes while controlling for theoretically relevant traits (such as the big-five and optimism). The dispositional attitude construct represents a new perspective in which attitudes are not simply a function of the properties of the stimuli under consideration, but are also a function of the properties of the evaluator. We discuss the intriguing implications of dispositional attitudes for many areas of research, including attitude formation, persuasion, and behavior prediction.
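Of the psychometric checks listed in the abstract, internal consistency is the most concrete: it asks whether the 16 items hang together as a single scale. A standard index is Cronbach’s alpha; the sketch below computes it for fabricated random answers (my illustration only, not the authors’ data or code), for which alpha should hover near zero.

```python
# A minimal sketch of one check named in the abstract: internal consistency
# via Cronbach's alpha for a 16-item scale. The data here are pure noise.
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = respondents, columns = scale items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var_sum = scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of the total score
    return (k / (k - 1)) * (1.0 - item_var_sum / total_var)

rng = np.random.default_rng(seed=1)
answers = rng.integers(1, 8, size=(100, 16))  # 100 respondents, 16 items, 1-7 scale
print(cronbach_alpha(answers))                # near 0 for random responding
```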

Social psychology may be rigorous but it is not a science

August 18, 2013

Scientific American carries an article by a budding psychologist who is upset that many don’t accept that it is a science – but I think she protests too much. I have no doubt that many social psychologists study their discipline with great rigour. And so they should. (And I accept the rigour of most of the researchers in this field, notwithstanding publicity-seeking, high-profile fraudsters such as Stapel and Hauser who did not.)

But it is not any lack of rigour which makes psychology “not a science”. It is the fact that we just don’t know enough about the forces driving our sensory perceptions and our subsequent behaviour (via biochemistry in the body and the brain) to be able to formulate proper falsifiable hypotheses. Behaviour is fascinating and many of the empirical studies trying to pin down the causes and effects are brilliantly conceived and carried out. But behaviour is complicated and we don’t know the drivers. Inevitably measurement is complicated and messy.

Even the alchemists made rigorous measurements. But they never knew enough to elevate alchemy to a science. And so it is with psychology, and with social psychology in particular. We are waiting for the body of evidence to grow and for the insight of a John Dalton or an Antoine Lavoisier to lift psychology from an alchemy-like art to the true level of a science.

Her article is interesting but a little too defensive. And she misses the point. Just having rigour in measurement is insufficient to make an art into a science.

Psychology’s brilliant, beautiful, scientific messiness

 Melanie Tannenbaum

Melanie Tannenbaum is a doctoral candidate in social psychology at the University of Illinois at Urbana-Champaign, where she received an M.A. in social psychology in 2011. Her research focuses on the science of persuasion & motivation regarding political, health-related, and environmental behavior.

Today, sitting down to my Twitter feed, I saw a new link to Dr. Alex Berezow’s old piece on why psychology cannot call itself a science. The piece itself is over a year old, but seeing it linked again today brought up old, angry feelings that I never had the chance to publicly address when the editorial was first published. Others, like Dave Nussbaum, have already done a good job of dismantling the critiques in this article, but the fact that people are still linking to this piece (and that other pieces, even elsewhere on the SciAm Network, are still echoing these same criticisms) means that one thing apparently cannot be said enough:

Psychology is a science.

Shut up about how it’s not, already.

But she gets it almost right in her last paragraph. Indeed psychology is still an art – but it is an art instead of, not in addition to, a science (by definition).

.. The thought, the creativity, the pure brilliance that goes into finding measurable, testable proxies for “fuzzy concepts” so we can experimentally control those indicators and find ways to step closer, every day, towards scientifically studying these abstractions that we once thought we would never be able to study — that’s beautiful. Quite frankly, it’s not just science — it’s an art. And often times, the means that scientists devise to help them step closer and closer towards approximating these abstract concepts, finding different facets to measure or different ways to conceptualize our thoughts, feelings, and behaviors? That process alone is so brilliant, so tricky, and so critical that it’s often worth receiving just as much press time as the findings themselves.

To keep psychology in the realms of art rather than science is not to demean the discipline or to attack the rigour of those working in the field. And maybe psychologists should consider why they get so upset at being called artists rather than scientists and why they wish to be perceived as something they are not.

There is much of the study of psychology which is brilliant and beautiful and messy – but it is not a science – yet.

Closure for Stapel perhaps but social psychology remains “on probation”

June 28, 2013

Another chapter in the Diederik Stapel saga comes to an end as he reaches a deal with prosecutors, but the exposure of his behaviour has revealed much that is not so uncommon in the field of social psychology. Social psychologists now need to be on their best behaviour to dispel the notion that “fraud” and confirmation bias are their stock-in-trade. Social psychology remains on probation and must avoid any hint of misconduct if it is not to lose further ground as an academic discipline (but it will be quite some time before this discipline becomes a science).

Associated Press (via The Republic): 

THE HAGUE, Netherlands — A disgraced Dutch social psychologist who admitted faking or manipulating data in dozens of publications has agreed to do 120 hours of community service work and forfeit welfare benefits equivalent to 18 months’ salary in exchange for not being prosecuted for fraud.

Prosecutors announced the deal Friday, calling it “a fitting conclusion” to a case of scientific fraud that sent shockwaves through Dutch academia.

Diederik Stapel, who formerly worked at universities in the cities of Groningen and Tilburg, acknowledged the fraud in 2011 and issued a public apology last November, saying he had “failed as a scientist.”

He once claimed to have shown that the very act of thinking about eating meat makes people behave more selfishly.

How much of “social-priming” psychology is just made-up?

May 11, 2013

There is a whole industry of social psychologists specialising in – and getting funded for – studying “social priming”. The more astonishing or counter-intuitive the result, the more attention, the more publicity and the more funding the researcher seems to get. But it seems that many (maybe most) of these study results are irreproducible. It is not implausible that priming does (should) affect subsequent behaviour, but social psychologists seeking fame through astonishing results (often, it seems, made-up results) have not helped their own cause. The list of questionable “social priming” results is getting quite long:

    • Thinking about a professor just before you take an intelligence test makes you perform better than if you think about football hooligans.
    • People walk more slowly if they are primed with age-related words.
    • A warm mug makes you friendlier.
    • The American flag makes you vote Republican.
    • Fast-food logos make you impatient.
    • Lonely people take longer and warmer baths and showers, perhaps substituting the warmth of the water for the warmth of regular human interaction.

Attention-grabbing results seem to be common among social psychologists of all kinds. A made-up result which says that “the smarter a man is, the less likely he is to cheat on his partner” generates the expected headlines and spots on TV talk shows. Diederik Stapel made up data to prove that “meat eaters are more selfish than vegetarians”. Dirk Smeesters claimed that “varying the perspective of advertisements from the third person to the first person, such as making it seem as if we were looking out through the TV through our own eyes, makes people weigh certain information more heavily in their consumer choices” and that “manipulating colors such as blue and red can make us bend one way or another”. But Smeesters too has now admitted cherry-picking his data. A raft of retractions followed and is still growing.

Nature: 

A paper published in PLoS ONE last week reports that nine different experiments failed to replicate this example of ‘intelligence priming’, first described in 1998 (ref. 2) by Ap Dijksterhuis, a social psychologist at Radboud University Nijmegen in the Netherlands, and now included in textbooks.

David Shanks, a cognitive psychologist at University College London, UK, and first author of the paper in PLoS ONE, is among sceptical scientists calling for Dijksterhuis to design a detailed experimental protocol to be carried out in different laboratories to pin down the effect. Dijksterhuis has rejected the request, saying that he “stands by the general effect” and blames the failure to replicate on “poor experiments”.

An acrimonious e-mail debate on the subject has been dividing psychologists, who are already jittery about other recent exposures of irreproducible results (see Nature 485, 298–300; 2012). “It’s about more than just replicating results from one paper,” says Shanks, who circulated a draft of his study in October; the failed replications call into question the underpinnings of ‘unconscious-thought theory’. ….

….. In their paper, Shanks and his colleagues tried to obtain an intelligence-priming effect, following protocols in Dijksterhuis’s papers or refining them to amplify any theoretical effect (for example, by using a test of analytical thinking instead of general knowledge). They also repeated intelligence-priming studies from independent labs. They failed to find any of the described priming effects in their experiments. ……

……. Other high-profile social psychologists whose papers have been disputed in the past two years include John Bargh from Yale University in New Haven, Connecticut. His claims include that people walk more slowly if they are primed with age-related words.

Bargh, Dijksterhuis and their supporters argue that social-priming results are hard to replicate because the slightest change in conditions can affect the outcome. “There are moderators that we are unaware of,” says Dijksterhuis.

But Hal Pashler, a cognitive psychologist at the University of California, San Diego — a long-time critic of social priming — notes that the effects reported in the original papers were huge. “If effects were that strong, it is unlikely they would abruptly disappear with subtle changes in procedure,” he says. ….

CHE: 

This fall, Daniel Kahneman, the Nobel Prize-winning psychologist, sent an e-mail to a small group of psychologists, including Bargh, warning of a “train wreck looming” in the field because of doubts surrounding priming research. He was blunt: “I believe that you should collectively do something about this mess. To deal effectively with the doubts you should acknowledge their existence and confront them straight on, because a posture of defiant denial is self-defeating,” he wrote.

……. Pashler issued a challenge masquerading as a gentle query: “Would you be able to suggest one or two goal priming effects that you think are especially strong and robust, even if they are not particularly well-known?” In other words, put up or shut up. Point me to the stuff you’re certain of and I’ll try to replicate it. This was intended to counter the charge that he and others were cherry-picking the weakest work and then doing a victory dance after demolishing it. He didn’t get the straightforward answer he wanted. “Some suggestions emerged but none were pointing to a concrete example,” he says.

Social psychology and social psychologists have some way to go to avoid being dismissed out of hand as charlatans.

Why is the New York Times publicising fraudster Stapel’s book?

April 30, 2013

I would not have expected the New York Times to be an apologist and a publicist for a fraudster.

The case of Diederik Stapel and all the data he faked by just making them up to fit his pre-determined results will always bring discredit to the field (not science) of social psychology. But Stapel is now busy creating a new career for himself in which his fraud itself is to be the vehicle of his future success. He has written a book about his derailment, and the adoring media have not only forgiven him but are now playing an active part in his rehabilitation: in humanising him and publicising his book. The con continues and the media are (perhaps unwitting) partners to the con.

The New York Times ran a long “analytical” article about Stapel and his fraud a few days ago. Built around a long interview with Stapel, and ostensibly a “neutral” piece, the article is entirely concerned with humanising the “criminal”. It seems to me that Stapel is very successfully continuing to manipulate the media which earlier used to idolise him for his ridiculous “studies” (eating meat made people selfish!). But if you look at the NYT piece as a piece of marketing material for a book written by a discredited author, it all makes sense. In fact the NYT article might just as well have been commissioned by the publishers of the book.

NYT:  …. Right away Stapel expressed what sounded like heartfelt remorse for what he did to his students. “I have fallen from my throne — I am on the floor,” he said, waving at the ground. “I am in therapy every week. I hate myself.” That afternoon and in later conversations, he referred to himself several times as tall, charming or handsome, less out of arrogance, it seemed, than what I took to be an anxious desire to focus on positive aspects of himself that were demonstrably not false. ….. 

Stapel did not deny that his deceit was driven by ambition. But it was more complicated than that, he told me. He insisted that he loved social psychology but had been frustrated by the messiness of experimental data, which rarely led to clear conclusions. His lifelong obsession with elegance and order, he said, led him to concoct sexy results that journals found attractive. “It was a quest for aesthetics, for beauty — instead of the truth,” he said. He described his behavior as an addiction that drove him to carry out acts of increasingly daring fraud, like a junkie seeking a bigger and better high. ….

The report’s publication would also allow him to release a book he had written in Dutch titled “Ontsporing” — “derailment” in English — for which he was paid a modest advance. The book is an examination of his life based on a personal diary he started after his fraud was made public. Stapel wanted it to bring both redemption and profit, and he seemed not to have given much thought to whether it would help or hurt him in his narrower quest to seek forgiveness from the students and colleagues he duped.

The New York Times: The mind of a con man, published April 26, 2013

“The book is an examination of his life based on a personal diary he started after his fraud was made public,” writes our intrepid NYT reporter.

Really? – and how much of this self-serving “diary” was faked or just made up?

Willingly or otherwise, the New York Times (and the reporter Yudhijit Bhattacharjee) are being duped and manipulated by a consummate fraudster.

