Archive for the ‘psychology’ Category

Conclusion that Förster manipulated data is “unavoidable”

May 8, 2014

Retraction Watch has now obtained and translated the report of the investigation by the Dutch National Board for Scientific Integrity (LOWI) into the suspicions about Jens Förster’s research. The conclusion is unavoidable that data manipulation must have taken place and that it could not have been the result of “sloppy science”.

Here are some of the highlights from the document, which we’ve had translated by a Dutch speaker:

“According to the LOWI, the conclusion that manipulation of the research data has taken place is unavoidable […] intervention must have taken place in the presentation of the results of the … experiment”

“The analyses of the expert … did add to [the Complainant’s report] that these linear patterns were specific for the primary analyses in the … article and did not show in subsets of the data, such as gender groups. [The expert stated that] only goal-oriented intervention in the entire dataset could have led this result.”

“There is an “absence of any form of accountability of the raw data files, as these were collected with questionnaires, and [there is an] absence of a convincing account by the Accused of the way in which the data of the experiments in the previous period were collected and processed.”

“[T]he assistants were not aware of the goal and hypotheses of the experiments [and] the LOWI excludes the possibility that the many research assistants, who were involved in the data collection in these experiments, could have created such exceptional statistical relations.”

What is particularly intriguing is the method of statistical investigation that was applied. Suspicions were raised not only because the data showed a remarkable linearity, but also because sub-sets of the data did not. The first suggests confirmation bias (cherry-picking), but the second brings data manipulation into play. Non-linearity in sub-sets of the data cannot just neatly cancel itself out to give – fortuitously for the hypothesis – linearity in the complete data set. The investigation methods are of more lasting value than the Förster paper to be retracted.
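The core of that linearity check is simple enough to sketch. For three ordered conditions, the quadratic contrast m1 - 2*m2 + m3 is exactly zero when the three means lie on a straight line, and the investigators' F test is built on this idea. Below is a rough illustration of the statistic from summary data only; the function name and formula details are my own simplification, not the report's exact procedure:

```python
import math

def nonlinearity_contrast(means, sds, ns):
    """For three ordered conditions (low / control / high), the
    contrast m1 - 2*m2 + m3 is zero when the means are exactly
    linear. A value near zero relative to its standard error is
    what makes a sample look 'too linear'."""
    m1, m2, m3 = means
    s1, s2, s3 = sds
    n1, n2, n3 = ns
    c = m1 - 2.0 * m2 + m3
    # Standard error of the contrast from the per-group descriptives
    se = math.sqrt(s1**2 / n1 + 4.0 * s2**2 / n2 + s3**2 / n3)
    return c / se  # t-like statistic; squared, it is F-distributed

# Perfectly linear means give a statistic of exactly zero.
print(nonlinearity_contrast((1.0, 2.0, 3.0), (1.0, 1.0, 1.0), (20, 20, 20)))
```

A single value near zero proves nothing; the suspicion arises when dozens of independent samples all sit implausibly close to zero at once.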

I have an aversion to “science” based on questionnaires and “primed” subjects. They are almost as bad as the social psychology studies carried out based on Facebook or Twitter responses. They give results which can rarely be replicated. (I have an inherent suspicion of questionnaires due to my own “nasty” habit of “messing around” with my responses to questionnaires – especially when I am bored or if the questionnaire is a marketing or a political study).

Psychology Today:

Of course priming works— it couldn’t not work. But the lack of control over the information contained in social priming experiments guarantees unreliable outcomes for specific examples. …

This gets worse because social priming studies are typically between-subject designs, and (shock!) different people are even more different from each other than the same people at different times! 

Then there’s also the issue of whether the social primes used across replications are, in fact, the same. It is currently impossible to be sure, because there is no strong theory of what the information is for these primes. In more straight forward perceptual priming (see below) if I present the same stimulus twice I know I’ve presented the same stimulus twice. But the meaning of social information depends not only on what the stimulus is but also who’s giving it and their relationship to the person receiving it, not to mention the state that person is in.

… In social priming, therefore, replicating the design and the stimuli doesn’t actually mean you’ve run the same study. The people are different and there’s just no way to make sure they are all experiencing the same social stimulus, the same information

And results from such studies, if they cannot be replicated, have no applicability to anything wider than that study, even when they are the honest results of the study.

Förster (continued) – Linearity of data had a 1 in 508×10^18 probability of not being manipulated

May 1, 2014

The report from 2012 detailing the suspicions of manufactured data in 3 of Jens Förster’s papers has now become available. förster 2012 report – eng

The Abstract reads:

Here we analyze results from three recent papers (2009, 2011, 2012) by Dr. Jens Förster from the Psychology Department of the University of Amsterdam. These papers report 40 experiments involving a total of 2284 participants (2242 of which were undergraduates). We apply an F test based on descriptive statistics to test for linearity of means across three levels of the experimental design. Results show that in the vast majority of the 42 independent samples so analyzed, means are unusually close to a linear trend. Combined left-tailed probabilities are 0.000000008, 0.0000004, and 0.000000006, for the three papers, respectively. The combined left-tailed p-value of the entire set is p= 1.96 * 10-21, which corresponds to finding such consistent results (or more consistent results) in one out of 508 trillion (508,000,000,000,000,000,000). Such a level of linearity is extremely unlikely to have arisen from standard sampling. We also found overly consistent results across independent replications in two of the papers. As a control group, we analyze the linearity of results in 10 papers by other authors in the same area. These papers differ strongly from those by Dr. Förster in terms of linearity of effects and the effect sizes. We also note that none of the 2284 participants showed any missing data, dropped out during data collection, or expressed awareness of the deceit used in the experiment, which is atypical for psychological experiments. Combined these results cast serious doubt on the nature of the results reported by Dr. Förster and warrant an investigation of the source and nature of the data he presented in these and other papers.
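The “combined left-tailed p-value” in the abstract is the kind of number produced by combining many individually unremarkable p-values into one joint probability. A minimal sketch using Fisher’s method follows; I am assuming Fisher’s method for illustration, and the report’s authors may have combined the values differently:

```python
import math

def fisher_combined_p(pvalues):
    """Combine independent p-values with Fisher's method.
    X = -2 * sum(ln p_i) follows a chi-square distribution with
    2k degrees of freedom under the joint null hypothesis."""
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    # Chi-square survival function for even df = 2k (closed form):
    # P(X > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    half = x / 2.0
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= half / i
        total += term
    return math.exp(-half) * total

# Forty-two samples that are each only mildly "too linear" (p ~ 0.05)
# already combine to an astronomically small joint probability.
print(fisher_combined_p([0.05] * 42))
```

This is why 42 samples that are individually only somewhat unusual can yield a combined probability on the order of 10^-21: mild improbabilities multiply.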

Förster’s primary thesis in the 3 papers under suspicion is that the global versus local models for perception and processing of data, which have been studied and applied for vision, are also valid for and apply to the other senses.

1. Förster, J. (2009). Relations Between Perceptual and Conceptual Scope: How Global Versus Local Processing Fits a Focus on Similarity Versus Dissimilarity. Journal of Experimental Psychology: General, 138, 88-111.

2. Förster, J. (2011). Local and Global Cross-Modal Influences Between Vision and Hearing, Tasting, Smelling, or Touching. Journal of Experimental Psychology: General, 140, 364-389.

The University of Amsterdam investigation has called for the third paper to be retracted:

3. Förster, J. & Denzler, M. (2012). Sense Creative! The Impact of Global and Local Vision, Hearing, Touching, Tasting and Smelling on Creative and Analytic Thought.  Social Psychological and Personality Science, 3, 108-117 (The full paper is here: Social Psychological and Personality Science-2012-Förster-108-17 )

Abstract: Holistic (global) versus elemental (local) perception reflects a prominent distinction in psychology; however, so far it has almost entirely been examined in the domain of vision. Current work suggests that global/local processing styles operate across sensory modalities. As for vision, it is assumed that global processing broadens mental categories in memory, enhancing creativity. Furthermore, local processing should support performance in analytic tasks. Throughout 12 separate studies, participants were asked to look at, listen to, touch, taste or smell details of objects, or to perceive them as wholes. Global processing increased category breadth and creative relative to analytic performance, whereas for local processing the opposite was true. Results suggest that the way we taste, smell, touch, listen to, or look at events affects complex cognition, reflecting procedural embodiment effects.

My assumption is that if the data have been manipulated, it is probably a case of “confirmation bias”. Global versus local perception is not that easy to define or study for senses other than vision – which is probably why they have not been studied. The data may therefore have been “manufactured” to conform with the hypothesis that “the way we taste, smell, touch, listen to, or look at events does affect complex cognition and global processing increases category breadth and creativity relative to analytic performance, whereas local processing decreases them”. The hypothesis becomes the result.

Distinctions between global and local perceptions of hearing are not improbable. But for taste? and smell and touch?? My perception of the field of social psychology (which is still a long way from being a science) is that far too often improbable hypotheses are dreamed up for the effect they have (not least in the media). Data – nearly always by sampling groups of individuals – are then found/manipulated/created to “prove” the hypotheses rather than to disprove them.

My perceptions are not altered when I see results from paper 3 like these:

Our findings may have implications for our daily behaviors. Some objects or people in the real world may unconsciously affect our cognition by triggering global or local processing styles; while some may naturally guide our attention to salient details (e.g., a spot on a jacket, a strong scent of coriander in a soup), others may motivate us to focus on the gestalt (e.g., because they are balanced and no special features stand out). It might be the case then that differences in the composition of dishes, aromas, and other mundane events influence our behavior. We might for example attend more to the local details of the answers by an interview candidate if he wears a bright pink tie, or we may start to become more creative upon tasting a balanced wine. This is because our attention to details versus gestalts triggers different systems that process information in different ways.

The description of the methods used in the paper gives me no sense of any scientific rigour – especially those regarding smell – and I find the entire “experimental method” quite unconvincing.

Participants were seated in individual booths and were instructed to recognize materials by smelling them. A pretest reported in Förster (2011) led to the choice (of) tangerines, fresh soil, and chocolate, which were rated as easily recognizable and neutral to positive in valence (both when given as a mixture but also when given alone). After each trial, participants were asked to wait 1 minute before smelling the next sample. In Study 10a, in the global condition, participants were presented with three small bowls containing a mixture of all three components; whereas in the local condition, the participants were presented with three small bowls, each containing one of the three different ingredients. In the control condition, they had to smell two bowls of mixes and two bowls with pure ingredients (tangerines and soil) in random order.

A science it is certainly not.

Another case of data manipulation, another Dutch psychology scandal

April 30, 2014


Jens Förster denies the claims of misconduct and has sent an email defending himself to Retraction Watch.


One would have thought the credentials of social psychology as a science – after Diederik Stapel, Dirk Smeesters and Marc Hauser – could not fall much lower. But data manipulation in social psychology would seem to be a bottomless pit.

Another case of data manipulation by social psychologists has erupted at the University of Amsterdam. This time it is by Jens Förster, professor of social psychology at the University of Amsterdam, and his colleague Markus Denzler.

Retraction Watch: 

The University of Amsterdam has called for the retraction of a 2011 paper by two psychology researchers after a school investigation concluded that the article contained bogus data, the Dutch press are reporting.

The paper, “Sense Creative! The Impact of Global and Local Vision, Hearing, Touching, Tasting and Smelling on Creative and Analytic Thought,” was written by Jens Förster and Markus Denzler  and published in Social Psychological & Personality Science. ….

Professor Jens Förster

Jens Förster is no lightweight, apparently. He is supposed to have research interests in the principles of motivation. Throughout my own career the practice of motivation in the workplace has been a special interest and I have read some of his papers. Now I feel let down. I have a theory that one of the primary motivators of social psychologists in academia is a narcissistic urge for media attention. No shortage of ego. And I note that as part of his webpage detailing his academic accomplishments he also feels it necessary to highlight his TV appearances!

Television Appearances (Selection) 

Nachtcafé (SWR), Buten & Binnen (RB), Hermann & Tietjen (NDR), Euroland (SWF), Menschen der Woche (SWF), Die große Show der Naturwunder (ARD), Quarks & Co (WDR), Plasberg persönlich (WDR), Im Palais (RBB), Westart (WDR)

They love being Darlings of the media and the media oblige!

As a commenter on Retraction Watch points out, Förster also doubles as a cabaret artist! Perhaps he sees his academic endeavours also as a form of entertaining the public.

Rolf Degen: I hope that this will not escalate, as this could get ugly for the field of psychology. Jens Förster, a German, is a bigger name than Stapel ever was. He was repeatedly portrayed in the German media, not the least because of his second calling as a singer and a cabaret artist, and he has published an enormous amount of books, studies and review papers, all high quality stuff

This revelation occurs at a bad time for Förster, write the Dutch media. He is supposed to take up a Humboldt professorship starting from June 1, and he was awarded five million Euros to do research at a German university over the next five years. He is also supposed to cooperate with Jürgen Margraf, who is the President of the German Society for Psychology and as such the highest ranking German psychologist.

Idiot paper of the day: “Math Anxiety and Exposure to Statistics in Messages About Genetically Modified Foods”

February 28, 2014

Roxanne L. Parrott is the Distinguished Professor of Communication Arts and Sciences at Penn State. Reading about this paper is not going to get me to read the whole paper anytime soon. The study the paper is based on is – to my mind – to the discredit of both Penn State and the state of being “Distinguished”.

I am not sure what it is but it is not Science.

Kami J. Silk, Roxanne L. Parrott. Math Anxiety and Exposure to Statistics in Messages About Genetically Modified Foods: Effects of Numeracy, Math Self-Efficacy, and Form of Presentation. Journal of Health Communication, 2014; DOI: 10.1080/10810730.2013.837549

From the Abstract:

… To advance theoretical and applied understanding regarding health message processing, the authors consider the role of math anxiety, including the effects of math self-efficacy, numeracy, and form of presenting statistics on math anxiety, and the potential effects for comprehension, yielding, and behavioral intentions. The authors also examine math anxiety in a health risk context through an evaluation of the effects of exposure to a message about genetically modified foods on levels of math anxiety. Participants (N = 323) were randomly assigned to read a message that varied the presentation of statistical evidence about potential risks associated with genetically modified foods. Findings reveal that exposure increased levels of math anxiety, with increases in math anxiety limiting yielding. Moreover, math anxiety impaired comprehension but was mediated by perceivers’ math confidence and skills. Last, math anxiety facilitated behavioral intentions. Participants who received a text-based message with percentages were more likely to yield than participants who received either a bar graph with percentages or a combined form. … 

Penn State has put out a press release:

The researchers, who reported their findings in the online issue of the Journal of Health Communication, recruited 323 university students for the study. The participants were randomly assigned a message that was altered to contain one of three different ways of presenting the statistics: a text with percentages, bar graph and both text and graphs. The statistics were related to three different messages on genetically modified foods, including the results of an animal study, a Brazil nut study and a food recall announcement.

Wow! The effort involved in getting all of 323 students to participate boggles the mind. And taking math anxiety as a critical behavioural factor stretches the bounds of rational thought. Could they find nothing better to do? This study is at the edges of academic misconduct.

“This is the first study that we know of to take math anxiety to a health and risk setting,” said Parrott.

It ought also to be the last such idiot study – but I have no great hopes.

Moral in the morning, lying in the evening, cheating by suppertime…

October 30, 2013

Of course it is another paper demonstrating great insight into human behaviour with far reaching conclusions. Needless to say it is a hypothesis dreamed up by social psychologists.

Is it good science? Unlikely. Is it trivial? Undoubtedly. Does it provide real empirical data? Yes. Is it relevant? Hardly.

Is it even science?  

Maryam Kouchaki and Isaac H. Smith, The Morning Morality Effect: The Influence of Time of Day on Unethical Behavior. Psychological Science, October 28, 2013, doi: 10.1177/0956797613498099

Kouchaki is a post-doctoral research fellow at Harvard University and completed her doctoral studies at the University of Utah, where Smith is a current doctoral student. Kouchaki has been involved with a previous “priming” study about the effect of thinking about money on morality. And as is now well known, most “priming” studies are highly suspect.

It is not for nothing that the APS journal Psychological Science is the highest ranked empirical journal in psychology.

The authors conducted experiments on college-age participants and on a sample of on-line participants:

  1. … college-age participants were shown various patterns of dots on a computer. For each pattern, they were asked to identify whether more dots were displayed on the left or right side of the screen. Importantly, participants were not given money for getting correct answers, but were instead given money based on which side of the screen they determined had more dots; they were paid 10 times the amount for selecting the right over the left. Participants therefore had a financial incentive to select the right, even if there were unmistakably more dots on the left, which would be a case of clear cheating.
  2. … also tested participants’ moral awareness in both the morning and afternoon by presenting them with word fragments such as “_ _RAL” and “E_ _ _ C_ _”

Their results showed that in line with their hypothesis, participants tested between 8:00 am and 12:00 pm were less likely to cheat than those tested between 12:00 pm and 6:00pm — a phenomenon the researchers call the “morning morality effect.” In the second experiment morning participants were more likely to form the words “moral” and “ethical,” whereas the afternoon participants tended to form the words “coral” and “effects,” lending further support to the morning morality effect.

Clearly the arduous field-work consisted of wandering around their dangerous college campus(es) soliciting subjects and then spending many long nights on-line to get their “on-line” sample.

…. both undergraduate students and a sample of U.S. adults engaged in less unethical behavior (e.g., less lying and cheating) on tasks performed in the morning than on the same tasks performed in the afternoon. This morning morality effect was mediated by decreases in moral awareness and self-control in the afternoon. Furthermore, the effect of time of day on unethical behavior was found to be stronger for people with a lower propensity to morally disengage. These findings highlight a simple yet pervasive factor (i.e., the time of day) that has important implications for moral behavior.

Presumably a good afternoon nap could restore our moral behaviour in the evenings?

It seems to me that the hypothesis has been designed/invented primarily to grab headlines and to ensure publication.

Nature: “US behavioural researchers exaggerate findings”

August 27, 2013

The field of behaviour within social psychology has not covered itself with glory in recent times. The cases of Diederik Stapel, Dirk Smeesters and Marc Hauser are all too recent. But I have the perception that the entire field – globally – has been subject to exaggerations and the actions of narcissists. I had not perceived it as being a particular issue just for the US. But I would not be surprised if the “publish or perish” pressure is stronger in the US than in many other countries.

But a new study published in PNAS has “found that primary studies whose outcome included behavioral parameters were generally more likely to report extreme effects, and those with a corresponding author based in the US were more likely to deviate in the direction predicted by their experimental hypotheses, particularly when their outcome did not include additional biological parameters. Nonbehavioral studies showed no such ‘US effect’ …”

Fanelli, D. & Ioannidis, J. P. A., US studies may overestimate effect sizes in softer research, Proc. Natl Acad. Sci. USA (2013)

Nature reports

US behavioural researchers have been handed a dubious distinction — they are more likely than their colleagues in other parts of the world to exaggerate findings, according to a study published today.

The research highlights the importance of unconscious biases that might affect research integrity, says Brian Martinson, a social scientist at the HealthPartners Institute for Education and Research in Minneapolis, Minnesota, who was not involved with the study. “The take-home here is that the ‘bad guy/good guy’ narrative — the idea that we only need to worry about the monsters out there who are making up data — is naive,” Martinson says.

The study, published in Proceedings of the National Academy of Sciences, was conducted by John Ioannidis, a physician at Stanford University in California, and Daniele Fanelli, an evolutionary biologist at the University of Edinburgh, UK. The pair examined 82 meta-analyses in genetics and psychiatry that collectively combined results from 1,174 individual studies. The researchers compared meta-analyses of studies based on non-behavioural parameters, such as physiological measurements, to those based on behavioural parameters, such as progression of dementia or depression.

The researchers then determined how well the strength of an observed result or effect reported in a given study agreed with that of the meta-analysis in which the study was included. They found that, worldwide, behavioural studies were more likely than non-behavioural studies to report ‘extreme effects’ — findings that deviated from the overall effects reported by the meta-analyses. And US-based behavioural researchers were more likely than behavioural researchers elsewhere to report extreme effects that deviated in favour of their starting hypotheses.

“We might call this a ‘US effect,’” Fanelli says. “Researchers in the United States tend to report, on average, slightly stronger results than researchers based elsewhere.”

This ‘US effect’ did not occur in non-behavioral research, and studies with both behavioural and non-behavioural components exhibited slightly less of the effect than purely behavioural research. Fanelli and Ioannidis interpret this finding to mean that US researchers are more likely to report strong effects, and that this tendency is more likely to show up in behavioural research, because researchers in these fields have more flexibility to make different methodological choices that produce more diverse results.

The study looked at a larger volume of research than has been examined in previous studies on bias in behavioural research, says Brian Nosek, a psychologist at the University of Virginia in Charlottesville. ….. 


The abstract of the PNAS paper reads:

Many biases affect scientific research, causing a waste of resources, posing a threat to human health, and hampering scientific progress. These problems are hypothesized to be worsened by lack of consensus on theories and methods, by selective publication processes, and by career systems too heavily oriented toward productivity, such as those adopted in the United States (US). Here, we extracted 1,174 primary outcomes appearing in 82 meta-analyses published in health-related biological and behavioral research sampled from the Web of Science categories Genetics & Heredity and Psychiatry and measured how individual results deviated from the overall summary effect size within their respective meta-analysis. We found that primary studies whose outcome included behavioral parameters were generally more likely to report extreme effects, and those with a corresponding author based in the US were more likely to deviate in the direction predicted by their experimental hypotheses, particularly when their outcome did not include additional biological parameters. Nonbehavioral studies showed no such “US effect” and were subject mainly to sampling variance and small-study effects, which were stronger for non-US countries. Although this latter finding could be interpreted as a publication bias against non-US authors, the US effect observed in behavioral research is unlikely to be generated by editorial biases. Behavioral studies have lower methodological consensus and higher noise, making US researchers potentially more likely to express an underlying propensity to report strong and significant findings.
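The deviation measure described in the abstract can be sketched simply: compute a summary effect per meta-analysis and express each primary result as a standardized distance from it. The sketch below uses a fixed-effect, inverse-variance weighted summary; it is an illustrative reconstruction, not the authors’ actual code or exact metric:

```python
def summary_effect(effects, variances):
    """Fixed-effect meta-analytic summary: inverse-variance weighted mean."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

def deviations(effects, variances):
    """Each primary effect's distance from the summary, in units of its
    own standard error; the sign keeps the direction of deviation."""
    s = summary_effect(effects, variances)
    return [(e - s) / v ** 0.5 for e, v in zip(effects, variances)]

# Two equally precise studies straddling the summary deviate symmetrically.
print(deviations([1.0, 3.0], [1.0, 1.0]))  # prints [-1.0, 1.0]
```

Keeping the sign is what lets one ask the study’s key question: whether deviations lean systematically in the direction of the authors’ hypotheses.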

Related: Retraction Watch

Made-up science: “Liking” or “disliking” in general is a personality trait!

August 27, 2013

This comes into the category not of bad science but of what I would call “made-up science”, where something fairly trivial and obvious is made sufficiently complicated to be addressed by “scientific method”.

It is apparently called “dispositional attitude” and it has a 16-item scale to measure an individual’s propensity to generally like or dislike any stimuli! “This surprising and novel discovery expands attitude theory by demonstrating that an attitude is not simply a function of an object’s properties, but it is also a function of the properties of the individual who evaluates the object.” So a “liker” likes everything and a “hater” hates everything!

“Dispositional attitude” seems neither surprising nor so very novel. It is not so very different from what has been called the “observer effect” in physics or the “actor-observer asymmetry” in attribution theory. It unnecessarily complicates what is little more than a cliché. Beauty – or liking or hating – lies in the eye of the beholder, and if your personality wears rose-coloured glasses – surprise, surprise – everything appears red.

Justin Hepler & Dolores Albarracín, “Attitudes without objects: Evidence for a dispositional attitude, its measurement, and its consequences,” J Pers Soc Psychol. 2013 Jun;104(6):1060-76. doi: 10.1037/a0032282. Epub 2013 Apr 15.

The Annenberg School of Communication at the University of Pennsylvania has come out with this Press Release:

New research has uncovered the reason why some people seem to dislike everything while others seem to like everything. Apparently, it’s all part of our individual personality – a dimension that researchers have coined “dispositional attitude.”

People with a positive dispositional attitude have a strong tendency to like things, whereas people with a negative dispositional attitude have a strong tendency to dislike things, according to research published in the Journal of Personality and Social Psychology. The journal article, “Attitudes without objects: Evidence for a dispositional attitude, its measurement, and its consequences,” was written by Justin Hepler, University of Illinois at Urbana-Champaign, and Dolores Albarracín, Ph.D., the Martin Fishbein Chair of Communication and Professor of Psychology at Penn.

“The dispositional attitude construct represents a new perspective in which attitudes are not simply a function of the properties of the stimuli under consideration, but are also a function of the properties of the evaluator,” wrote the authors. “[For example], at first glance, it may not seem useful to know someone’s feelings about architecture when assessing their feelings about health care. After all, health care and architecture are independent stimuli with unique sets of properties, so attitudes toward these objects should also be independent.”

However, they note, there is still one critical factor that an individual’s attitudes will have in common: the individual who formed the attitudes. “Some people may simply be more prone to focusing on positive features and others on negative features,” Hepler said. …

“This surprising and novel discovery expands attitude theory by demonstrating that an attitude is not simply a function of an object’s properties, but it is also a function of the properties of the individual who evaluates the object,” concluded Hepler and Albarracín. “Overall, the present research provides clear support for the dispositional attitude as a meaningful construct that has important implications for attitude theory and research.”
Abstract: We hypothesized that individuals may differ in the dispositional tendency to have positive versus negative attitudes, a trait termed the dispositional attitude. Across four studies, we developed a 16-item Dispositional Attitude Measure (DAM) and investigated its internal consistency, test-retest reliability, factor structure, convergent validity, discriminant validity, and predictive validity. DAM scores were (a) positively correlated with positive affect traits, curiosity-related traits, and individual pre-existing attitudes, (b) negatively correlated with negative affect traits, and (c) uncorrelated with theoretically unrelated traits. Dispositional attitudes also significantly predicted the valence of novel attitudes while controlling for theoretically relevant traits (such as the big-five and optimism). The dispositional attitude construct represents a new perspective in which attitudes are not simply a function of the properties of the stimuli under consideration, but are also a function of the properties of the evaluator. We discuss the intriguing implications of dispositional attitudes for many areas of research, including attitude formation, persuasion, and behavior prediction.

“Preoccupied” and “fearful” types use Facebook for partner surveillance

August 23, 2013

Yet another Facebook survey. This time to try and discern types of people who use Facebook to monitor their partners. It’s all data I suppose. But I’m not sure if a plethora of little surveys such as this one (328 college students surveyed) allows greater insight or just muddies the ever expanding pool of “data”.

It is probably advisable to keep a large bucket of salt handy when looking at the conclusions of Facebook surveys.

In any case there are some new terms to show up my ignorance. IES stands for Interpersonal Electronic Surveillance and SNS stands for Social Networking Sites. There are apparently four distinct attachment styles: secure, preoccupied, dismissing, and fearful. Relationship uncertainty is also a parameter to bear in mind.

The authors surveyed 328 college students who were Facebook users and tested 3 hypotheses:

  1. H1: Higher levels of relationship uncertainty will be associated with greater IES of the current or ex-partner.
  2. H2: Preoccupied individuals will report greater relationship uncertainty than secure, dismissing, or fearful individuals.
  3. H3: Preoccupied individuals will report greater IES than secure, dismissing, or fearful individuals.

It seems that Relationship Uncertainty – surprisingly – was not a predictor of IES. However, preoccupied and fearful types were more likely to carry out surveillance of their partners via Facebook.

Jesse Fox and Katie M. Warber, Social Networking Sites in Romantic Relationships: Attachment, Uncertainty, and Partner Surveillance on Facebook, Cyberpsychology, Behavior, and Social Networking, doi:10.1089/cyber.2012.0667

Abstract: Social networking sites serve as both a source of information and a source of tension between romantic partners. Previous studies have investigated the use of Facebook for monitoring former and current romantic partners, but why certain individuals engage in this behavior has not been fully explained. College students (N=328) participated in an online survey that examined two potential explanatory variables for interpersonal electronic surveillance (IES) of romantic partners: attachment style and relational uncertainty. Attachment style predicted both uncertainty and IES, with preoccupieds and fearfuls reporting the highest levels. Uncertainty did not predict IES, however. Future directions for research on romantic relationships and online surveillance are explored.

It will not perhaps come as a complete surprise that for preoccupied and fearful types, their surveillance of their partners may well reinforce their preoccupations and their fears.

From Discussion:

This study contributed to recent research on attachment and new media technologies, and revealed that attachment theory is an effective framework for understanding interpersonal electronic surveillance between romantic partners and ex-partners on Facebook. Likely due to their high levels of relationship anxiety, preoccupied and fearful individuals experienced the highest levels of relational uncertainty and engaged in the highest levels of IES. Previous studies have noted the prevalence of using Facebook to monitor partners, and this study shed light on those findings by recognizing the role of attachment style in this process. It is important to recognize who engages in IES because it may affect levels of satisfaction, stability, and security within the relationship. Preoccupied and fearful individuals often identify or create problems in their relationship due to their levels of anxiety. Given the additional information available about one’s partner and their social interactions, Facebook may exacerbate preoccupieds’ and fearfuls’ anxiety about the relationship. For example, they might be more likely to interpret ambiguous content on Facebook in a negative way, which may create conflict or strain the relationship. …

… The lack of a relationship between uncertainty and IES was surprising. However, Muise et al. also found no relationship between relational uncertainty and Facebook-related jealousy. This finding may be an artifact of the sample, however; many college students may perceive their relationships as transient. Thus, although they are uncertain about the relationship, it may not concern them or influence their Facebook behaviors. Future studies should investigate different variables such as the desire to be in a relationship with the partner.

It was interesting that preoccupieds did not differ from fearful individuals in their levels of uncertainty or IES, but it may be because it is attachment-related anxiety rather than avoidance that predicts these outcomes. Our findings mirror previous studies on attachment which have shown that anxious attachment leads to more distress and partner monitoring after breakups. Facebook may appeal to these two types for different reasons. Preoccupieds might feel more control and closeness by using Facebook. Because fearfuls are both anxious and avoidant, Facebook may provide them with the perfect opportunity to monitor the partner and perceived relational threats passively without having to interact with or confront him or her directly. Future research should investigate different attachment styles’ motivations to engage in IES.

Social psychology may be rigorous but it is not a science

August 18, 2013

Scientific American carries an article by a budding psychologist who is upset that many don’t accept that it is a science – but I think she protests too much. I have no doubt that many social psychologists study their discipline with great rigour. And so they should. (And I accept the rigour of most of the researchers in this field, notwithstanding the publicity-seeking, high-profile fraudsters such as Stapel and Hauser who did not.)

But it is not any lack of rigour which makes psychology “not a science”. It is the fact that we just don’t know enough about the forces driving our sensory perceptions and our subsequent behaviour (via biochemistry in the body and the brain) to be able to formulate proper falsifiable hypotheses.  Behaviour is fascinating and many of the empirical studies trying to pin down the causes and effects are brilliantly conceived and carried out. But behaviour is complicated and we don’t know the drivers. Inevitably measurement is complicated and messy.

Even the alchemists made rigorous measurements. But they never knew enough to elevate alchemy to a science. And so it is with psychology, and with social psychology in particular. We are waiting for the body of evidence to grow and for the insight of a John Dalton or an Antoine Lavoisier to lift psychology from an alchemy-like art to the true level of a science.

Her article is interesting but a little too defensive. And she misses the point. Just having rigour in measurement is insufficient to make an art into a science.

Psychology’s brilliant, beautiful, scientific messiness

 Melanie Tannenbaum

Melanie Tannenbaum is a doctoral candidate in social psychology at the University of Illinois at Urbana-Champaign, where she received an M.A. in social psychology in 2011. Her research focuses on the science of persuasion & motivation regarding political, health-related, and environmental behavior.

Today, sitting down to my Twitter feed, I saw a new link to Dr. Alex Berezow’s old piece on why psychology cannot call itself a science. The piece itself is over a year old, but seeing it linked again today brought up old, angry feelings that I never had the chance to publicly address when the editorial was first published. Others, like Dave Nussbaum, have already done a good job of dismantling the critiques in this article, but the fact that people are still linking to this piece (and that other pieces, even elsewhere on the SciAm Network, are still echoing these same criticisms) means that one thing apparently cannot be said enough:

Psychology is a science.

Shut up about how it’s not, already.

But she gets it almost right in her last paragraph. Indeed psychology is still an art – but that is not additional to its being a science (by definition).

… The thought, the creativity, the pure brilliance that goes into finding measurable, testable proxies for “fuzzy concepts” so we can experimentally control those indicators and find ways to step closer, every day, towards scientifically studying these abstractions that we once thought we would never be able to study — that’s beautiful. Quite frankly, it’s not just science — it’s an art. And often times, the means that scientists devise to help them step closer and closer towards approximating these abstract concepts, finding different facets to measure or different ways to conceptualize our thoughts, feelings, and behaviors? That process alone is so brilliant, so tricky, and so critical that it’s often worth receiving just as much press time as the findings themselves.

To keep psychology in the realm of art rather than science is not to demean the discipline or to attack the rigour of those working in the field. And maybe psychologists should consider why they get so upset at being called artists rather than scientists, and why they wish to be perceived as something they are not.

There is much of the study of psychology which is brilliant and beautiful and messy – but it is not a science – yet.

Social media anonymity encourages and nurtures the herd mentality

August 14, 2013

It seems to me that the anonymity afforded by social media encourages and nurtures the “herd” mentality in human behaviour. A herd mentality is the essence of “mob behaviour”, and it would seem that social media – like mobs – remove or suppress the controls and judgement calls that individual behaviour is usually subject to. I suspect that it is the available anonymity, combined with the potential for a “flash, online crowd”, which contributes to reaching the “critical mass” needed for an “unthinking mob” to form.

Mob behaviour is characterised by being reactive, with individuals trying to “outdo” the behaviour of their fellows under the cover of anonymity. But it needs a sufficient number of individuals, reaching some critical mass, to qualify as a mob. It is visible in the positive sense during rapturous calls for an encore after a concert and in the reaction to high oratory. It shows up in a negative way in the behaviour of a lynch mob or in the reaction to the speech of a demagogue. It has shown up in the online “mob-bullying” by social media of some vulnerable teenagers, which has even led to their suicides. And it shows up with the internet trolls hovering on the fringes, looking for a “mob” to join online.

A member of a mob gains anonymity in the crowd, and his individual actions – while contributing to the behaviour of the mob as a whole – are no longer identifiable as the actions of a specific individual. More importantly, the individual’s behaviour is not subject to identification or sanction. Just as with a stampeding herd of impala being chased by a predator, it is anonymity, and running faster than your neighbour while still staying within the mob, which provides this perception of protection. It is this feeling of being protected – I think – which switches off the normal human need for risk assessment and rational judgement before action, and which shifts behaviour away from the conscious plane. Aping and “outdoing” your “neighbour” from within the mob is then prioritised over the exercise of mind and judgement.

A new study shows that what we “like” on social media clearly exhibits a “herd mentality” and depends mainly on what others before us and around us have “liked”. It seems that random “dislikes” however are compensated for.

Lev Muchnik, Sinan Aral and Sean J. Taylor, Social influence bias: a randomized experiment. Science. Vol. 341, 9 August 2013, p. 647. doi: 10.1126/science.1240466

(The paper is paywalled but there is a related discussion here  with the authors about “The effect of free access on the diffusion of scholarly ideas”)

Abstract: Our society is increasingly relying on the digitized, aggregated opinions of others to make decisions. We therefore designed and analyzed a large-scale randomized experiment on a social news aggregation Web site to investigate whether knowledge of such aggregates distorts decision-making. Prior ratings created significant bias in individual rating behavior, and positive and negative social influences created asymmetric herding effects. Whereas negative social influence inspired users to correct manipulated ratings, positive social influence increased the likelihood of positive ratings by 32% and created accumulating positive herding that increased final ratings by 25% on average. This positive herding was topic-dependent and affected by whether individuals were viewing the opinions of friends or enemies. A mixture of changing opinion and greater turnout under both manipulations together with a natural tendency to up-vote on the site combined to create the herding effects. Such findings will help interpret collective judgment accurately and avoid social influence bias in collective intelligence in the future.

ScienceNews writes:

When rating things online, people tend to follow the herd. A single random “like” can influence a comment’s score at a social news site, researchers report in the Aug. 9 Science.

Users of the site discuss news articles and rate each other’s comments with “up votes” (positive ratings) and “down votes” (negative ratings). Votes affect each comment’s overall score. To test whether previous ratings sway users, Sinan Aral of MIT and colleagues randomly assigned all comments submitted to the site over a five-month period an up vote, a down vote or no vote.

An unearned up vote packed a surprising punch. The first person to view a randomly liked comment was 32 percent more likely to rate it positively than to do the same with a comment that had received no vote. In the long run, boosted comments’ final scores were 25 percent higher than scores of untouched comments. Random negative votes did not affect a comment’s final rating because users compensated with extra up votes.

The findings may help researchers analyze herding behavior or manipulation in other kinds of rating systems, including electoral polls and stock market predictions, the authors suggest.
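The asymmetry described above can be illustrated with a toy random-walk simulation: a comment is seeded with an up vote, a down vote, or nothing, and subsequent raters herd on positive scores but correct negative ones. This is only a sketch, not the authors’ actual model; all the probabilities below are made-up assumptions chosen to mimic the reported pattern.

```python
import random

def simulate_comment(seed_vote, n_raters=50, rng=random):
    """Final score of one comment seeded with +1 (up), -1 (down) or 0 (none).

    Each of n_raters votes +1 or -1. The up-vote probability depends on
    the current score: positive scores attract further up votes (herding),
    while negative scores provoke strong corrective up votes, so seeded
    down votes tend to get neutralised. Probabilities are illustrative.
    """
    score = seed_vote
    for _ in range(n_raters):
        if score > 0:
            p_up = 0.70   # herding: a visible positive score attracts up votes
        elif score < 0:
            p_up = 0.95   # correction: users push a negative score back up
        else:
            p_up = 0.50   # no signal: a coin flip
        score += 1 if rng.random() < p_up else -1
    return score

def average_final_score(seed_vote, n_comments=3000, seed=42):
    """Mean final score over many simulated comments with the same seed vote."""
    rng = random.Random(seed)
    total = sum(simulate_comment(seed_vote, rng=rng) for _ in range(n_comments))
    return total / n_comments
```

Run over a few thousand simulated comments, the up-seeded group ends well above the control group, while the down-seeded group finishes close to control: the correction pressure cancels the manipulated down vote, reproducing (qualitatively) the asymmetric herding the experiment found.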
