Archive for the ‘Social Science’ Category

Social class is necessary for any society and does not have to be unjust

September 28, 2016

One of the great “politically correct” myths is that people are born equal. A fundamental strength of the human race is that we are all unique individuals and not identical copies rolling off a production line. Our genes fix the envelope of our potential capabilities, and our upbringing determines to what extent we fulfill that potential. The fundamental fallacy in Marxist theory lies in the assumption that a classless society is desirable. In fact it is not even possible. Forcing or coercing unequal people to be “equal” is always unjust.

Some have considered class a “necessary evil”, and most social theories assume that having classes is, in itself, unfair and a “bad thing”. But there is no example of a successful (sustainable and growing) society which has not had social classes of some kind. Ruling and ruled, rich and poor, aristocracy and peasants, masters and slaves, the political class and the great unwashed, workers and bosses, union members and others, employers and employees, producers and consumers, Brahmins and Dalits. The Guilds were about capability and competence to begin with, but later became contaminated when they became “closed”. Secret societies grew to try to create new classes which cut across other class boundaries. My hypothesis is that in any society, the inherent variations in human capabilities and competences make social classes both inevitable and necessary. Human diversity (genetic and epigenetic) is (I assume) a fundamental component of the success of the human race (again defined as being sustainable and growing). That diversity is what makes people unequal. The inequality is not, in itself, unjust. It just is. We are not clones – thank goodness.

Since the French Revolution, “égalité” has been made into a fashionable – but false – god. A search for “equality” is not just incompatible with a search for justice; it is opposed to it. It is just for a sick person to receive more care than a healthy person, but it is unequal. Affirmative action may be one way of approaching fairness – but it is unequal. The “better man wins” in the Olympics is a celebration of the inherent inequality among humans. If we wanted equality of result, Usain Bolt would have to be handicapped (about 10 m would do). The capitalist goal of “to each as he deserves” and the socialist objective of “to each as he needs” are both expressions which inherently acknowledge the reality of inequality. They both seek their definitions of what is just – not what is equal.

The real issue is not, I think, to seek a classless condition – which would cause society to break down – but to achieve classes which are not unjust. Classes will appear as a natural consequence of humans being gregarious. The real solution, which may well have to be a dynamic solution to fit the times, is to design the class system to be used, rather than to let one appear by default. Most of the perceived injustices of class are connected to the classes being hereditary or to movement between the classes being forbidden. The Indian caste system is grossly unjust because it is both hereditary and forbids movement between castes. Having a class system does not necessitate oppression or injustice. At any given time, even the much vaunted “open” Swedish society has its functioning classes, but to its credit – and even though there is a not insignificant hereditary component – movement of individuals across class lines is possible, regular and continuous.

The real question is what attributes to use in defining classes which help a society to function and which are not unjust. Classification cannot be along purely hereditary lines, and it cannot be based on wealth alone. However, any class system must be able to accommodate the realities of ancestry and wealth. Parents will always seek to give their children an advantage, and wealth will always be able to purchase more. Whatever classes we invent must be capable of juxtaposing different levels of wealth within each class and must allow membership from any parentage. It should be possible to move from one class to another.

My choice of class system would then be one where the classes themselves did not create a hierarchy and where the main classification criterion would be the predominant, gainful occupation of the individual. Each class would have its share of rich and poor, idiots and geniuses, and its share of parasites. Classification would not take place until the prefrontal cortex was fully developed (at 25). Everybody under 25 would be a non-adult and classless. Marriage across class lines would be permitted. Voting would be restricted to adults.

I think 5 classes will do.

[Figure: the five proposed classes]


 


Over half of psychology papers are just “psychobabble”

August 28, 2015

A new research report published in Science only confirms the perception that much of what passes for the science of psychology is still mainly the views and prejudices of the publishing psychologists. The studies in over half of the 100 papers investigated could not be reproduced. Even among those found to be reproducible, the significance of the results had been exaggerated.

……. the researchers found that some of the attempted replications even produced the opposite effect to the one originally reported.

This discipline is surely a valid field of study but is still a long way from being a science.

Estimating the reproducibility of psychological science, Science, 28 August 2015, Vol. 349, no. 6251, DOI: 10.1126/science.aac4716

Abstract: Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. Replication effects were half the magnitude of original effects, representing a substantial decline. Ninety-seven percent of original studies had statistically significant results. Thirty-six percent of replications had statistically significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.
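One of the replication criteria in the abstract – whether the original effect size falls inside the 95% confidence interval of the replication effect size – is easy to make concrete. Here is a minimal sketch, assuming correlation-type effect sizes compared on the Fisher z scale; the helper name and the numbers are illustrative, not taken from the paper:

```python
import math

def original_in_replication_ci(r_orig, r_rep, n_rep):
    """Is the original correlation inside the replication's 95% CI?

    Effect sizes are compared on the Fisher z scale, where the
    standard error depends only on the replication sample size.
    """
    z_rep = math.atanh(r_rep)            # Fisher z of the replication effect
    se = 1.0 / math.sqrt(n_rep - 3)      # standard error of z for a correlation
    lo, hi = z_rep - 1.96 * se, z_rep + 1.96 * se
    return lo <= math.atanh(r_orig) <= hi

# Illustrative values: a typical original effect (r = .40) against a
# weaker replication (r = .18, n = 120) fails this criterion.
print(original_in_replication_ci(r_orig=0.40, r_rep=0.18, n_rep=120))  # False
```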

The Independent comments:

More than half of the findings from 100 different studies published in leading, peer-reviewed psychology journals cannot be reproduced by other researchers who followed the same methodological protocol.

A study by more than 270 researchers from around the world has found that just 39 per cent of the claims made in psychology papers published in three prominent journals could be reproduced unambiguously – and even then they were found to be less significant statistically than the original findings. ……. 

……… Professor Nosek said that there is often a contradiction between the incentives and motives of researchers – whether in psychology or other fields of science – and the need to ensure that their research findings can be reproduced by other scientists.

“Scientists aim to contribute reliable knowledge, but also need to produce results that help them keep their job as a researcher. To thrive in science, researchers need to earn publications, and some kind of results are easier to publish than others, particularly ones that are novel and show unexpected or exciting new directions,” he said.

However, the researchers found that some of the attempted replications even produced the opposite effect to the one originally reported. Many psychological associations and journals are not trying to improve reproducibility and openness, the researchers said.

“This very well done study shows that psychology has nothing to be proud of when it comes to replication,” Charles Gallistel, president of the Association for Psychological Science, told Science.

We have professional psychologists who get paid for their theories, and we have paid amateur psychologists (agony aunts in the newspapers, TV and radio psychologists, talk-show pundits and the like) who get paid for providing entertainment. And then we have all the rest of us, who each believe we have insights into the human mind and human behaviour but don’t get paid for it.

We haven’t come so far from witch-doctors and shamans.

 

Conclusion that Förster manipulated data is “unavoidable”

May 8, 2014

Retraction Watch has now obtained and translated the report of the investigation by the Dutch National Board for Scientific Integrity (LOWI) into the suspicions about Jens Förster’s research. The conclusion is unavoidable: data manipulation must have taken place and could not have been the result of “sloppy science”.

Here are some of the highlights from the document, which we’ve had translated by a Dutch speaker:

“According to the LOWI, the conclusion that manipulation of the research data has taken place is unavoidable […] intervention must have taken place in the presentation of the results of the … experiment”

“The analyses of the expert … did add to [the Complainant’s report] that these linear patterns were specific for the primary analyses in the … article and did not show in subsets of the data, such as gender groups. [The expert stated that] only goal-oriented intervention in the entire dataset could have led this result.”

“There is an absence of any form of accountability of the raw data files, as these were collected with questionnaires, and [there is an] absence of a convincing account by the Accused of the way in which the data of the experiments in the previous period were collected and processed.”

“[T]he assistants were not aware of the goal and hypotheses of the experiments [and] the LOWI excludes the possibility that the many research assistants, who were involved in the data collection in these experiments, could have created such exceptional statistical relations.”

What is particularly intriguing is the method of statistical investigation that was applied. Suspicions arose not only because the data showed a remarkable linearity but also because sub-sets of the data did not. The first suggests confirmation bias (cherry-picking), but the second brings data manipulation into play. Non-linearities in sub-sets of the data cannot just neatly cancel out to give – fortuitously for the hypothesis – linearity in the complete data set. The investigation’s methods are of more lasting value than the Förster paper being retracted.
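The underlying test is simple enough to sketch. With three ordered conditions, the contrast (1, -2, 1) measures how far the middle mean departs from the straight line through the outer two; an F value near zero – a tiny left-tailed p – flags means that are “too linear” to be ordinary sampled data. A minimal sketch, assuming equal cell sizes and a pooled within-cell variance (this illustrates the idea and is not the investigators’ actual code):

```python
from scipy import stats

def linearity_left_tail_p(means, sds, n):
    """Left-tailed p for the nonlinearity F test over three condition means.

    means, sds: length-3 lists of condition means and SDs; n: per-cell size.
    A very small return value flags means suspiciously close to a line.
    """
    m1, m2, m3 = means
    contrast = m1 - 2 * m2 + m3                  # nonlinearity contrast (1, -2, 1)
    ss_nonlin = contrast ** 2 / (6 / n)          # sum(c_i^2 / n_i) = 6/n for equal cells
    ms_within = sum(sd ** 2 for sd in sds) / 3   # pooled within-cell variance
    F = ss_nonlin / ms_within
    return stats.f.cdf(F, 1, 3 * n - 3)          # left tail: small F = very linear

# Means almost exactly on a line give a small left-tailed p (~0.03 here);
# the report combined such probabilities across dozens of samples.
print(linearity_left_tail_p([2.0, 3.01, 4.0], [1.1, 1.0, 1.2], 20))
```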

I have an aversion to “science” based on questionnaires and “primed” subjects. It is almost as bad as the social psychology studies carried out on the basis of Facebook or Twitter responses. They give results which can rarely be replicated. (I have an inherent suspicion of questionnaires due to my own “nasty” habit of “messing around” with my responses – especially when I am bored or when the questionnaire is a marketing or political study.)

Psychology Today:

Of course priming works— it couldn’t not work. But the lack of control over the information contained in social priming experiments guarantees unreliable outcomes for specific examples.  ..  

This gets worse because social priming studies are typically between-subject designs, and (shock!) different people are even more different from each other than the same people at different times! 

Then there’s also the issue of whether the social primes used across replications are, in fact, the same. It is currently impossible to be sure, because there is no strong theory of what the information is for these primes. In more straight forward perceptual priming (see below) if I present the same stimulus twice I know I’ve presented the same stimulus twice. But the meaning of social information depends not only on what the stimulus is but also who’s giving it and their relationship to the person receiving it, not to mention the state that person is in.

… In social priming, therefore, replicating the design and the stimuli doesn’t actually mean you’ve run the same study. The people are different and there’s just no way to make sure they are all experiencing the same social stimulus, the same information

And results from such studies, if they cannot be replicated – even if they are the honest results of the study – have no applicability to anything wider than that study.

Förster (continued) – Linearity of data had a 1 in 508×10^18 probability of not being manipulated

May 1, 2014

The report from 2012 detailing the suspicions of manufactured data in 3 of Jens Förster’s papers has now become available: förster 2012 report – eng

The Abstract reads:

Here we analyze results from three recent papers (2009, 2011, 2012) by Dr. Jens Förster from the Psychology Department of the University of Amsterdam. These papers report 40 experiments involving a total of 2284 participants (2242 of which were undergraduates). We apply an F test based on descriptive statistics to test for linearity of means across three levels of the experimental design. Results show that in the vast majority of the 42 independent samples so analyzed, means are unusually close to a linear trend. Combined left-tailed probabilities are 0.000000008, 0.0000004, and 0.000000006, for the three papers, respectively. The combined left-tailed p-value of the entire set is p = 1.96 × 10^-21, which corresponds to finding such consistent results (or more consistent results) in one out of 508 trillion (508,000,000,000,000,000,000). Such a level of linearity is extremely unlikely to have arisen from standard sampling. We also found overly consistent results across independent replications in two of the papers. As a control group, we analyze the linearity of results in 10 papers by other authors in the same area. These papers differ strongly from those by Dr. Förster in terms of linearity of effects and the effect sizes. We also note that none of the 2284 participants showed any missing data, dropped out during data collection, or expressed awareness of the deceit used in the experiment, which is atypical for psychological experiments. Combined these results cast serious doubt on the nature of the results reported by Dr. Förster and warrant an investigation of the source and nature of the data he presented in these and other papers.
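A quick arithmetic check, using nothing from the report beyond the quoted combined p-value: inverting p = 1.96 × 10^-21 gives the “1 in 508×10^18” of the headline (it is the written-out digits in the abstract, rather than the word “trillion”, that match):

```python
# Sanity check: invert the combined left-tailed p-value quoted above.
p_combined = 1.96e-21
print(f"1 in {1 / p_combined:.3g}")  # -> "1 in 5.1e+20", i.e. about 508 x 10^18
```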

Förster’s primary thesis in the 3 papers under suspicion is that the global versus local models for perception and processing which have been studied and applied for vision are also valid for the other senses.

1. Förster, J. (2009). Relations Between Perceptual and Conceptual Scope: How Global Versus Local Processing Fits a Focus on Similarity Versus Dissimilarity. Journal of Experimental Psychology: General, 138, 88-111.

2. Förster, J. (2011). Local and Global Cross-Modal Influences Between Vision and Hearing, Tasting, Smelling, or Touching. Journal of Experimental Psychology: General, 140, 364-389.

The University of Amsterdam investigation has called for the third paper to be retracted:

3. Förster, J. & Denzler, M. (2012). Sense Creative! The Impact of Global and Local Vision, Hearing, Touching, Tasting and Smelling on Creative and Analytic Thought. Social Psychological and Personality Science, 3, 108-117 (The full paper is here: Social Psychological and Personality Science-2012-Förster-108-17)

Abstract: Holistic (global) versus elemental (local) perception reflects a prominent distinction in psychology; however, so far it has almost entirely been examined in the domain of vision. Current work suggests that global/local processing styles operate across sensory modalities. As for vision, it is assumed that global processing broadens mental categories in memory, enhancing creativity. Furthermore, local processing should support performance in analytic tasks. Throughout 12 separate studies, participants were asked to look at, listen to, touch, taste or smell details of objects, or to perceive them as wholes. Global processing increased category breadth and creative relative to analytic performance, whereas for local processing the opposite was true. Results suggest that the way we taste, smell, touch, listen to, or look at events affects complex cognition, reflecting procedural embodiment effects.

My assumption is that if the data have been manipulated, it is probably a case of “confirmation bias”. Global versus local perception is not that easy to define or study for the senses other than vision – which is probably why they had not been studied. The data may therefore have been “manufactured” to conform with the hypothesis that “the way we taste, smell, touch, listen to, or look at events does affect complex cognition, and global processing increases category breadth and creativity relative to analytic performance, whereas local processing decreases them”. The hypothesis becomes the result.

Distinctions between global and local perceptions of hearing are not improbable. But for taste? And for smell and touch? My perception of the field of social psychology (which is still a long way from being a science) is that far too often improbable hypotheses are dreamed up for the effect they have (not least in the media). Data – nearly always from sampling groups of individuals – are then found/manipulated/created to “prove” the hypotheses rather than to disprove them.

My perceptions are not altered when I see results from paper 3 like these:

Our findings may have implications for our daily behaviors. Some objects or people in the real world may unconsciously affect our cognition by triggering global or local processing styles; while some may naturally guide our attention to salient details (e.g., a spot on a jacket, a strong scent of coriander in a soup), others may motivate us to focus on the gestalt (e.g., because they are balanced and no special features stand out). It might be the case then that differences in the composition of dishes, aromas, and other mundane events influence our behavior. We might for example attend more to the local details of the answers by an interview candidate if he wears a bright pink tie, or we may start to become more creative upon tasting a balanced wine. This is because our attention to details versus gestalts triggers different systems that process information in different ways.

The description of the methods used in the paper gives me no sense of any scientific rigour – especially those regarding smell – and I find the entire “experimental method” quite unconvincing.

Participants were seated in individual booths and were instructed to recognize materials by smelling them. A pretest reported in Förster (2011) led to the choice (of) tangerines, fresh soil, and chocolate, which were rated as easily recognizable and neutral to positive in valence (both when given as a mixture but also when given alone). After each trial, participants were asked to wait 1 minute before smelling the next sample. In Study 10a, in the global condition, participants were presented with three small bowls containing a mixture of all three components; whereas in the local condition, the participants were presented with three small bowls, each containing one of the three different ingredients. In the control condition, they had to smell two bowls of mixes and two bowls with pure ingredients (tangerines and soil) in random order.

A science it is certainly not.

Another case of data manipulation, another Dutch psychology scandal

April 30, 2014

UPDATE!

Jens Förster denies the claims of misconduct and has sent an email defending himself to Retraction Watch.

============================

One would have thought the credentials of social psychology as a science – after Diederik Stapel, Dirk Smeesters and Marc Hauser – could not fall much lower. But data manipulation in social psychology would seem to be a bottomless pit.

Another case of data manipulation by social psychologists has erupted at the University of Amsterdam, this time involving Jens Förster, professor of social psychology at the University of Amsterdam, and his colleague Markus Denzler.

Retraction Watch: 

The University of Amsterdam has called for the retraction of a 2011 paper by two psychology researchers after a school investigation concluded that the article contained bogus data, the Dutch press are reporting.

The paper, “Sense Creative! The Impact of Global and Local Vision, Hearing, Touching, Tasting and Smelling on Creative and Analytic Thought,” was written by Jens Förster and Markus Denzler  and published in Social Psychological & Personality Science. ….

Professor Jens Förster

Jens Förster is no lightweight, apparently. He is supposed to have research interests in the principles of motivation. Throughout my own career the practice of motivation in the workplace has been a special interest, and I have read some of his papers. Now I feel let down. I have a theory that one of the primary motivators of social psychologists in academia is a narcissistic urge for media attention. No shortage of ego. And I note that as part of his webpage detailing his academic accomplishments he also feels it necessary to highlight his TV appearances!

Television Appearances (Selection) 

Nachtcafé (SWR), Buten & Binnen (RB), Hermann & Tietjen (NDR), Euroland (SWF), Menschen der Woche (SWF), Die große Show der Naturwunder (ARD), Quarks & Co (WDR), Plasberg persönlich (WDR), Im Palais (RBB), Westart (WDR)

They love being Darlings of the media and the media oblige!

As a commenter on Retraction Watch points out, Förster also doubles as a cabaret artist! Perhaps he sees his academic endeavours also as a form of entertaining the public.

Rolf Degen: I hope that this will not escalate, as this could get ugly for the field of psychology. Jens Förster, a German, is a bigger name than Stapel ever was. He was repeatedly portrayed in the German media, not the least because of his second calling as a singer and a cabaret artist, and he has published an enormous amount of books, studies and review papers, all high quality stuff

This revelation occurs at a bad time for Förster, write the Dutch media. He is due to take up a “Humboldt professorship” from June 1, having been awarded five million euros to do research at a German university over the next five years. He is also supposed to cooperate with Jürgen Margraf, who is the President of the “German Society for Psychology” and as such the highest-ranking German psychologist.

Idiot paper of the day: “Math Anxiety and Exposure to Statistics in Messages About Genetically Modified Foods”

February 28, 2014

Roxanne L. Parrott is the Distinguished Professor of Communication Arts and Sciences at Penn State. Reading about this paper is not going to get me to read the whole paper anytime soon. The study the paper is based on is – to my mind – to the discredit of both Penn State and the state of being “Distinguished”.

I am not sure what it is but it is not Science.

Kami J. Silk, Roxanne L. Parrott. Math Anxiety and Exposure to Statistics in Messages About Genetically Modified Foods: Effects of Numeracy, Math Self-Efficacy, and Form of Presentation. Journal of Health Communication, 2014, DOI: 10.1080/10810730.2013.837549

From the Abstract:

… To advance theoretical and applied understanding regarding health message processing, the authors consider the role of math anxiety, including the effects of math self-efficacy, numeracy, and form of presenting statistics on math anxiety, and the potential effects for comprehension, yielding, and behavioral intentions. The authors also examine math anxiety in a health risk context through an evaluation of the effects of exposure to a message about genetically modified foods on levels of math anxiety. Participants (N = 323) were randomly assigned to read a message that varied the presentation of statistical evidence about potential risks associated with genetically modified foods. Findings reveal that exposure increased levels of math anxiety, with increases in math anxiety limiting yielding. Moreover, math anxiety impaired comprehension but was mediated by perceivers’ math confidence and skills. Last, math anxiety facilitated behavioral intentions. Participants who received a text-based message with percentages were more likely to yield than participants who received either a bar graph with percentages or a combined form. … 

Penn State has put out a press release:

The researchers, who reported their findings in the online issue of the Journal of Health Communication, recruited 323 university students for the study. The participants were randomly assigned a message that was altered to contain one of three different ways of presenting the statistics: a text with percentages, bar graph and both text and graphs. The statistics were related to three different messages on genetically modified foods, including the results of an animal study, a Brazil nut study and a food recall announcement.

Wow! The effort involved in getting all of 323 students to participate boggles the mind. And taking math anxiety as a critical behavioural factor stretches the bounds of rational thought. Could they find nothing better to do? This study is at the edges of academic misconduct.

“This is the first study that we know of to take math anxiety to a health and risk setting,” said Parrott.

It ought also to be the last such idiot study – but I have no great hopes.

Moral in the morning, lying in the evening, cheating by suppertime…

October 30, 2013

Of course it is another paper demonstrating great insight into human behaviour with far-reaching conclusions. Needless to say, it is a hypothesis dreamed up by social psychologists.

Is it good science? Unlikely. Is it trivial? Undoubtedly. Does it provide real empirical data? Yes. Is it relevant? Hardly.

Is it even science?  

Maryam Kouchaki and Isaac H. Smith, The Morning Morality Effect: The Influence of Time of Day on Unethical Behavior. Psychological Science, October 28, 2013, doi: 10.1177/0956797613498099

Kouchaki is a post-doctoral research fellow at Harvard University and completed her doctoral studies at the University of Utah, where Smith is a current doctoral student. Kouchaki has been involved with a previous “priming” study about the effect of thinking about money on morality. And as is now well known, most “priming” studies are highly suspect.

It is not for nothing that the APS journal Psychological Science is the highest-ranked empirical journal in psychology.

The authors conducted experiments on college-age participants and on a sample of on-line participants:

  1. … college-age participants were shown various patterns of dots on a computer. For each pattern, they were asked to identify whether more dots were displayed on the left or right side of the screen. Importantly, participants were not given money for getting correct answers, but were instead given money based on which side of the screen they determined had more dots; they were paid 10 times the amount for selecting the right over the left. Participants therefore had a financial incentive to select the right, even if there were unmistakably more dots on the left, which would be a case of clear cheating.
  2. … also tested participants’ moral awareness in both the morning and afternoon by presenting them with word fragments such as “_ _RAL” and “E_ _ _ C_ _”

Their results showed that in line with their hypothesis, participants tested between 8:00 am and 12:00 pm were less likely to cheat than those tested between 12:00 pm and 6:00pm — a phenomenon the researchers call the “morning morality effect.” In the second experiment morning participants were more likely to form the words “moral” and “ethical,” whereas the afternoon participants tended to form the words “coral” and “effects,” lending further support to the morning morality effect.

Clearly the arduous field-work consisted of wandering around their dangerous college campus(es) soliciting subjects, and then spending many long nights on-line to get their “on-line” sample.

…. both undergraduate students and a sample of U.S. adults engaged in less unethical behavior (e.g., less lying and cheating) on tasks performed in the morning than on the same tasks performed in the afternoon. This morning morality effect was mediated by decreases in moral awareness and self-control in the afternoon. Furthermore, the effect of time of day on unethical behavior was found to be stronger for people with a lower propensity to morally disengage. These findings highlight a simple yet pervasive factor (i.e., the time of day) that has important implications for moral behavior.

Presumably a good afternoon nap could restore our moral behaviour in the evenings?

It seems to me that the hypothesis has been designed/invented primarily to grab headlines and to ensure publication.

Nature: “US behavioural researchers exaggerate findings”

August 27, 2013

The field of behaviour within social psychology has not covered itself with glory in recent times. The cases of Diederik Stapel, Dirk Smeesters and Marc Hauser are all too recent. But I have the perception that the entire field – globally – has been subject to exaggerations and the actions of narcissists. I had not perceived it as being a particular issue just for the US. But I would not be surprised if the “publish or perish” pressure is stronger in the US than in many other countries.

But a new study published in PNAS has “found that primary studies whose outcome included behavioral parameters were generally more likely to report extreme effects, and those with a corresponding author based in the US were more likely to deviate in the direction predicted by their experimental hypotheses, particularly when their outcome did not include additional biological parameters. Nonbehavioral studies showed no such ‘US effect’ …”

Fanelli, D. & Ioannidis, J. P. A., US studies may overestimate effect sizes in softer research, Proc. Natl Acad. Sci. USA (2013), doi.org/10.1073/pnas.1302997110

Nature reports:

US behavioural researchers have been handed a dubious distinction — they are more likely than their colleagues in other parts of the world to exaggerate findings, according to a study published today.

The research highlights the importance of unconscious biases that might affect research integrity, says Brian Martinson, a social scientist at the HealthPartners Institute for Education and Research in Minneapolis, Minnesota, who was not involved with the study. “The take-home here is that the ‘bad guy/good guy’ narrative — the idea that we only need to worry about the monsters out there who are making up data — is naive,” Martinson says.



The study, published in Proceedings of the National Academy of Sciences, was conducted by John Ioannidis, a physician at Stanford University in California, and Daniele Fanelli, an evolutionary biologist at the University of Edinburgh, UK. The pair examined 82 meta-analyses in genetics and psychiatry that collectively combined results from 1,174 individual studies. The researchers compared meta-analyses of studies based on non-behavioural parameters, such as physiological measurements, to those based on behavioural parameters, such as progression of dementia or depression.



The researchers then determined how well the strength of an observed result or effect reported in a given study agreed with that of the meta-analysis in which the study was included. They found that, worldwide, behavioural studies were more likely than non-behavioural studies to report ‘extreme effects’ — findings that deviated from the overall effects reported by the meta-analyses. And US-based behavioural researchers were more likely than behavioural researchers elsewhere to report extreme effects that deviated in favour of their starting hypotheses.



“We might call this a ‘US effect,’” Fanelli says. “Researchers in the United States tend to report, on average, slightly stronger results than researchers based elsewhere.”

This ‘US effect’ did not occur in non-behavioural research, and studies with both behavioural and non-behavioural components exhibited slightly less of the effect than purely behavioural research. Fanelli and Ioannidis interpret this finding to mean that US researchers are more likely to report strong effects, and that this tendency is more likely to show up in behavioural research, because researchers in these fields have more flexibility to make different methodological choices that produce more diverse results.

The study looked at a larger volume of research than has been examined in previous studies on bias in behavioural research, says Brian Nosek, a psychologist at the University of Virginia in Charlottesville. ….. 

Abstract

Many biases affect scientific research, causing a waste of resources, posing a threat to human health, and hampering scientific progress. These problems are hypothesized to be worsened by lack of consensus on theories and methods, by selective publication processes, and by career systems too heavily oriented toward productivity, such as those adopted in the United States (US). Here, we extracted 1,174 primary outcomes appearing in 82 meta-analyses published in health-related biological and behavioral research sampled from the Web of Science categories Genetics & Heredity and Psychiatry and measured how individual results deviated from the overall summary effect size within their respective meta-analysis. We found that primary studies whose outcome included behavioral parameters were generally more likely to report extreme effects, and those with a corresponding author based in the US were more likely to deviate in the direction predicted by their experimental hypotheses, particularly when their outcome did not include additional biological parameters. Nonbehavioral studies showed no such “US effect” and were subject mainly to sampling variance and small-study effects, which were stronger for non-US countries. Although this latter finding could be interpreted as a publication bias against non-US authors, the US effect observed in behavioral research is unlikely to be generated by editorial biases. Behavioral studies have lower methodological consensus and higher noise, making US researchers potentially more likely to express an underlying propensity to report strong and significant findings.

Related: Retraction Watch

Made-up science: “Liking” or “disliking” in general is a personality trait!

August 27, 2013

This comes into the category not of bad science but of what I would call “made-up science”, where something fairly trivial and obvious is made sufficiently complicated to be addressed by “scientific method”.

It is apparently called “dispositional attitude”, and there is a 16-item scale to measure an individual’s propensity to generally like or dislike any stimuli! “This surprising and novel discovery expands attitude theory by demonstrating that an attitude is not simply a function of an object’s properties, but it is also a function of the properties of the individual who evaluates the object.” So a “liker” likes everything and a “hater” hates everything!

“Dispositional attitude” seems neither surprising nor so very novel. It is not so very different from what has been called the “observer effect” in physics or the “actor-observer asymmetry” in attribution theory. It is unnecessarily trying to complicate what is little more than a cliché. Beauty – or liking or hating – lies in the eye of the beholder, and if your personality wears rose-coloured glasses – surprise, surprise – everything appears red.

Justin Hepler & Dolores Albarracín, “Attitudes without objects: Evidence for a dispositional attitude, its measurement, and its consequences,” J Pers Soc Psychol. 2013 Jun;104(6):1060-76. doi: 10.1037/a0032282. Epub 2013 Apr 15.

The Annenberg School for Communication at the University of Pennsylvania has come out with this press release:

New research has uncovered the reason why some people seem to dislike everything while others seem to like everything. Apparently, it’s all part of our individual personality – a dimension that researchers have coined “dispositional attitude.”
People with a positive dispositional attitude have a strong tendency to like things, whereas people with a negative dispositional attitude have a strong tendency to dislike things, according to research published in the Journal of Personality and Social Psychology. The journal article, “Attitudes without objects: Evidence for a dispositional attitude, its measurement, and its consequences,” was written by Justin Hepler, University of Illinois at Urbana-Champaign, and Dolores Albarracín, Ph.D., the Martin Fishbein Chair of Communication and Professor of Psychology at Penn.

“The dispositional attitude construct represents a new perspective in which attitudes are not simply a function of the properties of the stimuli under consideration, but are also a function of the properties of the evaluator,” wrote the authors. “[For example], at first glance, it may not seem useful to know someone’s feelings about architecture when assessing their feelings about health care. After all, health care and architecture are independent stimuli with unique sets of properties, so attitudes toward these objects should also be independent.”

However, they note, there is still one critical factor that an individual’s attitudes will have in common: the individual who formed the attitudes. “Some people may simply be more prone to focusing on positive features and others on negative features,” Hepler said. …..

“This surprising and novel discovery expands attitude theory by demonstrating that an attitude is not simply a function of an object’s properties, but it is also a function of the properties of the individual who evaluates the object,” concluded Hepler and Albarracín. “Overall, the present research provides clear support for the dispositional attitude as a meaningful construct that has important implications for attitude theory and research.”
Abstract:
We hypothesized that individuals may differ in the dispositional tendency to have positive versus negative attitudes, a trait termed the Dispositional Attitude. Across four studies, we developed a 16-item Dispositional Attitude Measure (DAM) and investigated its internal consistency, test-retest reliability, factor structure, convergent validity, discriminant validity, and predictive validity. DAM scores were (a) positively correlated with positive affect traits, curiosity-related traits, and individual pre-existing attitudes, (b) negatively correlated with negative affect traits, and (c) uncorrelated with theoretically unrelated traits. Dispositional attitudes also significantly predicted the valence of novel attitudes while controlling for theoretically relevant traits (such as the big-five and optimism). The dispositional attitude construct represents a new perspective in which attitudes are not simply a function of the properties of the stimuli under consideration, but are also a function of the properties of the evaluator. We discuss the intriguing implications of dispositional attitudes for many areas of research, including attitude formation, persuasion, and behavior prediction.

Social psychology may be rigorous but it is not a science

August 18, 2013

Scientific American carries an article by a budding psychologist who is upset that many don’t accept her field as a science – but I think she protests too much. I have no doubt that many social psychologists study their discipline with great rigour. And so they should. (And I accept the rigour of most of the researchers in this field, notwithstanding publicity-seeking, high-profile fraudsters such as Stapel and Hauser, who did not.)

But it is not any lack of rigour which makes psychology “not a science”. It is the fact that we just don’t know enough about the forces driving our sensory perceptions and our subsequent behaviour (via biochemistry in the body and the brain) to be able to formulate proper falsifiable hypotheses.  Behaviour is fascinating and many of the empirical studies trying to pin down the causes and effects are brilliantly conceived and carried out. But behaviour is complicated and we don’t know the drivers. Inevitably measurement is complicated and messy.

Even the alchemists made rigorous measurements. But they never knew enough to elevate alchemy to a science. And so it is with psychology, and with social psychology in particular. We are waiting for the body of evidence to grow and for the insight of a John Dalton or an Antoine Lavoisier to lift psychology from an alchemy-like art to the true level of a science.

Her article is interesting but a little too defensive. And she misses the point. Just having rigour in measurement is insufficient to make an art into a science.

Psychology’s brilliant, beautiful, scientific messiness

 Melanie Tannenbaum

Melanie Tannenbaum is a doctoral candidate in social psychology at the University of Illinois at Urbana-Champaign, where she received an M.A. in social psychology in 2011. Her research focuses on the science of persuasion & motivation regarding political, health-related, and environmental behavior.

Today, sitting down to my Twitter feed, I saw a new link to Dr. Alex Berezow’s old piece on why psychology cannot call itself a science. The piece itself is over a year old, but seeing it linked again today brought up old, angry feelings that I never had the chance to publicly address when the editorial was first published. Others, like Dave Nussbaum, have already done a good job of dismantling the critiques in this article, but the fact that people are still linking to this piece (and that other pieces, even elsewhere on the SciAm Network, are still echoing these same criticisms) means that one thing apparently cannot be said enough:

Psychology is a science.

Shut up about how it’s not, already.

But she gets it almost right in her last paragraph. Indeed psychology is still an art – but it is an art instead of, not in addition to, being a science (by definition).

.. The thought, the creativity, the pure brilliance that goes into finding measurable, testable proxies for “fuzzy concepts” so we can experimentally control those indicators and find ways to step closer, every day, towards scientifically studying these abstractions that we once thought we would never be able to study — that’s beautiful. Quite frankly, it’s not just science — it’s an art. And often times, the means that scientists devise to help them step closer and closer towards approximating these abstract concepts, finding different facets to measure or different ways to conceptualize our thoughts, feelings, and behaviors? That process alone is so brilliant, so tricky, and so critical that it’s often worth receiving just as much press time as the findings themselves.

To keep psychology in the realm of art rather than science is not to demean the discipline or to attack the rigour of those working in the field. And maybe psychologists should consider why they get so upset at being called artists rather than scientists, and why they wish to be perceived as something they are not.

There is much of the study of psychology which is brilliant and beautiful and messy – but it is not a science – yet.

