Posts Tagged ‘Confirmation bias’

Förster (continued) – Linearity of data had a 1 in 508×10^18 probability of not being manipulated

May 1, 2014

The report from 2012 detailing the suspicions of manufactured data in 3 of Jens Förster’s papers has now become available. förster 2012 report – eng

The Abstract reads:

Here we analyze results from three recent papers (2009, 2011, 2012) by Dr. Jens Förster from the Psychology Department of the University of Amsterdam. These papers report 40 experiments involving a total of 2284 participants (2242 of which were undergraduates). We apply an F test based on descriptive statistics to test for linearity of means across three levels of the experimental design. Results show that in the vast majority of the 42 independent samples so analyzed, means are unusually close to a linear trend. Combined left-tailed probabilities are 0.000000008, 0.0000004, and 0.000000006, for the three papers, respectively. The combined left-tailed p-value of the entire set is p = 1.96 × 10^-21, which corresponds to finding such consistent results (or more consistent results) in one trial out of 508 × 10^18 (508,000,000,000,000,000,000). Such a level of linearity is extremely unlikely to have arisen from standard sampling. We also found overly consistent results across independent replications in two of the papers. As a control group, we analyze the linearity of results in 10 papers by other authors in the same area. These papers differ strongly from those by Dr. Förster in terms of linearity of effects and the effect sizes. We also note that none of the 2284 participants showed any missing data, dropped out during data collection, or expressed awareness of the deceit used in the experiment, which is atypical for psychological experiments. Combined, these results cast serious doubt on the nature of the results reported by Dr. Förster and warrant an investigation of the source and nature of the data he presented in these and other papers.
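For the statistically minded, the kind of check the report describes is easy to sketch. Assuming – and this is my reconstruction, not the report's actual procedure – that the linearity test is an F test on the quadratic contrast of the three condition means (computable from the reported means, standard deviations and sample sizes alone), and that per-paper probabilities are combined with Fisher's method, a minimal Python sketch would be:

```python
import math

def nonlinearity_F(means, sds, ns):
    """F statistic for the quadratic (nonlinearity) contrast across three
    equally spaced conditions, computed from descriptive statistics only.
    Values near 0 mean the three means sit almost exactly on a line.
    The left-tailed p-value would then come from the F(1, df_err) CDF
    (e.g. scipy.stats.f.cdf); small p = "too linear for sampling noise"."""
    c = (1.0, -2.0, 1.0)  # quadratic contrast: exactly zero iff means are linear
    contrast = sum(ci * mi for ci, mi in zip(c, means))
    df_err = sum(n - 1 for n in ns)
    # pooled within-group variance (MSE)
    mse = sum((n - 1) * s * s for n, s in zip(ns, sds)) / df_err
    ss_contrast = contrast ** 2 / sum(ci * ci / n for ci, n in zip(c, ns))
    return ss_contrast / mse, df_err

def fisher_combine(ps):
    """Fisher's method for combining independent p-values.
    -2 * sum(ln p) is chi-squared with 2k degrees of freedom; for even
    df the survival function has the closed form used below."""
    x = -2.0 * sum(math.log(p) for p in ps)
    k = len(ps)  # df = 2k, so the series has k terms
    term, total = 1.0, 1.0
    for j in range(1, k):
        term *= (x / 2.0) / j
        total += term
    return math.exp(-x / 2.0) * total
```

A perfectly linear triple of means gives a contrast of exactly zero and hence F = 0, the "too good to be true" signature the report looks for; and feeding the three per-paper probabilities above into `fisher_combine` yields a combined p in the 10^-20 range. The report's exact formulas may well differ, so this sketch will not reproduce its numbers precisely.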

Förster’s primary thesis in the three papers under suspicion is that the global-versus-local models of perception and processing, which have been studied and applied for vision, are also valid for the other senses.

1. Förster, J. (2009). Relations Between Perceptual and Conceptual Scope: How Global Versus Local Processing Fits a Focus on Similarity Versus Dissimilarity. Journal of Experimental Psychology: General, 138, 88-111.

2. Förster, J. (2011). Local and Global Cross-Modal Influences Between Vision and Hearing, Tasting, Smelling, or Touching. Journal of Experimental Psychology: General, 140, 364-389.

The University of Amsterdam investigation has called for the third paper to be retracted:

3. Förster, J. & Denzler, M. (2012). Sense Creative! The Impact of Global and Local Vision, Hearing, Touching, Tasting and Smelling on Creative and Analytic Thought.  Social Psychological and Personality Science, 3, 108-117 (The full paper is here: Social Psychological and Personality Science-2012-Förster-108-17 )

Abstract: Holistic (global) versus elemental (local) perception reflects a prominent distinction in psychology; however, so far it has almost entirely been examined in the domain of vision. Current work suggests that global/local processing styles operate across sensory modalities. As for vision, it is assumed that global processing broadens mental categories in memory, enhancing creativity. Furthermore, local processing should support performance in analytic tasks. Throughout 12 separate studies, participants were asked to look at, listen to, touch, taste or smell details of objects, or to perceive them as wholes. Global processing increased category breadth and creative relative to analytic performance, whereas for local processing the opposite was true. Results suggest that the way we taste, smell, touch, listen to, or look at events affects complex cognition, reflecting procedural embodiment effects.

My assumption is that if the data have been manipulated, it is probably a case of “confirmation bias”. Global versus local perception is not that easy to define or study for senses other than vision – which is probably why it has not been studied there. The data may therefore have been “manufactured” to conform with the hypothesis that “the way we taste, smell, touch, listen to, or look at events does affect complex cognition, and global processing increases category breadth and creativity relative to analytic performance, whereas local processing decreases them”. The hypothesis becomes the result.

Distinctions between global and local perceptions of hearing are not improbable. But for taste? And for smell and touch? My perception of the field of social psychology (which is still a long way from being a science) is that far too often improbable hypotheses are dreamed up for the effect they have (not least in the media). Data – nearly always from sampling groups of individuals – are then found/manipulated/created to “prove” the hypotheses rather than to disprove them.

My perceptions are not altered when I see results from paper 3 like these:

Our findings may have implications for our daily behaviors. Some objects or people in the real world may unconsciously affect our cognition by triggering global or local processing styles; while some may naturally guide our attention to salient details (e.g., a spot on a jacket, a strong scent of coriander in a soup), others may motivate us to focus on the gestalt (e.g., because they are balanced and no special features stand out). It might be the case then that differences in the composition of dishes, aromas, and other mundane events influence our behavior. We might for example attend more to the local details of the answers by an interview candidate if he wears a bright pink tie, or we may start to become more creative upon tasting a balanced wine. This is because our attention to details versus gestalts triggers different systems that process information in different ways.

The descriptions of the methods used in the paper give me no sense of any scientific rigour – especially those regarding smell – and I find the entire “experimental method” quite unconvincing.

Participants were seated in individual booths and were instructed to recognize materials by smelling them. A pretest reported in Förster (2011) led to the choice (of) tangerines, fresh soil, and chocolate, which were rated as easily recognizable and neutral to positive in valence (both when given as a mixture but also when given alone). After each trial, participants were asked to wait 1 minute before smelling the next sample. In Study 10a, in the global condition, participants were presented with three small bowls containing a mixture of all three components; whereas in the local condition, the participants were presented with three small bowls, each containing one of the three different ingredients. In the control condition, they had to smell two bowls of mixes and two bowls with pure ingredients (tangerines and soil) in random order.

A science it is certainly not.

The “Backfire Effect” and why Global Warmists ignore facts which contradict their opinions

October 21, 2013

This is about a study of how facts – especially corrective facts – are ignored when some opinion or perception is deeply held. The study concerns political misperceptions, and it strikes me that it is very relevant to the IPCC and the alarmists, for whom the Global Warming hypothesis (that man-made carbon dioxide emissions are the primary cause of global warming) is a deeply held political belief.

Brendan Nyhan and Jason Reifler, When Corrections Fail: The persistence of political misperceptions

Abstract: We conducted four experiments in which subjects read mock news articles that included either a misleading claim from a politician, or a misleading claim and a correction. Results indicate that corrections frequently fail to reduce misperceptions among the targeted ideological group. We also document several instances of a “backfire effect” in which corrections actually increase misperceptions among the group in question.

The behaviour of the IPCC and the Global Warming coterie in ignoring or explaining away real observations in favour of their computer models has always smacked of religious fanaticism rather than scientific objectivity. They have shown a preference for coming up with ever more fanciful explanations about why their predictions are not panning out rather than accept that the basis of their predictions may be mistaken. The heat lurking in the deep oceans or Chinese pollution blocking out the sun or “old ice” declining invisibly while “new ice” increases have all been suggested as explanations for

  1. the recent lack of warming,
  2. the broken link between global temperature and carbon dioxide concentration, and
  3. increasing global ice extent.

It would seem that the global warming brigade are an “ideological sub-group” suffering from the “backfire effect”.

In this paper, we report the results of two rounds of experiments investigating the extent to which corrective information embedded in realistic news reports succeeds in reducing prominent misperceptions about contemporary politics. In each of the four experiments, which were conducted in fall 2005 and spring 2006, ideological subgroups failed to update their beliefs when presented with corrective information that runs counter to their predispositions. Indeed, in several cases, we find that corrections actually strengthened misperceptions among the most strongly committed subjects.

…. Political beliefs about controversial factual questions in politics are often closely linked with one’s ideological preferences or partisan beliefs. As such, we expect that the reactions we observe to corrective information will be influenced by those preferences. ……… Specifically, people tend to display bias in evaluating political arguments and evidence, favoring those that reinforce their existing views and disparaging those that contradict their views.

However, individuals who receive unwelcome information may not simply resist challenges to their views. Instead, they may come to support their original opinion even more strongly – what we call a “backfire effect.”

……

The backfire effects that we found seem to provide further support for the growing literature showing that citizens engage in “motivated reasoning.” While our experiments focused on assessing the effectiveness of corrections, the results show that direct factual contradictions can actually strengthen ideologically grounded factual beliefs – an empirical finding with important theoretical implications.

It is a little depressing that just using facts (science) may not be of much use in getting people to correct their misperceptions when those misperceptions take the form of religious belief.

Many citizens seem unwilling to revise their beliefs in the face of corrective information, and attempts to correct those mistaken beliefs may only make matters worse.

It is the sobering – and depressing – reality that facts (read science) are always subservient to even completely irrational religious beliefs.

Marc Hauser makes his comeback with “brain-training” for at-risk children

June 4, 2013

Marc Hauser, who was terminated from / resigned from Harvard over rather suspect data creation (the Hausergate affaire), is now making his comeback with a new enterprise called Risk-Eraser.

Risk-Eraser transforms the learning and decision-making of at-risk children by building more effective programs. Our goal is to erase the risk in the lives of at-risk populations.

His program is touted as evidence-based and involves critical thinking and “brain-training”, yielding a program which “helps students reach higher goals in both school and in their social lives, enables programs to run more efficiently, and empowers teachers to engage in the most exciting methods of pedagogy”.


There is some irony in his claim of being “evidence-based”, and the line between “brain-training” and brain-washing is rather thin. Brain-washing – even in a good cause, and with vulnerable children – would seem to raise a number of ethical issues.

Risk-Eraser, West Falmouth Hwy #376, W. Falmouth, MA, 02574

Looks nice there.

Currently he is the only member of the team. A Technical team and an Advisory team are said to be “coming soon”.

Marc Hauser, PhD

I am the founder of Risk-Eraser. The company grew out of two passionate interests: to understand human nature and to improve the lives of those less fortunate.  My PhD is in the mind and brain sciences.  I was a professor at Harvard for 19 years.  I have published over 200 papers and six books. I have won several awards for my teaching, and am the proud mentor of some of the best students in my academic areas of interest; these individuals now hold distinguished professorships at major universities all over the world.

His main transgression may initially have been due to confirmation bias, and this may have led to the data “manipulation”. I am quite sure that not everything Hauser did or does is tainted – but the real problem is that discerning what is or is not suspect is going to be difficult.

Implementing any confirmation bias with “at-risk children” could, I think, be very destructive. Applying “brain-washing” techniques to “at-risk” children seems itself not to be devoid of risk.

ORI finds misconduct by Marc Hauser in 4 NIH grants

September 6, 2012

Psychology is an academic discipline but it is not (yet) a science.

The Hausergate affaire followed by the Diederik Stapel affaire only confirmed my view that psychology as an academic discipline is permeated by confirmation bias (and sometimes just plain fraud). Now the Marc Hauser affaire reaches some kind of a conclusion (at least until he has served his “sentence” and is then “rehabilitated”) with the Office of Research Integrity’s report.

Retraction Watch comments on the ORI report:

Two years after questions surfaced about work by former Harvard psychology professor Marc Hauser, an official government report is finally out.

It’s not pretty.

The findings by the Office of Research Integrity were first reported by the Boston Globe, which was also first to report the issues in Hauser’s work. They’re extensive, covering misconduct in four different NIH grants ……..

As I had posted at the end of last year, psychology as an academic discipline needs to start introducing some intellectual rigour:

That psychology is a discipline and a field of study is indisputable. That the study of human (or animal) behaviour is a worthy field and that experimentation and research are well worth pursuing is also obvious. But I am of the view that it is far from being a science.  Psychology can be considered to be a pre-science similar to alchemy. And the practitioners of psychology are similar to priests and shamans and witch-doctors and other practitioners of magic. Inevitably the field contains many charlatans.  …… In the various fields of psychology, the null hypothesis is rarely if ever brought into play. …..

…. As Paul Lutus so well puts it

…. psychology can make virtually any claim and offer any kind of therapy, because there is no practical likelihood of refutation – no clear criteria to invalidate a claim. This, in turn, is because human psychology is not a science, it is very largely a belief system similar to religion.

AGW – “a monopoly that clings to one hypothesis”

August 6, 2012

Michael Crichton (2003): “There is no such thing as consensus science. If it’s consensus, it isn’t science. If it’s science, it isn’t consensus. Period.”

Matt Ridley’s 3rd article on confirmation bias in the Wall Street Journal:

I argued last week that the way to combat confirmation bias—the tendency to behave like a defense attorney rather than a judge when assessing a theory in science—is to avoid monopoly. So long as there are competing scientific centers, some will prick the bubbles of theory reinforcement in which other scientists live.

image: Wall Street Journal – John S. Dykes

Last month saw two media announcements of preliminary new papers on climate. One, by a team led by physicist Richard Muller of the University of California, Berkeley, concluded “the carbon dioxide curve gives a better match than anything else we’ve tried” for the (modest) 0.8 Celsius-degree rise in global average temperatures over land during the past half-century—less, if ocean is included. He may be right, but such curve-fitting reasoning is an example of confirmation bias. The other, by a team led by the meteorologist Anthony Watts, a skeptical gadfly, confirmed its view that the Muller team’s numbers are too high—because “reported 1979-2008 U.S. temperature trends are spuriously doubled” by bad thermometer siting and unjustified “adjustments.” …

…. The late novelist Michael Crichton, in his prescient 2003 lecture criticizing climate research, said: “To an outsider, the most significant innovation in the global-warming controversy is the overt reliance that is being placed on models…. No longer are models judged by how well they reproduce data from the real world—increasingly, models provide the data. As if they were themselves a reality.” ….

….. Bring on the gadflies.

The late Michael Crichton’s lecture in 2003 is well worth reading again.

Crichton’s lecture is here: Crichton 2003 Caltech Michelin Lecture

On “consensus science” he has this to say:

I want to pause here and talk about this notion of consensus, and the rise of what has been called consensus science. I regard consensus science as an extremely pernicious development that ought to be stopped cold in its tracks. Historically, the claim of consensus has been the first refuge of scoundrels; it is a way to avoid debate by claiming that the matter is already settled. Whenever you hear the consensus of scientists agrees on something or other, reach for your wallet, because you’re being had.

Let’s be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. Science, on the contrary, requires only one investigator who happens to be right, which means that he or she has results that are verifiable by reference to the real world. In science consensus is irrelevant. What is relevant is reproducible results. The greatest scientists in history are great precisely because they broke with the consensus.

There is no such thing as consensus science. If it’s consensus, it isn’t science. If it’s science, it isn’t consensus. Period.

In addition, let me remind you that the track record of the consensus is nothing to be proud of. Let’s review a few cases. ……

How to beat data into a hockey-stick…

June 11, 2012

When science leads to activism great things can be accomplished but when activism leads to “biased science” to justify the activism, we plumb the depths.

The Gergis affaire has some way to run as her activism-led science is revealed. ACM has preserved some of her activist writings from her now-disappeared blog:

(more…)

Another warming hockey stick is withdrawn/”put-on-hold” for bad data

June 9, 2012

One would think that after Climategate, climate scientists would be a little more careful with their “trickery”.

When a supposedly peer-reviewed paper in the American Meteorological Society Journal is withdrawn / “put on hold” after publication, because the on-line community (Jean S / Steve McIntyre) found that the authors had cherry-picked and improperly “massaged” their data, it says two things:

  1. that the peer review process at the AMS is either incompetent or corrupt (in that it is especially friendly to papers propounding the global warming orthodoxy), and
  2. that the “tricks” revealed by Climategate are still being actively used by so-called climate scientists to support their beliefs.

That one of the authors – probably the one responsible for this cock-up – Joelle Gergis from the University of Melbourne, is more an “activist” than a “scientist” does not help matters. Going through the abstracts of her list of publications suggests that she often decides on her conclusions first and then selects data and writes her papers to fit those conclusions. Cherry-picking data is bad enough, but when it is done because of confirmation bias it is perhaps the most insidious form of scientific misconduct there is.

Interestingly

joellegergis.wordpress.com is no longer available.

The authors have deleted this blog.

The AMS Journal “peers” who reviewed this paper don’t come out of this very well either. But of course they will receive no strictures for a job done badly.

Sources:

Gergis et al “Put on Hold”

American Meteorological Society withdraws Gergis et al paper on proxy temperature reconstruction after post-publication peer review finds fatal flaws

Gergis paper disappears

Another Hockey Stick broken

Another perversion of science: Confirmation bias in the name of global warming dogma is also scientific misconduct

January 25, 2011

A new paper has been published in Ecology Letters

Ran Nathan, Nir Horvitz, Yanping He, Anna Kuparinen, Frank M. Schurr, Gabriel G. Katul. Spread of North American wind-dispersed trees in future environments. Ecology Letters, 2011; DOI: 10.1111/j.1461-0248.2010.01573.

In this paper the authors have assumed that climate change will alter CO2 concentration and wind speed. They have also assumed that increased CO2 will “increase fecundity and advance maturation”. They have then modelled the spread of 12 species as a function of wind speed.

So far so good – they have actually modelled only the effect of wind speed, which they assume will decline due to climate change.

Their results basically showed no effect of wind speed:

“Future spread is predicted to be faster if atmospheric CO2 enrichment would increase fecundity and advance maturation, irrespective of the projected changes in mean surface windspeed”.

And now comes the perversion!

From their fundamental finding that wind speed has no effect – and that therefore any CO2 increase resulting from climate change will enhance the spread of the trees – they then invoke “expected” effects to deny what they have just shown:

“Yet, for only a few species, predicted wind-driven spread will match future climate changes, conditioned on seed abscission occurring only in strong winds and environmental conditions favouring high survival of the farthest-dispersed seeds. Because such conditions are unlikely, North American wind-dispersed trees are expected to lag behind the projected climate range shift.”

This final conclusion is based on absolutely nothing – their modelling showed nothing of the kind – and yet this paper was accepted for publication. I have no problem with a result showing “no effect of wind speed” being published, but I suspect that it needed the speculative, nonsense conclusion to comply with current dogma.

Science Daily then produces the headline: Climate Change Threatens Many Tree Species

when the reality is

This study Shows No Effect of Wind Speed But Yet We Believe that Climate Change Threatens Many Tree Species

“Our research indicates that the natural wind-driven spread of many species of trees will increase, but will occur at a significantly lower pace than that which will be required to cope with the changes in surface temperature,” said Prof. Nathan. “This will raise extinction risk of many tree populations because they will not be able to track the shift in their natural habitats which currently supply them with favorable conditions for establishment and reproduction. As a result, the composition of different tree species in future forests is expected to change and their areas might be reduced, the goods and services that these forests provide for man might be harmed, and wide-ranging steps will have to be taken to ensure seed dispersal in a controlled, directed manner.”

Whether the perversion is by the authors themselves, anticipating what is needed to get a paper published, or whether it is due to pressure from the journal Ecology Letters or from its referees, is unclear.

Abstract:

Despite ample research, understanding plant spread and predicting their ability to track projected climate changes remain a formidable challenge to be confronted. We modelled the spread of North American wind-dispersed trees in current and future (c. 2060) conditions, accounting for variation in 10 key dispersal, demographic and environmental factors affecting population spread. Predicted spread rates vary substantially among 12 study species, primarily due to inter-specific variation in maturation age, fecundity and seed terminal velocity. Future spread is predicted to be faster if atmospheric CO2 enrichment would increase fecundity and advance maturation, irrespective of the projected changes in mean surface windspeed. Yet, for only a few species, predicted wind-driven spread will match future climate changes, conditioned on seed abscission occurring only in strong winds and environmental conditions favouring high survival of the farthest-dispersed seeds. Because such conditions are unlikely, North American wind-dispersed trees are expected to lag behind the projected climate range shift.

In essence this paper is based only on belief, and the results actually obtained are denied. It seems to me that denying or twisting or “moulding” results actually obtained to fit pre-conceived notions is not just a case of confirmation bias but comes very close to scientific misconduct.

Hausergate: In scientific misconduct “confirmation bias” or “fudging data” are equally corrupt

January 2, 2011

Scientific American carries an article about the Marc Hauser case at Harvard. (Marc Hauser was found to have committed 8 cases of scientific misconduct.)

Scientific American

Scott O. Lilienfeld argues that Hauser may only be guilty of “confirmation bias” and that it is premature to ascribe deliberate wrongdoing to him:

Hauser has admitted to committing “significant mistakes.” In observing the reactions of my colleagues to Hauser’s shocking comeuppance, I have been surprised at how many assume reflexively that his misbehavior must have been deliberate. For example, University of Maryland physicist Robert L. Park wrote in a Web column that Hauser “fudged his experiments.” I don’t think we can be so sure. It’s entirely possible that Hauser was swayed by “confirmation bias”—the tendency to look for and perceive evidence consistent with our hypotheses and to deny, dismiss or distort evidence that is not.

The past few decades of research in cognitive, social and clinical psychology suggest that confirmation bias may be far more common than most of us realize. Even the best and the brightest scientists can be swayed by it, especially when they are deeply invested in their own hypotheses and the data are ambiguous. A baseball manager doesn’t argue with the umpire when the call is clear-cut—only when it is close.

Scholars in the behavioral sciences, including psychology and animal behavior, may be especially prone to bias. They often make close calls about data that are open to many interpretations…….

………. Two factors make combating confirmation bias an uphill battle. For one, data show that eminent scientists tend to be more arrogant and confident than other scientists. As a consequence, they may be especially vulnerable to confirmation bias and to wrong-headed conclusions, unless they are perpetually vigilant. Second, the mounting pressure on scholars to conduct single-hypothesis-driven research programs supported by huge federal grants is a recipe for trouble. Many scientists are highly motivated to disregard or selectively reinterpret negative results that could doom their careers.

But I am not persuaded. When “eminent” scientists use their position and power to indulge in “confirmation bias”, that is merely a euphemism for what is still cheating by taking undue advantage of their position. It is “corruption” in its most basic form. I reject the notion that such “confirmation bias” is a form of “unwitting behaviour”. It may well be behaviour which resides in the sub-conscious, but that is not “unwitting” behaviour. Neither is it excusable just because it may be in the sub-conscious. It gets into the sub-conscious only because the conscious allows it to do so. When any behaviour residing in the sub-conscious conflicts with the values and morality of an individual, it is inevitably ejected into the conscious. Being sub-consciously immoral but consciously moral is not feasible.

In the case of Marc Hauser, even assuming that his faults were due to “confirmation bias” then either it was behaviour which remained entirely in the sub-conscious in which case his values and morality are suspect, or it was triggered into the conscious and he continued anyway in which case it was simple cheating.

