Posts Tagged ‘Science’

Without first having religions, atheism and agnosticism cannot exist

June 27, 2017

I take science to be the process by which areas of ignorance are explored, illuminated and then shifted into our space of knowledge. One can believe that the scientific method is powerful enough to answer all questions – eventually – by the use of our cognitive abilities. But it is nonsense to believe that science is, in itself, the answer to all questions. There is what we know; at the perimeter of what we know lies what we know that we don’t know; and beyond that lies the boundless space of ignorance, where we don’t know what we don’t know. As the perimeter surrounding human knowledge increases, what we know that we don’t know also increases.

Religions generally use a belief in the concept of a god (or gods) as their central tenet. By definition this lies within the space of ignorance (which is where all belief lives). For some individuals the belief may be so strong that they claim it to be “personal knowledge” rather than a belief. It remains a belief though, since it cannot be proven. Buddhism takes a belief in gods to be unnecessary but – also within the space of ignorance – believes in rebirth (not reincarnation) and the “infinite” (nirvana). Atheism is just as much in the space of ignorance, since it is based on the belief that no gods, deities or supernatural entities exist. Such a belief can only come into being as a reaction to others having a belief in gods or deities or the supernatural. A denial cannot rationally be meaningful without something to deny. If religions and their belief in gods or the supernatural did not first exist, atheism would be meaningless. Atheism merely replaces a belief in a God with a belief in a Not-God.

I take the blind worship of “science” also to be a religion in the space of ignorance. All physicists and cosmologists who believe in the Big Bang singularity effectively believe in an incomprehensible and unexplainable Creation Event. Physicists who believe in dark matter or dark energy, as mysterious things vested with just the right properties to bring their theories into compliance with observations of an apparently expanding universe, are effectively invoking magic. When modern physics claims that there are 57 fundamental particles but has no explanation as to why there should be just 57 (for now), and not 59 or 107, it takes recourse to magical events at the beginning of time. Why there should be four fundamental forces in our universe (electromagnetism, gravitation, the strong force and the weak force), and not two or three or seven, is also unknown and magical.

Agnosticism is just a reaction to the belief in gods. Whereas atheists deny the belief, agnostics merely state that such beliefs can neither be proved nor disproved; that the existence of gods or the supernatural is unknowable. But by recognising limits to what humans can know, agnosticism inherently accepts that understanding the universe lies on a “higher” dimension than what human intelligence and cognitive abilities can cope with. That is tantamount to a belief in “magic”, where “magic” covers all things that happen or exist but which we cannot explain. Where atheism denies the answers of others, agnosticism declines to address the questions.

The Big Bang singularity, God(s), Nirvana and the names of all the various deities are all merely labels for things we don’t know in the space of what we don’t know, that we don’t know. They are all labels for different kinds of magic.

I am not sure where that leaves me. I follow no religion. I believe in the scientific method as a process but find the “religion of science” too self-righteous and too glib about its own beliefs in the space of ignorance. I find atheism is mentally lazy and too negative. It is just a denial of the beliefs of others. It does not itself address the unanswerable questions. It merely tears down the unsatisfactory answers of others. Agnosticism is a cop-out. It satisfies itself by saying the questions are too hard for us to ever answer and it is not worthwhile to try.

I suppose I just believe in Magic – but that too is just a label in the space of ignorance.


Science needs some scienticians

June 18, 2014

Physic gave rise to physicians long before physics was practiced by a physicist,

Mathematics gives mathematicians, but who would trust a mathematist. 

A practitioner of an “ology” has an honourable profession,

So biologists, oncologists, archaeologists and geologists can be numbered by the million. 

Without the richness of an “ist” modern politics would be barren,

A politicist has a murky trade but he is not a politician,

We have leftists and rightists and socialists and you can even find some libertarians,

But for all the mayhem in the world, you will not find any extremians.

Environmentalists and conservationists are politically very fashionable,

But their devious methods have now become – rather questionable. 

Philosophy was where it started but we rarely refer to philosophists,

And many of the scientists of today are little more than sophists. 

It was only in 1840 that scientists were one of Whewell’s inventions,

But they are now two-a-penny, and we could do with a few scienticians.

It should be quite clear that I think that there are far too many who claim to be scientists though they do no science. It then becomes useful to distinguish the real scienticians from the rabble. And perhaps the same could apply to the real economians among the multitude of clerks who call themselves economists.

Number of citations and excellence in science

February 10, 2014

Scientific excellence can only truly be judged by history. But history has eyes only for impact, and if excellent science causes no great change to science orthodoxy, it is soon forgotten. For a scientist the judgements of history long after he performs his science are of no real significance. Even where academic freedom is the main motivator for the scientist, the degrees of freedom available are related to academic success. An academic or scientific career depends increasingly on contemporaneous judgements – and here social networking, peer review and bibliometric factors are decisive. There may well be some correlation between academic success and the “goodness” of the scientist, but it is not the success or the bibliometrics which are causative.

As Lars Walloe puts it: Walloe-on-Exellence

In the evaluation process many scientists and nearly all university and research council administrators love all kind of bibliometric tools. This has of course a simple explanation. The “bureaucracy” likes to have a simple quantitative tool, which can be used with the aid of a computer and the internet to give an “objective” measure of excellence. However, excellence is not directly related either to the impact factor of the journal in which the work is published, or to the number of citations, or to the number of papers published, or even to some other more sophisticated bibliometric indices. Of course there is some correlation, but it is in my judgement weaker than what many would like to believe, and uncritical use of these tools easily leads to wrong conclusions. For instance the impact factor of a journal is mainly determined by the very best papers published in it and not so much by the many ordinary papers published. We know well that even in high impact factor journals like Science and Nature or high impact journals in more specialized fields, from time to time not so excellent papers are being published. 

…..  I often meet scientists for whom to obtain high bibliometric factors serve as a prime guidance in their work. Too many of them are really not that good, but were just lucky or work in a field where it was easier to get many citations. …..If you are working with established methods in a popular field you can be fairly sure to get your papers published. I can mention in detail some medical fields where I know that this has happened or is happening today. The scientists in such fields get a high number of publications and citations, but the research is not necessarily excellent. 

And getting your paper published has now become so important in the advancement of an academic career that journals are proliferating. Many of the new journals have now shifted their business models to be based on author’s fees and not on volume of readership. This is a very “safe” business model since profits are ensured before the journal has even been published and if the journal is an on-line journal then costs are minimal. It is virtually the “self-publishing” of papers. You pay your money and get your paper published.

The reality today is that more papers are being published by more authors in more journals than ever before. But fewer are being actually read. Papers are cited without having been read – let alone understood.

Skeptical Scalpel:

Another reason could be that publishers, particularly those who charge authors fees for publishing, are in the business of making money.

Authoring journal articles is not only enhancing to one’s CV (the old “publish or perish” cliché), it is required by Residency Review Committees as evidence of “scholarly activity” in training programs. Maybe it’s good for attracting referrals too.

The publish or perish ethos has led to a proliferation of the number of authors per paper!

First noted in 1993 by a paper in Acta Radiologica and a letter in the BMJ, the number of authors per paper has risen dramatically over the years. A study of 12 radiology journals found the number of authors per paper doubled from 2.2 in 1966 to 4.4 in 1991. A review of Neurosurgery and the Journal of Neurosurgery spanned 50 years: the average went from 1.8 authors per article in 1945 to 4.6 authors in 1995. Of note, the above two articles were each written by a single author.

Three psychiatrists from Dartmouth analyzed original scientific articles in four of the most prestigious journals in the United States – Archives of Internal Medicine, Annals of Internal Medicine, Journal of the American Medical Association, and the New England Journal of Medicine – from 1980 to 2000. They found that the mean number of authors per paper increased from 4.5 to 6.9. The same is true for two plastic surgery journals, which saw the average number of authors go from 1.4 to 4.0 and from 1.7 to 4.2 in the 50 years from 1955 to 2005. The number of single-author papers went from 78% to 3% in one journal and from 51% to 8% in the other.

In orthopedics, a review of the American and British versions of the Journal of Bone and Joint Surgery for 60 years from 1949 to 2009 showed an increase of authors per paper from 1.6 to 5.1.

An impressive rise in the number of authors took place in two leading thoracic surgery journals. For the Journal of Thoracic and Cardiovascular Surgery the increase was from 1.4 in 1936 to 7.5 in 2006, and for Annals of Thoracic Surgery it was from 3.1 in 1966 to 6.8 in 2006.

And the winner is a paper with 3171 authors! Needless to say, it comes from Big Science and the Large Hadron Collider:

the paper with the most authors is “Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC” in a journal called “Physics Letters B” with 3171. The list of authors takes up 9 full pages.

Too many journals, too many papers, too many authors and too many citations. But that does not mean there is more excellence in science.

Numeracy and language

December 2, 2013

I tend towards considering mathematics a language rather than a science. In fact mathematics is more like a family of languages each with a rigorous grammar. I like this quote:

R. L. E. Schwarzenberger, The Language of Geometry, in A Mathematical Spectrum Miscellany, Applied Probability Trust, 2000, p. 112:

My own attitude, which I share with many of my colleagues, is simply that mathematics is a language. Like English, or Latin, or Chinese, there are certain concepts for which mathematics is particularly well suited: it would be as foolish to attempt to write a love poem in the language of mathematics as to prove the Fundamental Theorem of Algebra using the English language.

Just as conventional languages enable culture and provide a tool for social communication, the various languages of mathematics, I think, enable science and provide a tool for scientific discourse. I take “science” here to be analogous to a “culture”. To follow that thought then, just as science is embedded within a “larger” culture, so is mathematics embedded within conventional languages. This embedding shows up as the ability of a language to deal with numeracy and numerical concepts.

And that means that the value judgement of what is “primitive”, when applied to a language, can depend upon the extent to which mathematics, and therefore numeracy, is embedded within that language.

GeoCurrents examines numeracy embedded within languages:

According to a recent article by Mike Vuolo, Pirahã is among “only a few documented cases” of languages that almost completely lack numbers. Dan Everett, a renowned expert in the Pirahã language, further claims that the lack of numeracy is just one of many linguistic deficiencies of this language, which he relates to gaps in the Pirahã culture. ….. 

The various types of number systems are considered in the article on Numeral Bases, written by Bernard Comrie. Of the 196 languages in the sample, 88% can handle an infinite set of numerals. To do so, languages use some arithmetic base to construct numeral expressions. According to Comrie, “we live in a decimal world”: two thirds of the world’s languages use base 10 and such languages are spoken “in nearly every part of the world”. English, Russian, and Mandarin are three examples of such languages. ….. 

Around 20% of the world’s languages use either a purely vigesimal (or base 20) or a hybrid vigesimal-decimal system. In a purely vigesimal system, the base is consistently 20, yielding the general formula for constructing numerals as x·20 + y. For example, in Diola-Fogny, a Niger-Congo language spoken in Senegal, 51 is expressed as bukan ku-gaba di uɲɛn di b-əkɔn ‘two twenties and eleven’. Other languages with a purely vigesimal system include Arawak spoken in Suriname, Chukchi spoken in the Russian Far East, Yimas in Papua New Guinea, and Tamang in Nepal. In a hybrid vigesimal-decimal system, numbers up to 99 use base 20, but the system then shifts to being decimal for the expression of the hundreds, so that one ends up with expressions of the type x·100 + y·20 + z. A good example of such a system is Basque, where 256 is expressed as berr-eun eta berr-ogei-ta-hama-sei ‘two hundred and two-twenty-and-ten-six’. Other hybrid vigesimal-decimal systems are found in Abkhaz in the Caucasus, Burushaski in northern Pakistan, Fulfulde in West Africa, Jakaltek in Guatemala, and Greenlandic. In a few mostly decimal languages, moreover, a small proportion of the overall numerical system is vigesimal. In French, for example, numerals in the range 80-99 have a vigesimal structure: 97 is thus expressed as quatre-vingt-dix-sept ‘four-twenty-ten-seven’. Only five languages in the WALS sample use a base that is neither 10 nor 20. For instance, Ekari, a Trans-New Guinean language spoken in Indonesian Papua, uses a base of 60, as did the ancient Near Eastern language Sumerian, which has bequeathed to us our system of counting seconds and minutes. Besides Ekari, non-10-non-20-base languages include Embera Chami in Colombia, Ngiti in Democratic Republic of Congo, Supyire in Mali, and Tommo So in Mali. …… 

Going back to the various types of counting, some languages use a restricted system that does not effectively go above around 20, and some languages are even more limited, as is the case in Pirahã. The WALS sample contains 20 such languages, all but one of which are spoken in either Australia, highland New Guinea, or Amazonia. The one such language found outside these areas is !Xóõ, a Khoisan language spoken in Botswana. ……. 

Read the whole article. 
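The numeral constructions described in the quoted passage reduce to simple arithmetic. A minimal sketch of the pure-vigesimal pattern x·20 + y and the Basque-style hybrid pattern x·100 + y·20 + z (the function names are mine, not from the article):

```python
# Sketch of the numeral patterns quoted above. Function names are mine.

def pure_vigesimal(n):
    """Decompose n as x*20 + y (pure base-20, e.g. Diola-Fogny)."""
    return divmod(n, 20)                 # (twenties, remainder)

def hybrid_vigesimal(n):
    """Decompose n as x*100 + y*20 + z (Basque-style hybrid system)."""
    hundreds, rest = divmod(n, 100)
    twenties, units = divmod(rest, 20)
    return hundreds, twenties, units

# Diola-Fogny 51: 'two twenties and eleven'
print(pure_vigesimal(51))        # (2, 11)
# Basque 256: 'two hundred and two-twenty-and-ten-six' (16 = ten-six)
print(hybrid_vigesimal(256))     # (2, 2, 16)
# French 97: quatre-vingt-dix-sept, 'four-twenty-ten-seven' (17 = ten-seven)
print(hybrid_vigesimal(97))      # (0, 4, 17)
```

The French case shows why it is only a partial vigesimal system: the decomposition works for 80-99, but below 80 French reverts to decimal construction.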

Counting monkey?

In some societies in the ancient past, numeracy did not contribute significantly to survival, as was probably the case with isolated tribes like the Pirahã. But in most human societies, numeracy was of significant benefit, especially for cooperation between different bands of humans. I suspect that it was the need for social cooperation which fed the need for communication within a tribe and among tribes, which in turn was the spur to the development of language, perhaps over 100,000 years ago. What instigated the need to count is in the realm of speculation. The need for a calendar would only have developed with the development of agriculture. But the need for counting herds probably came earlier, in a semi-nomadic phase. Even earlier than that would have come the need to trade with other hunter-gatherer groups, and that probably gave rise to counting 50,000 years ago or even earlier. The tribes who learned to trade, and developed the ability and concepts of trading, were probably the tribes that had the best prospects of surviving while moving from one territory to another. It could be that the ability to trade was an indicator of how far a group could move.

And so I am inclined to think that numeracy in language became a critical factor which, 30,000 to 50,000 years ago, determined the groups which survived and prospered. It may well be that it is these tribes which developed numbers, and learned to count, and learned to trade, that eventually populated most of the globe. It may be a little far-fetched, but not impossible, that numeracy in language was one of the features distinguishing Anatomically Modern Humans from Neanderthals – even though the Neanderthals had larger brains, and we are all Neanderthals to some extent!

Science is losing its ability to self-correct

October 20, 2013

With the explosion in the number of researchers, the increasing rush to publication and the corresponding explosion in traditional and on-line journals as avenues of publication, The Economist carries an interesting article making the point that the assumption that science is self-correcting is under extreme pressure. “There is no cost to getting things wrong,” says Brian Nosek, a psychologist at the University of Virginia who has taken an interest in his discipline’s persistent errors. “The cost is not getting them published.”

The field of psychology and especially social psychology has been much in the news with the dangers of “priming”.

“I SEE a train wreck looming,” warned Daniel Kahneman, an eminent psychologist, in an open letter last year. The premonition concerned research on a phenomenon known as “priming”. Priming studies suggest that decisions can be influenced by apparently irrelevant actions or events that took place just before the cusp of choice. They have been a boom area in psychology over the past decade, and some of their insights have already made it out of the lab and into the toolkits of policy wonks keen on “nudging” the populace.

Dr Kahneman and a growing number of his colleagues fear that a lot of this priming research is poorly founded. Over the past few years various researchers have made systematic attempts to replicate some of the more widely cited priming experiments. Many of these replications have failed. In April, for instance, a paper in PLoS ONE, a journal, reported that nine separate experiments had not managed to reproduce the results of a famous study from 1998 purporting to show that thinking about a professor before taking an intelligence test leads to a higher score than imagining a football hooligan.

It is not just “soft” fields which have problems. It is apparent that in medicine a large number of published results cannot be replicated:

… irreproducibility is much more widespread. A few years ago scientists at Amgen, an American drug company, tried to replicate 53 studies that they considered landmarks in the basic science of cancer, often co-operating closely with the original researchers to ensure that their experimental technique matched the one used first time round. According to a piece they wrote last year in Nature, a leading scientific journal, they were able to reproduce the original results in just six. Months earlier Florian Prinz and his colleagues at Bayer HealthCare, a German pharmaceutical giant, reported in Nature Reviews Drug Discovery, a sister journal, that they had successfully reproduced the published results in just a quarter of 67 seminal studies.

The governments of the OECD, a club of mostly rich countries, spent $59 billion on biomedical research in 2012, nearly double the figure in 2000. One of the justifications for this is that basic-science results provided by governments form the basis for private drug-development work. If companies cannot rely on academic research, that reasoning breaks down. When an official at America’s National Institutes of Health (NIH) reckons, despairingly, that researchers would find it hard to reproduce at least three-quarters of all published biomedical findings, the public part of the process seems to have failed.

It is not just that research results cannot be replicated. So much of what is published is just plain wrong, and the belief that science is self-correcting is itself under pressure:

Academic scientists readily acknowledge that they often get things wrong. But they also hold fast to the idea that these errors get corrected over time as other scientists try to take the work further. Evidence that many more dodgy results are published than are subsequently corrected or withdrawn calls that much-vaunted capacity for self-correction into question. There are errors in a lot more of the scientific papers being published, written about and acted on than anyone would normally suppose, or like to think. …… Statistical mistakes are widespread. The peer reviewers who evaluate papers before journals commit to publishing them are much worse at spotting mistakes than they or others appreciate. Professional pressure, competition and ambition push scientists to publish more quickly than would be wise. A career structure which lays great stress on publishing copious papers exacerbates all these problems. “There is no cost to getting things wrong,” says Brian Nosek, a psychologist at the University of Virginia who has taken an interest in his discipline’s persistent errors. “The cost is not getting them published.” 

…… In 2005 John Ioannidis, an epidemiologist from Stanford University, caused a stir with a paper showing why, as a matter of statistical logic, the idea that only one such paper in 20 gives a false-positive result was hugely optimistic. Instead, he argued, “most published research findings are probably false.” 
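The statistical logic Ioannidis points to is straightforward Bayes’ rule arithmetic. A rough sketch (the numbers below are illustrative assumptions, not taken from his paper):

```python
# Positive predictive value of a "positive" published finding, via Bayes' rule.
# prior: fraction of tested hypotheses that are actually true (an assumption),
# power: probability a true effect is detected, alpha: false-positive rate.

def prob_finding_true(prior, power=0.8, alpha=0.05):
    """P(hypothesis true | test came out positive)."""
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# If 1 in 10 hypotheses tested is actually true, a positive result is
# right only about two-thirds of the time:
print(round(prob_finding_true(0.10), 2))   # 0.64
# If only 1 in 100 is true, most published positives are false:
print(round(prob_finding_true(0.01), 2))   # 0.14
```

The more long-shot hypotheses a field tests, the smaller the prior, and the more the “1 in 20 is a false positive” intuition breaks down.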

The tendency to publish only positive results also leads to statistics being skewed to allow results to be shown as positive:

The negative results are much more trustworthy; …….. But researchers and the journals in which they publish are not very interested in negative results. They prefer to accentuate the positive, and thus the error-prone. Negative results account for just 10-30% of published scientific literature, depending on the discipline. This bias may be growing. A study of 4,600 papers from across the sciences conducted by Daniele Fanelli of the University of Edinburgh found that the proportion of negative results dropped from 30% to 14% between 1990 and 2007. Lesley Yellowlees, president of Britain’s Royal Society of Chemistry, has published more than 100 papers. She remembers only one that reported a negative result.

…. Other data-heavy disciplines face similar challenges. Models which can be “tuned” in many different ways give researchers more scope to perceive a pattern where none exists. According to some estimates, three-quarters of published scientific papers in the field of machine learning are bunk because of this “overfitting”. 
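The “overfitting” described in the quote is easy to demonstrate: give a model enough tunable parameters and it will fit the noise in its training data while doing no better on fresh data. A toy sketch (my own construction, not from the article), fitting polynomials of increasing degree to ten noisy samples of a sine curve:

```python
# Toy demonstration of overfitting: polynomials of increasing degree are
# fitted to ten noisy samples of a sine curve. Training error always falls
# as parameters are added; error on fresh data need not.
import numpy as np

rng = np.random.default_rng(0)

def truth(x):
    return np.sin(2 * np.pi * x)

x_train = np.linspace(0, 1, 10)
x_test = np.linspace(0, 1, 100)
y_train = truth(x_train) + rng.normal(0, 0.3, x_train.size)

errors = {}
for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)   # least-squares fit
    train_err = float(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
    test_err = float(np.mean((np.polyval(coeffs, x_test) - truth(x_test)) ** 2))
    errors[degree] = (train_err, test_err)
    print(f"degree {degree}: train error {train_err:.4f}, test error {test_err:.4f}")

# The degree-9 polynomial threads every noisy point (near-zero training
# error) - exactly "perceiving a pattern where none exists".
```

With ten data points, a degree-9 polynomial interpolates the noise exactly; judged on training error alone it looks like the best model of the three.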

The idea of peer-review being some kind of a quality check of the results being published is grossly optimistic:

The idea that there are a lot of uncorrected flaws in published studies may seem hard to square with the fact that almost all of them will have been through peer-review. This sort of scrutiny by disinterested experts—acting out of a sense of professional obligation, rather than for pay—is often said to make the scientific literature particularly reliable. In practice it is poor at detecting many types of error.

John Bohannon, a biologist at Harvard, recently submitted a pseudonymous paper on the effects of a chemical derived from lichen on cancer cells to 304 journals describing themselves as using peer review. An unusual move; but it was an unusual paper, concocted wholesale and stuffed with clangers in study design, analysis and interpretation of results. Receiving this dog’s dinner from a fictitious researcher at a made up university, 157 of the journals accepted it for publication. ….

……. As well as not spotting things they ought to spot, there is a lot that peer reviewers do not even try to check. They do not typically re-analyse the data presented from scratch, contenting themselves with a sense that the authors’ analysis is properly conceived. And they cannot be expected to spot deliberate falsifications if they are carried out with a modicum of subtlety.

Fraud is very likely second to incompetence in generating erroneous results, though it is hard to tell for certain. 

And then there is the issue that results from Big Science can never be replicated, because the cost of the initial work is so high. Medical research and clinical trials are also extremely expensive. Journals have no great interest in publishing replications (even when they are negative). And, to compound the issue, those who provide funding are less likely to extend funding merely for replication or for negative results.

People who pay for science, though, do not seem seized by a desire for improvement in this area. Helga Nowotny, president of the European Research Council, says proposals for replication studies “in all likelihood would be turned down” because of the agency’s focus on pioneering work. James Ulvestad, who heads the division of astronomical sciences at America’s National Science Foundation, says the independent “merit panels” that make grant decisions “tend not to put research that seeks to reproduce previous results at or near the top of their priority lists”. Douglas Kell of Research Councils UK, which oversees Britain’s publicly funded research argues that current procedures do at least tackle the problem of bias towards positive results: “If you do the experiment and find nothing, the grant will nonetheless be judged more highly if you publish.” 

Trouble at the lab 

The rubbish will only decline when there is a cost to publishing shoddy work which outweighs the gains of adding to a researcher’s list of publications. At some point researchers will need to be held liable and accountable for their products (their publications) – not just for fraud or misconduct but even for negligence or gross negligence when they do not carry out their work using the best available practices of the field. These are standards that some (but not all) professionals are held to, and there should be no academic researcher who is not also subject to such a standard. If peer-review is to recover some of its lost credibility then anonymous reviews must disappear and reviewers must be much more explicit about what they have checked and what they have not.

Fluid jets and fishbones

August 9, 2013

Just a few examples from a striking gallery of pictures by John W. M. Bush (MIT Mathematics).

Colliding jets and the patterns that ensue.

Fishbone john bush


fluids john bush

colliding jets john bush

We examine the form of the free surface flows resulting from the collision of equal jets at an oblique angle. Glycerol-water solutions with viscosities of 15-50 cS were pumped at flow rates of 10-40 cc/s through circular outlets with diameter 2 mm. … At low flow rates, the resulting stream takes the form of a steady fluid chain, a succession of mutually orthogonal fluid links, each comprised of a thin oval sheet bound by relatively thick fluid rims. The influence of viscosity serves to decrease the size of successive links, and the chain ultimately coalesces into a cylindrical stream. As the flow rate is increased, waves are excited on the sheet, and the fluid rims become unstable.  The rim appears blurred to the naked eye; however, strobe illumination reveals a remarkably regular and striking flow instability. Droplets form from the sheet rims but remain attached to the fluid sheet by tendrils of fluid that thin and eventually break. The resulting flow takes the form of fluid fishbones, with the fluid sheet being the fish head and the tendrils its bones. Increasing the flow rate serves to broaden the fishbones.  In the wake of the fluid fish, a regular array of drops obtains, the number and spacing of which is determined by the pinch-off of the fishbones. 

h/t Science is Beauty


Noted in passing 21st July 2013

July 21, 2013
map projections galore

Map Projections Galore

More on cartography and map projections.

The linguistic forensics which unmasked JK Rowling as the mystery author Robert Galbraith.

The drop of tar pitch finally fell after 69 years.

Singing in unison in a choir leads to heart beats being synchronised.

The Indian monsoon is almost half-over and rainfall is running 16% above the long term average. In spite of the floods in Uttarakhand this monsoon will probably be classified as a “good” monsoon.

A Viking trading post, Steinkjer, mentioned in the Norse sagas and dating from 1000 years ago, has probably been identified.

The evidence is mounting that there was a pre-Toba expansion Out of Africa and into Asia around 90-100,000 years ago followed by another post-Toba expansion which then went all the way to Australia. The second wave would have mixed with the first wave survivors of the Toba eruption who were probably the first AMH to intermingle with the Denisovans.

The shale gas bonanza continues in the UK and the advantages are being pushed hard even by Bjorn Lomborg.

Climate Models and Pepsodent

June 10, 2013

You’ll wonder where the warming went

when you brush your models with excrement

– with apologies to Pepsodent

Climate models just aren’t good enough – yet.

As real observations increasingly diverge from model results, the global warming establishment is reacting in 2 ways:

  1. Denial by the Warmist orthodoxy, who prefer model results to real data, and
  2. Real scientists who have begun to question the assumptions on which these models are based.

Two articles have recently been published in the mainstream scientific literature which question climate models.

1. What Are Climate Models Missing?, Bjorn Stevens and Sandrine Bony, Science, 31 May 2013, Vol. 340 no. 6136, pp. 1053-1054, DOI: 10.1126/science.1237554

Abstract: Fifty years ago, Joseph Smagorinsky published a landmark paper (1) describing numerical experiments using the primitive equations (a set of fluid equations that describe global atmospheric flows). In so doing, he introduced what later became known as a General Circulation Model (GCM). GCMs have come to provide a compelling framework for coupling the atmospheric circulation to a great variety of processes. Although early GCMs could only consider a small subset of these processes, it was widely appreciated that a more comprehensive treatment was necessary to adequately represent the drivers of the circulation. But how comprehensive this treatment must be was unclear and, as Smagorinsky realized (2), could only be determined through numerical experimentation. These types of experiments have since shown that an adequate description of basic processes like cloud formation, moist convection, and mixing is what climate models miss most.

2. Emerging selection bias in large-scale climate change simulations, Kyle L. Swanson, Geophysical Research Letters, online 16th May 2013, DOI: 10.1002/grl.50562

Abstract: Climate change simulations are the output of enormously complicated models containing resolved and parameterized physical processes ranging in scale from microns to the size of the Earth itself. Given this complexity, the application of subjective criteria in model development is inevitable. Here we show one danger of the use of such criteria in the construction of these simulations, namely the apparent emergence of a selection bias between generations of these simulations. Earlier generation ensembles of model simulations are shown to possess sufficient diversity to capture recent observed shifts in both the mean surface air temperature as well as the frequency of extreme monthly mean temperature events due to climate warming. However, current generation ensembles of model simulations are statistically inconsistent with these observed shifts, despite a marked reduction in the spread among ensemble members that by itself suggests convergence towards some common solution. This convergence indicates the possibility of a selection bias based upon warming rate. It is hypothesized that this bias is driven by the desire to more accurately capture the observed recent acceleration of warming in the Arctic and corresponding decline in Arctic sea ice. However, this convergence is difficult to justify given the significant and widening discrepancy between the modeled and observed warming rates outside of the Arctic.
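The consistency check at the heart of Swanson's argument — is an observed trend still covered by the spread of a model ensemble, and does a narrower ensemble risk excluding it? — can be illustrated with a toy calculation. The sketch below uses entirely invented numbers (two hypothetical ensembles of warming trends and an invented "observed" trend; nothing here is real CMIP output) to show the form of the test, not any actual result:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: synthetic ensembles standing in for two
# model generations. All values are invented (units: K/decade).
earlier_gen = rng.normal(loc=0.20, scale=0.08, size=30)  # wide spread
current_gen = rng.normal(loc=0.22, scale=0.03, size=30)  # narrower spread
observed_trend = 0.12  # invented "observed" warming rate

def consistent(ensemble, obs, k=2.0):
    """Crude consistency check: is obs within k sample standard
    deviations of the ensemble mean?"""
    mean, std = ensemble.mean(), ensemble.std(ddof=1)
    return bool(abs(obs - mean) <= k * std)

# A wide ensemble can cover the observation while a narrower one,
# converged on a higher warming rate, may exclude it.
print(consistent(earlier_gen, observed_trend))
print(consistent(current_gen, observed_trend))
```

The point of the sketch is only that reduced ensemble spread is not by itself reassuring: if the ensemble converges away from the observation, the same test that once passed can fail.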


Perceptions of beauty

June 4, 2013

Science is Beauty


Perceptions of Beauty from Philosophy of Beauty (Department of Philosophy, University of Maryland)

Symmetry and asymmetry



Labrets in tribal societies: are they considered beautiful? If not, why wear them?

Questioning global warming dogma is taboo in Belgium

April 15, 2013

Reproduced from The GWPF because “questioning the impact of mankind on climate change is evidently still a taboo in the French-speaking world”.

The authors of this paper recently presented their views on climate science at the Royal Academy of Belgium. No French or Belgian newspaper was willing to publish their assessment. Questioning the impact of mankind on climate change is evidently still a taboo in the French-speaking world.

Double Standards in Climate Change

István E. Markó a), Alain Préat b), Henri Masson c) and Samuel Furfari d)

a) Professor at the Université catholique de Louvain (UCL)

b) Professor at the Université libre de Bruxelles (ULB)

c) Professor at Maastricht University

d) Lecturer at the Université libre de Bruxelles (ULB)

The conference on climate change held in Doha (Qatar) last December ended in failure once again. However, the news reported in the media about this 18th conference on climate change was fully in line with the well-rehearsed mantra: the Earth is warming, human emissions of greenhouse gases are mainly to blame for this warming, and we are approaching disaster. We have only one climate, but communication about it seems to be plagued by double standards.

For a few years now British, American, Italian or German media have given sceptical scientists the opportunity to express their opinions on the validity of the statements released by the Intergovernmental Panel on Climate Change (IPCC), the organisation responsible for the official line of thought on climate matters. Nothing like that has been seen in the French or Belgian media which persist in portraying scientific sceptics, at best as sold out to the oil lobbies, at worst as troubled individuals, greedy for public recognition and fame and as such not worthy to be proponents in a serious debate.

The authors of this contribution were recently granted the honour of presenting their point of view as climate sceptics at the Royal Academy of Belgium. During a series of six well-attended lectures we showed, among other things, that:

  1. The climate has always changed. This was true during ancient times and it has also been true since the beginning of the modern era. These climate changes have always been, and still are, independent of the concentration of CO2 in the atmosphere;
  2. During Roman times and the Middle Ages temperatures were observed well in excess of those currently experienced. From the 16th till the 19th century a cold period referred to as the “Little Ice Age” predominated. All these changes took place without mankind being held responsible. We believe that the increase in temperatures that occurred during a certain part of the 20th century is the result of a recovery from this cold period. These various events can be explained by a combination of warm and cold cycles of different magnitudes and duration. Why and how this happens is not yet fully understood, but some plausible explanations can be put forward;
  3. The so-called “abnormally rapid” increase in global temperatures between 1980 and 2000 is not unusual at all. There have in fact been several such periods in the past, during which temperatures rose in a similar manner and at comparable rates, even though fossil fuels were not yet in use;
  4. Temperature measurements do not necessarily correlate with a building up or a decrease in heat since heat variations are energy changes subject to thermal inertia. Apart from heat many other parameters have an influence on temperature. Moreover the measurement of temperatures is subject to numerous large errors. When the magnitude and plurality of these measurement errors are taken into account, the reported increase in temperatures is no longer statistically significant;
  5. The famous “Hockey-stick” curve, known as the Mann’s curve and presented six times by the IPCC in its penultimate report, is the result among other things of a mistake in the statistical calculations and an incorrect choice of temperature indicators, i.e. proxies. This lack of scientific rigour has totally discredited the curve and it was withdrawn, without any explanation, from subsequent IPCC reports;
  6. Even though they look formidably complex, the theoretical models employed by the climate modellers are simplified to the extreme. In fact there are far too many (known and unknown) parameters that influence climate change. At the moment it is impossible to take them all into account. The climate system is extremely complex, containing not only chaotic components but also numerous positive and negative feedback loops operating according to various different time scales. Which is why the IPCC wrote in its reports that: “…long-term prediction of future climate states is not possible” (page 774, Third report). This is very true. To this day all the climate predictions based upon these models have turned out to be totally incorrect. Strangely, nobody seems to care;
  7. The relationship between CO2 and temperature, obtained from the Vostok ice cores, shows that a building up of CO2 occurs 800 to 1000 years after an increase in temperature is observed. Hence the increase in the concentration of CO2 is a consequence of the warming of the climate, not its cause;
  8. But the coup de grâce to the “warmists’ theory” – certainly not yet visible in the French and Belgian media – comes from the observation that for the past fifteen years or so the global temperature of the Earth has remained constant. During the same period CO2 emissions have increased by far more than in the past, reaching an unparalleled record this year. Honest climate scientists admit that this observation is an embarrassing inconvenience for their theory. However, attempts to make us believe that the Earth is continuing to warm up persist. Will we have to wait for another twenty, twenty-five or thirty years for the global warming advocates to finally admit that there is no unambiguous correlation between the global temperature of the Earth and human-generated CO2 emissions?
  9. The claim that Hurricane Sandy is due to human CO2 emissions is totally unfounded and has been vigorously contested by numerous meteorologists. This regrettable distortion of the facts has been denounced in an open letter, addressed to the General Secretary of the UN and signed by more than 130 world-renowned scientists, including one of the present authors;
  10. Finally, the “abnormal” melting of the Arctic sea ice that made the headlines of numerous newspapers this summer was also observed during previous decades. Amazingly, the record increase in Antarctic sea ice that occurred at exactly the same time has been completely ignored by the very same media. Moreover, no mention has been made of the current, particularly rapid, regeneration of the Arctic sea ice.
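The lead–lag claim in point 7 — that the CO2 series follows the temperature series by some centuries in the Vostok cores — rests on a standard cross-correlation analysis. The sketch below is purely illustrative: it builds a synthetic pair of series with a known built-in lag (no real ice-core data is used) and shows how scanning shifts for the maximum correlation recovers that lag:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustration only: a random-walk "temperature" and a "CO2" series
# defined as the same walk shifted later in time by a known amount.
n, true_lag = 500, 40
temp = np.cumsum(rng.normal(size=n))   # synthetic temperature proxy
co2 = np.roll(temp, true_lag)          # CO2 = temperature, delayed
co2[:true_lag] = temp[0]               # pad the wrapped-around start

def best_lag(x, y, max_lag=100):
    """Return the shift of y (in samples) that maximises its
    correlation with x; a positive result means y lags x."""
    corrs = [np.corrcoef(x[: len(x) - L], y[L:])[0, 1]
             for L in range(max_lag + 1)]
    return int(np.argmax(corrs))

print(best_lag(temp, co2))  # prints 40, the built-in lag
```

On real proxy data the picture is noisier — dating uncertainties and smoothing in the ice record widen the confidence interval on any estimated lag — but the mechanics of the estimate are the same as in this toy.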

These ten statements are facts. We would be ready to accept that they could be wrong, if evidence were presented to scientifically disprove them. In the meantime, and in view of the lack of coherence and unreliability associated with the numerous predictions made by the IPCC, it is time to set the record straight. The public and politicians must be informed about the hypothetical character of the predominant ‘consensus’ on climate change, which has been uncritically disseminated in the media for more than ten years. If it ever existed, this so-called “climate change consensus” has now been totally undermined by the facts.

Despite the opportunity that we were given by the Royal Academy to raise this issue, we were unable to find any French or Belgian newspaper willing to publish this text. Questioning the impact of mankind on climate change is evidently still a taboo over here.

This article reflects solely the opinions of the Authors.
