Posts Tagged ‘Science’

Science needs its Gods and religion is just politics

April 11, 2021

This essay has grown from the notes of an after-dinner talk I gave last year. As I recall it was just a 20-minute talk, but making sense of my old notes led to this somewhat expanded essay. The theme, however, is true to the talk. The surrounding world is one of magic and mystery. And no amount of Science can deny the magic.

Anybody’s true belief or non-belief is a personal peculiarity, an exercise of mind and unobjectionable. I do not believe that true beliefs can be imposed from without. Imposition requires some level of coercion and what is produced can never be true belief. My disbelief can never disprove somebody else’s belief.

Disbelieving a belief brings us to zero – a null state. A belief is, by definition, the acceptance of a proposition which can neither be proved nor disproved; disbelieving it merely returns us to the null state of having no belief. It does not prove the negation of the belief.

[ (+G) – (+G) = 0, not (~G) ]
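In slightly more formal terms (a sketch only, writing B(G) for the state of believing proposition G – my notation, echoing the brackets above):

```latex
% Withdrawing a belief is not the same as believing its negation:
\neg B(G) \;\not\Longrightarrow\; B(\neg G)
% Disbelief cancels the belief and restores the null state;
% it does not establish the contrary belief:
B(G) - B(G) = \varnothing \neq B(\neg G)
```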

Of course Pooh puts it much better.




Without first having religions, atheism and agnosticism cannot exist

June 27, 2017

I take science to be the process by which areas of ignorance are explored, illuminated and then shifted into our space of knowledge. One can believe that the scientific method is powerful enough to answer all questions – eventually – by the use of our cognitive abilities. But it is nonsense to believe that science is, in itself, the answer to all questions. As the perimeter surrounding human knowledge increases, what we know that we don’t know also increases. There is what we know, and at the perimeter of what we know lies what we don’t know. Beyond that lies the boundless space of ignorance, where we don’t know what we don’t know.

Religions generally use a belief in the concept of a god (or gods) as their central tenet. By definition this is within the space of ignorance (which is where all belief lives). For some individuals the belief may be so strong that they claim it to be “personal knowledge” rather than a belief. It remains a belief though, since it cannot be proven. Buddhism takes a belief in gods to be unnecessary but – also within the space of ignorance – believes in rebirth (not reincarnation) and the “infinite” (nirvana). Atheism is just as much in the space of ignorance, since it is based on the belief that no gods, deities or supernatural things exist. Such a belief can only come into being as a reaction to others having a belief in gods or deities or the supernatural. But a denial can only be meaningful if there is first a belief to deny. If religions and their belief in gods or the supernatural did not first exist, atheism would be meaningless. Atheism merely replaces a belief in a God with a belief in a Not-God.

I take the blind worship of “science” also to be a religion in the space of ignorance. All physicists and cosmologists who believe in the Big Bang singularity effectively believe in an incomprehensible and unexplainable Creation Event. Physicists who believe in dark matter or dark energy – mysterious things vested with just the right properties to bring their theories into compliance with observations of an apparently expanding universe – are effectively invoking magic. When modern physics claims that there are 57 fundamental particles (for now) but has no explanation as to why there should be just 57 rather than 59 or 107, it takes recourse to magical events at the beginning of time. Why there should be four fundamental forces in our universe (electromagnetism, gravitation, the strong force and the weak force), and not two or three or seven, is also unknown and magical.

Agnosticism is just a reaction to the belief in gods. Whereas atheists deny the belief, agnostics merely state that such beliefs can neither be proved nor disproved; that the existence of gods or the supernatural is unknowable. But by recognising limits to what humans can know, agnosticism inherently accepts that understanding the universe lies at a “higher” level than human intelligence and cognitive abilities can cope with. That is tantamount to a belief in “magic”, where “magic” covers all things that happen or exist but which we cannot explain. Where atheism denies the answers of others, agnosticism declines to address the questions.

The Big Bang singularity, God(s), Nirvana and the names of all the various deities are all merely labels for things we don’t know in the space of what we don’t know, that we don’t know. They are all labels for different kinds of magic.

I am not sure where that leaves me. I follow no religion. I believe in the scientific method as a process but find the “religion of science” too self-righteous and too glib about its own beliefs in the space of ignorance. I find atheism mentally lazy and too negative. It is just a denial of the beliefs of others. It does not itself address the unanswerable questions. It merely tears down the unsatisfactory answers of others. Agnosticism is a cop-out. It satisfies itself by saying that the questions are too hard for us ever to answer and that it is not worthwhile to try.

I suppose I just believe in Magic – but that too is just a label in the space of ignorance.


 

Science needs some scienticians

June 18, 2014

Physic gave rise to physicians long before physics was practiced by a physicist,

Mathematics gives mathematicians, but who would trust a mathematist. 

A practitioner of an “ology” has an honourable profession,

So biologists, oncologists, archaeologists and geologists can be numbered by the million. 

Without the richness of an “ist” modern politics would be barren,

A politicist has a murky trade, but he is not a politician.

We have leftists and rightists and socialists and you can even find some libertarians,

But for all the mayhem in the world, you will not find any extremians.

Environmentalists and conservationists are politically very fashionable,

But their devious methods have now become – rather questionable. 

Philosophy was where it started but we rarely refer to philosophists,

And many of the scientists of today are little more than sophists. 

It was only in 1840 that scientists were one of Whewell’s inventions,

But they are now two-a-penny, and we could do with a few scienticians.

It should be quite clear that I think that there are far too many who claim to be scientists though they do no science. It then becomes useful to distinguish the real scienticians from the rabble. And perhaps the same could apply to the real economians among the multitude of clerks who call themselves economists.

Number of citations and excellence in science

February 10, 2014

Scientific excellence can only truly be judged by history. But history has eyes only for impact, and if excellent science causes no great change to scientific orthodoxy it is soon forgotten. For a scientist, the judgements of history long after he performs his science are of no real significance. Even where academic freedom is the main motivator for the scientist, the degrees of freedom available are related to academic success. An academic or scientific career depends increasingly on contemporaneous judgements – and here social networking, peer review and bibliometric factors are decisive. There may well be some correlation between academic success and the “goodness” of the scientist, but it is not the success or the bibliometrics which are causative.

As Lars Walloe puts it: Walloe-on-Exellence

In the evaluation process many scientists and nearly all university and research council administrators love all kinds of bibliometric tools. This has of course a simple explanation. The “bureaucracy” likes to have a simple quantitative tool, which can be used with the aid of a computer and the internet to give an “objective” measure of excellence. However, excellence is not directly related either to the impact factor of the journal in which the work is published, or to the number of citations, or to the number of papers published, or even to some other more sophisticated bibliometric indices. Of course there is some correlation, but it is in my judgement weaker than what many would like to believe, and uncritical use of these tools easily leads to wrong conclusions. For instance the impact factor of a journal is mainly determined by the very best papers published in it and not so much by the many ordinary papers published. We know well that even in high impact factor journals like Science and Nature, or high impact journals in more specialized fields, from time to time not so excellent papers are being published. 

…..  I often meet scientists for whom obtaining high bibliometric factors serves as a prime guidance in their work. Too many of them are really not that good, but were just lucky or work in a field where it was easier to get many citations. ….. If you are working with established methods in a popular field you can be fairly sure to get your papers published. I can mention in detail some medical fields where I know that this has happened or is happening today. The scientists in such fields get a high number of publications and citations, but the research is not necessarily excellent. 
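Walloe’s point about impact factors being set by a journal’s very best papers is easy to illustrate numerically. A toy simulation (a sketch only; the lognormal distribution and all parameters are my assumptions, chosen to mimic the well-known heavy tail of citation counts):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy journal: 200 papers, citation counts drawn from a
# heavy-tailed (lognormal) distribution -- illustrative only.
citations = rng.lognormal(mean=1.0, sigma=1.5, size=200)

impact = citations.mean()              # crude stand-in for an impact factor
top_decile = np.sort(citations)[-20:]  # the best 10% of papers

print(f"mean citations per paper:   {impact:.1f}")
print(f"median citations per paper: {np.median(citations):.1f}")
print(f"share of all citations held by the top 10%: "
      f"{top_decile.sum() / citations.sum():.0%}")
# The mean (the 'impact factor') sits far above the median paper:
# a handful of highly cited papers carry the average.
```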

And getting your paper published has now become so important in the advancement of an academic career that journals are proliferating. Many of the new journals have shifted their business models to be based on authors’ fees rather than on volume of readership. This is a very “safe” business model, since profits are ensured before the journal has even been published, and if the journal is an on-line journal then costs are minimal. It is virtually the “self-publishing” of papers. You pay your money and get your paper published.

The reality today is that more papers are being published by more authors in more journals than ever before. But fewer are actually being read. Papers are cited without having been read – let alone understood.

Skeptical Scalpel:

Another reason could be that publishers, particularly those who charge authors fees for publishing, are in the business of making money.

Authoring journal articles not only enhances one’s CV (the old “publish or perish” cliché), it is required by Residency Review Committees as evidence of “scholarly activity” in training programs. Maybe it’s good for attracting referrals too.

The publish or perish ethos has led to a proliferation in the number of authors per paper!

First noted in 1993 by a paper in Acta Radiologica and a letter in the BMJ, the number of authors per paper has risen dramatically over the years. A study of 12 radiology journals found the number of authors per paper doubled from 2.2 in 1966 to 4.4 in 1991. A review of Neurosurgery and the Journal of Neurosurgery spanned 50 years: the average went from 1.8 authors per article in 1945 to 4.6 authors in 1995. Of note, the above two articles were each written by a single author.

Three psychiatrists from Dartmouth analyzed original scientific articles in four of the most prestigious journals in the United States—Archives of Internal Medicine, Annals of Internal Medicine, Journal of the American Medical Association, and the New England Journal of Medicine—from 1980 to 2000. They found that the mean number of authors per paper increased from 4.5 to 6.9. The same is true for two plastic surgery journals, which saw the average number of authors go from 1.4 to 4.0 and from 1.7 to 4.2 in the 50 years from 1955 to 2005. The number of single-author papers went from 78% to 3% in one journal and from 51% to 8% in the other.

In orthopedics, a review of the American and British versions of the Journal of Bone and Joint Surgery for the 60 years from 1949 to 2009 showed an increase of authors per paper from 1.6 to 5.1. An impressive rise in the number of authors took place in two leading thoracic surgery journals: for the Journal of Thoracic and Cardiovascular Surgery the increase was from 1.4 in 1936 to 7.5 in 2006, and for the Annals of Thoracic Surgery it was from 3.1 in 1966 to 6.8 in 2006.

And the winner is a paper with 3171 authors! Needless to say, it comes from Big Science and the Large Hadron Collider:

the paper with the most authors is “Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC” in a journal called “Physics Letters B”, with 3171 authors. The list of authors takes up 9 full pages.

Too many journals, too many papers, too many authors and too many citations. But that does not mean there is more excellence in science.

Numeracy and language

December 2, 2013

I tend towards considering mathematics a language rather than a science. In fact mathematics is more like a family of languages, each with a rigorous grammar. I like this quote:

R. L. E. Schwarzenberger, The Language of Geometry, in A Mathematical Spectrum Miscellany, Applied Probability Trust, 2000, p. 112:

My own attitude, which I share with many of my colleagues, is simply that mathematics is a language. Like English, or Latin, or Chinese, there are certain concepts for which mathematics is particularly well suited: it would be as foolish to attempt to write a love poem in the language of mathematics as to prove the Fundamental Theorem of Algebra using the English language.

Just as conventional languages enable culture and provide a tool for social communication, the various languages of mathematics, I think, enable science and provide a tool for scientific discourse. I take “science” here to be analogous to a “culture”. To follow that thought then: just as science is embedded within a “larger” culture, so is mathematics embedded within conventional languages. This embedding shows up as the ability of a language to deal with numeracy and numerical concepts.

And that means that the value judgement of what is “primitive”, when applied to a language, can depend upon the extent to which mathematics, and therefore numeracy, is embedded within that language.

GeoCurrents examines numeracy embedded within languages:

According to a recent article by Mike Vuolo in Slate.com, Pirahã is among “only a few documented cases” of languages that almost completely lack numbers. Dan Everett, a renowned expert in the Pirahã language, further claims that the lack of numeracy is just one of many linguistic deficiencies of this language, which he relates to gaps in the Pirahã culture. ….. 

The various types of number systems are considered in the WALS.info article on Numeral Bases, written by Bernard Comrie. Of the 196 languages in the sample, 88% can handle an infinite set of numerals. To do so, languages use some arithmetic base to construct numeral expressions. According to Comrie, “we live in a decimal world”: two thirds of the world’s languages use base 10 and such languages are spoken “in nearly every part of the world”. English, Russian, and Mandarin are three examples of such languages. ….. 

Around 20% of the world’s languages use either a purely vigesimal (base-20) or a hybrid vigesimal-decimal system. In a purely vigesimal system, the base is consistently 20, yielding the general formula for constructing numerals as x×20 + y. For example, in Diola-Fogny, a Niger-Congo language spoken in Senegal, 51 is expressed as bukan ku-gaba di uɲɛn di b-əkɔn ‘two twenties and eleven’. Other languages with a purely vigesimal system include Arawak spoken in Suriname, Chukchi spoken in the Russian Far East, Yimas in Papua New Guinea, and Tamang in Nepal. In a hybrid vigesimal-decimal system, numbers up to 99 use base 20, but the system then shifts to being decimal for the expression of the hundreds, so that one ends up with expressions of the type x×100 + y×20 + z. A good example of such a system is Basque, where 256 is expressed as berr-eun eta berr-ogei-ta-hama-sei ‘two hundred and two-twenty-and-ten-six’. Other hybrid vigesimal-decimal systems are found in Abkhaz in the Caucasus, Burushaski in northern Pakistan, Fulfulde in West Africa, Jakaltek in Guatemala, and Greenlandic. In a few mostly decimal languages, moreover, a small proportion of the overall numerical system is vigesimal. In French, for example, numerals in the range 80-99 have a vigesimal structure: 97 is thus expressed as quatre-vingt-dix-sept ‘four-twenty-ten-seven’. Only five languages in the WALS sample use a base that is neither 10 nor 20. For instance, Ekari, a Trans-New Guinean language spoken in Indonesian Papua, uses a base of 60, as did the ancient Near Eastern language Sumerian, which has bequeathed to us our system of counting seconds and minutes. Besides Ekari, non-10-non-20-base languages include Embera Chami in Colombia, Ngiti in Democratic Republic of Congo, Supyire in Mali, and Tommo So in Mali. …… 

Going back to the various types of counting, some languages use a restricted system that does not effectively go above around 20, and some languages are even more limited, as is the case in Pirahã. The WALS sample contains 20 such languages, all but one of which are spoken in either Australia, highland New Guinea, or Amazonia. The one such language found outside these areas is !Xóõ, a Khoisan language spoken in Botswana. ……. 

Read the whole article. 
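The arithmetic behind these counting systems is simple enough to sketch in code. A minimal illustration (my own, not from the article) of how a decimal, a purely vigesimal, and a hybrid vigesimal-decimal language would decompose the same numbers:

```python
def decimal(n):
    """Base-10 decomposition, e.g. 256 -> 2x100 + 5x10 + 6."""
    return f"{n // 100}x100 + {n % 100 // 10}x10 + {n % 10}"

def vigesimal(n):
    """Pure base-20: n = x*20 + y, as in Diola-Fogny."""
    return f"{n // 20}x20 + {n % 20}"

def hybrid(n):
    """Decimal hundreds, base-20 remainder, as in Basque."""
    return f"{n // 100}x100 + {n % 100 // 20}x20 + {n % 20}"

print(vigesimal(51))   # 2x20 + 11  -- 'two twenties and eleven'
print(vigesimal(97))   # 4x20 + 17  -- cf. French quatre-vingt-dix-sept
print(hybrid(256))     # 2x100 + 2x20 + 16  -- the Basque pattern
print(decimal(256))    # 2x100 + 5x10 + 6
```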

Counting monkey?

In some societies in the ancient past, numeracy did not contribute significantly to survival, as was probably the case with isolated tribes like the Pirahã. But in most human societies numeracy was of significant benefit, especially for cooperation between different bands of humans. I suspect that it was the need for social cooperation which fed the need for communication within a tribe and among tribes, which in turn was the spur to the development of language, perhaps over 100,000 years ago. What instigated the need to count is in the realm of speculation. The need for a calendar would only have developed with the development of agriculture. But the need for counting herds probably came earlier, in a semi-nomadic phase. Even earlier than that would have come the need to trade with other hunter-gatherer groups, and that probably gave rise to counting 50,000 years ago or even earlier. The tribes who learned to trade and developed the ability and concepts of trading were probably the tribes that had the best prospects of surviving while moving from one territory to another. It could be that the ability to trade was an indicator of how far a group could move.

And so I am inclined to think that numeracy in language became a critical factor which, 30,000 to 50,000 years ago, determined the groups which survived and prospered. It may well be that it is these tribes which developed numbers, and learned to count, and learned to trade, that eventually populated most of the globe. It may be a little far-fetched, but not impossible, that numeracy in language was one of the features distinguishing Anatomically Modern Humans from Neanderthals – even though the Neanderthals had larger brains, and even though we are all Neanderthals to some extent!

Science is losing its ability to self-correct

October 20, 2013

With the explosion in the number of researchers, the increasing rush to publication and the corresponding proliferation of traditional and on-line journals as avenues of publication, The Economist carries an interesting article making the point that the assumption that science is self-correcting is under extreme pressure. “There is no cost to getting things wrong,” says Brian Nosek, a psychologist at the University of Virginia who has taken an interest in his discipline’s persistent errors. “The cost is not getting them published.”

The field of psychology, and especially social psychology, has been much in the news over the dangers of “priming”.

“I SEE a train wreck looming,” warned Daniel Kahneman, an eminent psychologist, in an open letter last year. The premonition concerned research on a phenomenon known as “priming”. Priming studies suggest that decisions can be influenced by apparently irrelevant actions or events that took place just before the cusp of choice. They have been a boom area in psychology over the past decade, and some of their insights have already made it out of the lab and into the toolkits of policy wonks keen on “nudging” the populace.

Dr Kahneman and a growing number of his colleagues fear that a lot of this priming research is poorly founded. Over the past few years various researchers have made systematic attempts to replicate some of the more widely cited priming experiments. Many of these replications have failed. In April, for instance, a paper in PLoS ONE, a journal, reported that nine separate experiments had not managed to reproduce the results of a famous study from 1998 purporting to show that thinking about a professor before taking an intelligence test leads to a higher score than imagining a football hooligan.

It is not just “soft” fields which have problems. It is apparent that in medicine a large number of published results cannot be replicated:

… irreproducibility is much more widespread. A few years ago scientists at Amgen, an American drug company, tried to replicate 53 studies that they considered landmarks in the basic science of cancer, often co-operating closely with the original researchers to ensure that their experimental technique matched the one used first time round. According to a piece they wrote last year in Nature, a leading scientific journal, they were able to reproduce the original results in just six. Months earlier Florian Prinz and his colleagues at Bayer HealthCare, a German pharmaceutical giant, reported in Nature Reviews Drug Discovery, a sister journal, that they had successfully reproduced the published results in just a quarter of 67 seminal studies.

The governments of the OECD, a club of mostly rich countries, spent $59 billion on biomedical research in 2012, nearly double the figure in 2000. One of the justifications for this is that basic-science results provided by governments form the basis for private drug-development work. If companies cannot rely on academic research, that reasoning breaks down. When an official at America’s National Institutes of Health (NIH) reckons, despairingly, that researchers would find it hard to reproduce at least three-quarters of all published biomedical findings, the public part of the process seems to have failed.

It is not just that research results cannot be replicated. So much of what is published is just plain wrong, and the belief that science is self-correcting is itself under pressure:

Academic scientists readily acknowledge that they often get things wrong. But they also hold fast to the idea that these errors get corrected over time as other scientists try to take the work further. Evidence that many more dodgy results are published than are subsequently corrected or withdrawn calls that much-vaunted capacity for self-correction into question. There are errors in a lot more of the scientific papers being published, written about and acted on than anyone would normally suppose, or like to think. …… Statistical mistakes are widespread. The peer reviewers who evaluate papers before journals commit to publishing them are much worse at spotting mistakes than they or others appreciate. Professional pressure, competition and ambition push scientists to publish more quickly than would be wise. A career structure which lays great stress on publishing copious papers exacerbates all these problems. “There is no cost to getting things wrong,” says Brian Nosek, a psychologist at the University of Virginia who has taken an interest in his discipline’s persistent errors. “The cost is not getting them published.” 

…… In 2005 John Ioannidis, an epidemiologist from Stanford University, caused a stir with a paper showing why, as a matter of statistical logic, the idea that only one such paper in 20 gives a false-positive result was hugely optimistic. Instead, he argued, “most published research findings are probably false.” 
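Ioannidis’s “statistical logic” is worth spelling out. If only a fraction of the hypotheses being tested are actually true, then even with a 5% significance threshold most “positive” findings can be wrong. A minimal sketch of the textbook calculation, with illustrative numbers of my own choosing:

```python
def ppv(prior, alpha=0.05, power=0.8):
    """Positive predictive value: of all 'significant' results,
    what fraction reflect a true effect?"""
    true_pos = power * prior          # true hypotheses correctly detected
    false_pos = alpha * (1 - prior)   # false hypotheses passing the test
    return true_pos / (true_pos + false_pos)

# If only 1 in 10 tested hypotheses is actually true...
print(f"{ppv(0.10):.0%}")              # ~64% of positive findings are real
# ...and with the low statistical power common in practice it gets worse:
print(f"{ppv(0.10, power=0.2):.0%}")   # ~31%
```

The "one in 20 is a false positive" intuition only holds if every hypothesis tested were true; with a low prior and low power, most published positives can indeed be false.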

The tendency to publish only positive results also leads to statistics being skewed, so that results can be shown as positive:

The negative results are much more trustworthy; …….. But researchers and the journals in which they publish are not very interested in negative results. They prefer to accentuate the positive, and thus the error-prone. Negative results account for just 10-30% of published scientific literature, depending on the discipline. This bias may be growing. A study of 4,600 papers from across the sciences conducted by Daniele Fanelli of the University of Edinburgh found that the proportion of negative results dropped from 30% to 14% between 1990 and 2007. Lesley Yellowlees, president of Britain’s Royal Society of Chemistry, has published more than 100 papers. She remembers only one that reported a negative result.

…. Other data-heavy disciplines face similar challenges. Models which can be “tuned” in many different ways give researchers more scope to perceive a pattern where none exists. According to some estimates, three-quarters of published scientific papers in the field of machine learning are bunk because of this “overfitting”
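The “overfitting” complaint is easy to demonstrate. In the sketch below (a deliberately naive illustration of mine), a model with as many tunable parameters as data points “finds” a near-perfect pattern in pure noise, and then fails completely on fresh data from the same patternless source:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = rng.normal(size=10)        # pure noise: there is no pattern to find

# A 9th-degree polynomial has as many knobs as there are data points,
# so it can be "tuned" to pass through every point.
coeffs = np.polyfit(x, y, deg=9)
fit = np.polyval(coeffs, x)
print("in-sample error:", np.abs(fit - y).max())        # essentially zero

# On new draws from the same noise source the "pattern" is worthless.
y_new = rng.normal(size=10)
print("out-of-sample error:", np.abs(fit - y_new).mean())
```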

The idea that peer review provides some kind of quality check on the results being published is grossly optimistic:

The idea that there are a lot of uncorrected flaws in published studies may seem hard to square with the fact that almost all of them will have been through peer-review. This sort of scrutiny by disinterested experts—acting out of a sense of professional obligation, rather than for pay—is often said to make the scientific literature particularly reliable. In practice it is poor at detecting many types of error.

John Bohannon, a biologist at Harvard, recently submitted a pseudonymous paper on the effects of a chemical derived from lichen on cancer cells to 304 journals describing themselves as using peer review. An unusual move; but it was an unusual paper, concocted wholesale and stuffed with clangers in study design, analysis and interpretation of results. Receiving this dog’s dinner from a fictitious researcher at a made up university, 157 of the journals accepted it for publication. ….

……. As well as not spotting things they ought to spot, there is a lot that peer reviewers do not even try to check. They do not typically re-analyse the data presented from scratch, contenting themselves with a sense that the authors’ analysis is properly conceived. And they cannot be expected to spot deliberate falsifications if they are carried out with a modicum of subtlety.

Fraud is very likely second to incompetence in generating erroneous results, though it is hard to tell for certain. 

And then there is the issue that results from Big Science can never be replicated because the cost of the initial work is so high. Medical research and clinical trials are also extremely expensive. Journals have no great interest in publishing replications (even when they are negative). And then, to compound the issue, those who provide funding are less likely to extend funding merely for replication or for negative results.

People who pay for science, though, do not seem seized by a desire for improvement in this area. Helga Nowotny, president of the European Research Council, says proposals for replication studies “in all likelihood would be turned down” because of the agency’s focus on pioneering work. James Ulvestad, who heads the division of astronomical sciences at America’s National Science Foundation, says the independent “merit panels” that make grant decisions “tend not to put research that seeks to reproduce previous results at or near the top of their priority lists”. Douglas Kell of Research Councils UK, which oversees Britain’s publicly funded research, argues that current procedures do at least tackle the problem of bias towards positive results: “If you do the experiment and find nothing, the grant will nonetheless be judged more highly if you publish.” 

Trouble at the lab 

The rubbish will only decline when there is a cost to publishing shoddy work which outweighs the gains of adding to a researcher’s list of publications. At some point researchers will need to be held liable and accountable for their products (their publications) – not just for fraud or misconduct, but even for negligence or gross negligence when they do not carry out their work using the best available practices of the field. These are standards that some (but not all) professionals are held to, and there should be no academic researcher who is not also subject to such a standard. If peer review is to recover some of its lost credibility then anonymous reviews must disappear, and reviewers must be much more explicit about what they have checked and what they have not.

Fluid jets and fishbones

August 9, 2013

Just a few examples from a striking gallery of pictures by John W. M. Bush (MIT Mathematics).

Colliding jets and the patterns that ensue.

[Images from the gallery: fluid fishbones and colliding jets, John W. M. Bush]

We examine the form of the free surface flows resulting from the collision of equal jets at an oblique angle. Glycerol-water solutions with viscosities of 15-50 cS were pumped at flow rates of 10-40 cc/s through circular outlets with diameter 2 mm. … At low flow rates, the resulting stream takes the form of a steady fluid chain, a succession of mutually orthogonal fluid links, each comprised of a thin oval sheet bound by relatively thick fluid rims. The influence of viscosity serves to decrease the size of successive links, and the chain ultimately coalesces into a cylindrical stream. As the flow rate is increased, waves are excited on the sheet, and the fluid rims become unstable.  The rim appears blurred to the naked eye; however, strobe illumination reveals a remarkably regular and striking flow instability. Droplets form from the sheet rims but remain attached to the fluid sheet by tendrils of fluid that thin and eventually break. The resulting flow takes the form of fluid fishbones, with the fluid sheet being the fish head and the tendrils its bones. Increasing the flow rate serves to broaden the fishbones.  In the wake of the fluid fish, a regular array of drops obtains, the number and spacing of which is determined by the pinch-off of the fishbones. 
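For a rough feel for the flow regime, the jet Reynolds number can be estimated from the figures quoted above (a back-of-envelope calculation of mine, using mid-range values; not from the paper):

```python
import math

# Mid-range values from the quoted abstract
Q = 25e-6    # flow rate: 25 cc/s in m^3/s
d = 2e-3     # outlet diameter: 2 mm
nu = 30e-6   # kinematic viscosity: 30 cS = 30e-6 m^2/s

A = math.pi * (d / 2) ** 2   # outlet cross-section
v = Q / A                    # mean jet velocity, ~8 m/s
Re = v * d / nu              # jet Reynolds number

print(f"jet velocity ~ {v:.1f} m/s, Reynolds number ~ {Re:.0f}")
# Re of a few hundred: laminar jets, consistent with the steady
# chains and sheets described, rather than turbulent breakup.
```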

h/t Science is Beauty

 

Noted in passing 21st July 2013

July 21, 2013

Map Projections Galore

More on cartography and map projections.

The linguistic forensics which unmasked JK Rowling as the mystery author Robert Galbraith.

The drop of tar pitch finally fell after 69 years.

Singing in unison in a choir leads to heart beats being synchronised.

The Indian monsoon is almost half-over and rainfall is running 16% above the long-term average. In spite of the floods in Uttarakhand, this monsoon will probably be classified as a “good” monsoon.

A Viking trading post, Steinkjer, mentioned in the Norse sagas and dating from 1000 years ago, has probably been identified.

The evidence is mounting that there was a pre-Toba expansion Out of Africa and into Asia around 90-100,000 years ago, followed by another post-Toba expansion which then went all the way to Australia. The second wave would have mixed with the first-wave survivors of the Toba eruption, who were probably the first AMH to intermingle with the Denisovans.

The shale gas bonanza continues in the UK and the advantages are being pushed hard even by Bjorn Lomborg.

Climate Models and Pepsodent

June 10, 2013

You’ll wonder where the warming went

when you brush your models with excrement

– with apologies to Pepsodent

Climate models just aren’t good enough – yet.

As real observations increasingly diverge from model results, the global warming establishment is reacting in two ways:

  1. Denial, by the Warmist orthodoxy who prefer model results to real data, and
  2. Questioning, by real scientists who have begun to question the assumptions on which these models are based.

Two articles have recently been published in the mainstream scientific literature which question climate models.

1. What Are Climate Models Missing?, Bjorn Stevens and Sandrine Bony, Science, 31 May 2013, Vol. 340 no. 6136, pp. 1053-1054, DOI: 10.1126/science.1237554

Abstract: Fifty years ago, Joseph Smagorinsky published a landmark paper (1) describing numerical experiments using the primitive equations (a set of fluid equations that describe global atmospheric flows). In so doing, he introduced what later became known as a General Circulation Model (GCM). GCMs have come to provide a compelling framework for coupling the atmospheric circulation to a great variety of processes. Although early GCMs could only consider a small subset of these processes, it was widely appreciated that a more comprehensive treatment was necessary to adequately represent the drivers of the circulation. But how comprehensive this treatment must be was unclear and, as Smagorinsky realized (2), could only be determined through numerical experimentation. These types of experiments have since shown that an adequate description of basic processes like cloud formation, moist convection, and mixing is what climate models miss most.

2. Emerging selection bias in large-scale climate change simulations, Kyle L. Swanson, Geophysical Research Letters, online 16th May 2013, DOI: 10.1002/grl.50562

Abstract: Climate change simulations are the output of enormously complicated models containing resolved and parameterized physical processes ranging in scale from microns to the size of the Earth itself. Given this complexity, the application of subjective criteria in model development is inevitable. Here we show one danger of the use of such criteria in the construction of these simulations, namely the apparent emergence of a selection bias between generations of these simulations. Earlier generation ensembles of model simulations are shown to possess sufficient diversity to capture recent observed shifts in both the mean surface air temperature as well as the frequency of extreme monthly mean temperature events due to climate warming. However, current generation ensembles of model simulations are statistically inconsistent with these observed shifts, despite a marked reduction in the spread among ensemble members that by itself suggests convergence towards some common solution. This convergence indicates the possibility of a selection bias based upon warming rate. It is hypothesized that this bias is driven by the desire to more accurately capture the observed recent acceleration of warming in the Arctic and corresponding decline in Arctic sea ice. However, this convergence is difficult to justify given the significant and widening discrepancy between the modeled and observed warming rates outside of the Arctic.
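Swanson’s consistency test amounts to asking whether the observed trend still falls within the ensemble spread. A toy illustration with made-up numbers (purely illustrative; not the paper’s data or its actual statistical method):

```python
import numpy as np

rng = np.random.default_rng(1)
observed = 0.11  # hypothetical observed warming rate (made-up number)

# "Earlier generation": wide spread of simulated trends.
gen_early = rng.normal(loc=0.20, scale=0.08, size=30)
# "Later generation": narrower spread, converged on a higher rate.
gen_late = rng.normal(loc=0.22, scale=0.02, size=30)

def brackets(ensemble, obs):
    """Crude check: does the observation lie inside the ensemble range?"""
    return ensemble.min() <= obs <= ensemble.max()

for name, ens in (("early", gen_early), ("late", gen_late)):
    print(f"{name}: spread={ens.std():.3f}, "
          f"brackets observation: {brackets(ens, observed)}")
# A narrower ensemble can look more 'confident' while becoming
# statistically inconsistent with the observation -- Swanson's point.
```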

                                                                                                                                 

Perceptions of beauty

June 4, 2013

Science is Beauty

Perceptions of Beauty, from Philosophy of Beauty (Department of Philosophy, University of Maryland)

Symmetry and asymmetry

 


Labrets in tribal societies: are they considered beautiful? If not, why wear them?

