Could Mount Agung eruption be the VEI 5+ volcano that is overdue?

October 16, 2017

We have not had a VEI 5+ volcanic eruption for 26 years, since the Mount Pinatubo eruption in 1991. In 2015 I pointed out that the probability of a VEI 5+ eruption within 5 years was over 95%. The probability of a VEI 5+ eruption within the next two years must now be approaching 99%.

(see also The next VEI 5+ volcanic eruption is overdue).

It has been 26 years since the last VEI 5+ eruption (Mount Pinatubo, 1991, VEI 6) occurred, and the probability that a VEI 5+ volcanic eruption will occur within the next 5 years is now over 95%. There are around 10 – 14 VEI 5+ eruptions every hundred years, and for the last 300 years the time between eruptions has been as short as 1 year and as long as 23 years. The current gap could be the longest recorded in three centuries. There are, on average, 2 eruptions of intensity 6 every hundred years, so the probability that an eruption of VEI 6 could occur within 5 years is about 50% (current gap 26 years, average gap 50 years). The probability that a supervolcanic eruption of VEI 7 or greater could occur within the next 5 years is less than 1%.
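The eruption statistics above can be sanity-checked with a simple Poisson model. This is an assumption on my part – a Poisson process is memoryless, so it cannot strictly capture the idea of being "overdue" – but it does show how improbable a 26-year gap is:

```python
import math

# Assumed rate: 12 VEI 5+ eruptions per century (midpoint of the 10-14 quoted above)
rate_per_year = 12 / 100

def p_at_least_one(years, rate=rate_per_year):
    """Probability of at least one eruption in a window of `years`,
    assuming eruptions arrive as a Poisson process."""
    return 1 - math.exp(-rate * years)

# Probability that any given 26-year window contains at least one VEI 5+ event
print(f"P(>=1 VEI 5+ in 26 years) = {p_at_least_one(26):.2f}")        # ~0.96
# The same model for VEI 6 events (about 2 per century)
print(f"P(>=1 VEI 6 in 26 years)  = {p_at_least_one(26, 0.02):.2f}")  # ~0.41
```

On this simple model, a gap as long as the current one would be expected only about 4% of the time, which is consistent with calling the next VEI 5+ eruption overdue.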

Mount Agung’s volcanic activity is now reaching levels which suggest that an eruption is “imminent”.

Volcano Agung in Bali is showing worrisome signs of a major eruption, writes German climate blogger Schneefan. The highest level of activity, with multiple tremor episodes, has just been recorded. You can monitor Agung via live cam and live seismogram.

The 3000-metre tall Agung has been at the highest warning level 4 since September 21.

Schneefan writes that the lava rise has started and that “an eruption can be expected at any time”.

So far some 140,000 people have been evacuated from the area of hazard, which extends up to 12 km from the volcano. Schneefan writes:

Yesterday ground activity by far exceeded the previous high level. Quakes have become more frequent and stronger, which indicates a stronger magma flow (see green in the histogram). Since October 13 there has been, for the first time, “non-harmonic trembling (tremor), which can be seen in red at the top of the last two bars of the histogram.”

The colors of the columns in the bar chart stand, from bottom to top, for perceptible earthquakes (blue), low earthquakes (green) and surface quakes (orange). Just recently red appeared, signifying non-harmonic tremors. The seismogram below shows what are at times longer-period quakes, meaning magma is flowing violently in the volcano.

Since yesterday the seismogram for AGUNG has been showing powerful rumbling (red).

The seismogram of AGUNG shows powerful tremors (level RED). The seismogram is updated every 3 minutes.

Because Agung is located near the equator, a major eruption with ash flying up into the stratosphere would have short-term climatic impacts that could last a few years.

Agung last erupted in 1963, with an explosivity index of VEI 5, sending a plume of ash some 25 km into the atmosphere and leading to a cooling of 0.5°C. The eruption of Pinatubo in the Philippines in 1991 also led to a global cooling of 0.5°C.




October 8, 2017

A little irritated today by some stupid behaviour.


Fresh water is globally abundant but often locally mismanaged

October 7, 2017


There is no global water shortage. In many places, the scarcity that exists is caused by bad local planning and inadequate storage and distribution, but there is no global shortage. Variable climate and weather and “natural” catastrophes are often blamed, but wherever scarcity is experienced, it is a lack of planning and/or unbridled expansion and/or a profligate use of available resources which is the real cause.

Abundance or scarcity

Alarmist organisations tend to paint a bleak future:

WWF: Only 3% of the world’s water is fresh water, and two-thirds of that is tucked away in frozen glaciers or otherwise unavailable for our use. As a result, some 1.1 billion people worldwide lack access to water, and a total of 2.7 billion find water scarce for at least one month of the year.

This is a false picture. Even just one third of the “3% of the world’s water” is equivalent to over one million years of water at current utilisation levels. The amount of global fresh water is not the cause of any scarcity.

More importantly, what they fail to mention is that all human activities use just 0.01% (one ten-thousandth) of the annual precipitation over land alone. It is not that scarcities don’t exist, but they exist mainly because of inadequate planning, mismanagement, incompetence and plain stupidity among those affected. Local resources are grossly overstretched, or adequate storage is not planned for by the societies affected. Urban areas expand without planning and exhaust the easily accessible ground water resources. Rainwater is not harnessed locally. Distribution is ignored. Rivers are polluted upstream. Ignoring common sense has consequences. Often this rank stupidity is at the local community or national government level. The problems are local and the solutions are local. It is easier for local bodies to demand some all-encompassing solution to their problems, but this is a red herring to divert responsibility away from local incompetence.

Water scarcities are much more due to the local non-application of intelligence than any global or climate or weather events.

Water on Earth

The quantity of water on earth has been pretty constant for about 4 billion years. All the water on earth can be accounted for in 3 main categories:

  1. that trapped in molten rocks,
  2. saline water and
  3. “fresh” water.

For the most part, water on earth is neither created nor destroyed. Fresh water is essential for human activity, but human activity does not destroy water. A relatively minuscule amount of water is created by combustion and bio-degradation processes. Human activity does, however, contaminate “fresh” water and adds an extra stop for the precipitation over land as it finds its way back to the oceans; it therefore adds some time delay to a small quantity of water within the “normal” water cycle. Human activity requires “fresh” rather than saline water, but all global activity uses less than 10×10^12 kg/year. Almost 92% of this is for agricultural use (crops, pasture and animals), around 4% for industrial use and less than 4% for domestic use.

The primary – and overwhelmingly dominant – source of fresh water is precipitation (rain, snow, sleet, hail, fog, dew …) from the atmosphere. Global precipitation delivers about 506×10^15 kg/year, but most of this falls over the oceans. Over the land mass of the earth alone, precipitation delivers about 108×10^15 kg/year of fresh water. All human activity thus uses only about 0.01% (one ten-thousandth) of the precipitation that falls over land. Fresh water scarcity is clearly non-existent at a global level and when averaged over time. Scarcities – and some severe scarcities – occur locally, at certain times. At any instant the atmosphere holds, on average, about 12.6×10^15 kg (12.6 P kg), which itself represents about 1,300 years of total human consumption. (With a total precipitation rate of 506×10^15 kg/year, water has a residence time in the atmosphere of only about 9 days.) It is not, therefore, a too-small atmospheric reservoir which causes the local scarcities. Global reservoirs of fresh ground water (under the surface in liquid form) amount to 10×10^18 kg, which is about 1 million years of human utilisation. But these reservoirs are not evenly spread around the world and some are much deeper and less accessible than others. Lakes and swamps and rivers hold some 100×10^15 kg of fresh water, which represents about 10,000 years of human utilisation. The global capacity of fresh water reservoirs on the surface (10,000 years) and of ground water (1 million years) poses no limitation on human utilisation. The geographical location of these reservoirs and the seasonal nature of their flows can and do place limits, and that, combined with a dearth of common sense, leads to local scarcities.
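As a rough sanity check, the reservoir and flow figures above can be put together in a few lines (a sketch using the rounded numbers quoted in this post):

```python
# All masses in kg, flows in kg/year, using the rounded figures quoted above
precip_global = 506e15   # annual precipitation over the whole planet
precip_land   = 108e15   # annual precipitation over land
human_use     = 10e12    # total annual human utilisation
atmosphere    = 12.6e15  # water held in the atmosphere at any instant
ground_water  = 10e18    # global fresh ground water
surface_fresh = 100e15   # lakes, swamps and rivers

print(f"human use / land precipitation : {human_use / precip_land:.4%}")          # ~0.01%
print(f"atmosphere / human use         : {atmosphere / human_use:,.0f} years")    # ~1,300
print(f"atmospheric residence time     : {atmosphere / precip_global * 365:.0f} days")  # ~9
print(f"ground water / human use       : {ground_water / human_use:,.0f} years")  # 1,000,000
print(f"surface fresh / human use      : {surface_fresh / human_use:,.0f} years") # 10,000
```

The ratios reproduce the figures in the text: human utilisation is a rounding error against the flows and reservoirs involved.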

When we include all the water thought to be trapped in the earth’s mantle, we don’t really know how much water there is on earth. We don’t really know how the water got to earth in the first place either. Unlike hydrogen, water vapour in the earth’s atmosphere does not leak out into space. Some surface water is continuously lost into the earth’s interior as tectonic plate movement leads to subduction of land mass. Equally, some water from the earth’s crust reaches the surface through volcanic eruptions, when hydroxyl radicals (OH) trapped within molten rocks are released by degassing. It is thought that the mass of water on earth has remained virtually constant for billions of years. Very little water is “destroyed” by decomposition into hydrogen and oxygen, and very little is “created” (primarily by combustion of hydrocarbons).

How the water came to be on earth is a matter of speculation. It could have been there in the original mix as elemental hydrogen and oxygen which later combined and was then trapped in molten rock. It may have been brought to the earth’s surface as the earth cooled and violent volcanic or tectonic activity brought degassing molten rocks to the surface. Or it could have come from some icy comets or asteroids or meteors which collided with the earth in its infancy. Studies of some zircons indicate that water was present on earth as far back as 4 billion years ago (with the earth’s formation put at 4.5 billion years ago).

The mass of the Earth is estimated to be 5.98×10^24 kg (5.98 Yotta kg). Hydrogen needs eight times its own mass of oxygen to make water. On earth the amount of oxygen available is far in excess of that needed to convert all the available hydrogen to water. Water on earth is thus limited by hydrogen rather than oxygen. Just 0.8% of the oxygen present on earth would be sufficient to convert all the available hydrogen (260 ppm of the earth’s mass) to water. If all the available hydrogen could be converted into water we could theoretically have 14×10^21 kg of water. We can account for about 8.4×10^21 kg, which is 60% of the theoretical maximum. Around 85% (as a crude estimate) lies in the deep mantle and 15% at or near the earth’s surface.
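The hydrogen arithmetic above can be checked directly. A sketch using the figures quoted here: H2O is 2 parts hydrogen to 16 parts oxygen by mass, so water weighs 9 times its hydrogen content.

```python
earth_mass = 5.98e24   # kg
h_fraction = 260e-6    # hydrogen as 260 ppm of the earth's mass

h_mass = earth_mass * h_fraction   # ~1.55e21 kg of hydrogen
max_water = h_mass * 9             # water is 9x the mass of its hydrogen
accounted = 8.4e21                 # water we can actually account for

print(f"theoretical maximum water: {max_water:.1e} kg")           # ~1.4e22
print(f"accounted-for fraction   : {accounted / max_water:.0%}")  # ~60%
```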

Water at or near the surface, in all its forms (including atmospheric water but excluding water in the mantle), is estimated to have a mass of 1.386×10^21 kg (1.386×10^9 cubic kilometers). However, about five times that amount (the estimated range is from 2 to 10 times) is possibly held in the mantle within molten rocks. Of the total surface water, about 97.5% is salt water and only 2.5% is fresh water (35×10^18 kg). The fresh water “storage capacity” is mainly in solid form (ice sheets, glaciers), which accounts for about 68.7% of the fresh water (24×10^18 kg). The remainder is mainly ground water (10.5×10^18 kg), with about 0.4×10^18 kg as fresh water on the earth’s surface. Around 69% of this surface fresh water is in the form of ice and permafrost and not readily accessible. Lakes and swamps and rivers hold some 100×10^15 kg. The water held in all living matter accounts for only about 1.1×10^15 kg.

Water reservoirs on earth

1 Peta kg (P kg) = 10^15 kg

The Water cycle

It is not just the size of the global reservoirs that matters. More important for human utilisation are the flows making up the water cycle. The dominant loop is the manner in which fresh water enters the atmosphere and then returns to earth. The saline water of the oceans covering some 70% of the earth’s surface is converted to fresh water mainly by evaporation but also by freezing (mainly near the poles). Only the evaporation is of significance for water entering the atmosphere. Ice does sublime and contribute to the water input to the atmosphere but this amount is minuscule compared to the evaporation from the ocean surfaces. Over the earth’s land mass, water enters the atmosphere by evaporation from the soil and open water surfaces. A very small amount is also transferred by transpiration from living things.

Human consumption hardly shows up in the big picture of the water cycle. Of the precipitation which falls on land 70% is re-evaporated. About 2% goes into ground water and thence to the ocean. The remaining 28% returns to the ocean as surface water run-off. The human part which is only about 0.01% of the precipitation on land is just a small side loop in the surface water run-off section and an insignificant loop in the ground water part of the cycle.
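A quick sketch of the land-precipitation split described above, using the rounded percentages quoted:

```python
precip_land = 108e15   # kg/year of precipitation over land

re_evaporated = 0.70 * precip_land   # returns directly to the atmosphere
to_ground     = 0.02 * precip_land   # recharges ground water
runoff        = 0.28 * precip_land   # surface run-off back to the ocean
human_use     = 10e12                # total annual human utilisation

print(f"surface run-off: {runoff:.2e} kg/year")                     # ~3.0e16
print(f"human use as share of run-off: {human_use / runoff:.3%}")   # ~0.033%
```

Even measured against the run-off alone – the smallest of the three streams that human activity actually taps – consumption is a small side loop.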

Consumption and scarcity

The water footprint of humanity

The WF of the global average consumer was 1,385 m^3/y. The average consumer in the United States has a WF of 2,842 m^3/y, whereas the average citizens in China and India have WFs of 1,071 and 1,089 m^3/y, respectively. Consumption of cereal products gives the largest contribution to the WF of the average consumer (27%), followed by meat (22%) and milk products (7%).

Human “consumption” today, with a population of 7.1 billion, comprises about 9.2×10^12 kg/year for agriculture and domestic animals, 0.45×10^12 kg/year for industrial use and about 0.35×10^12 kg/year for domestic use (10×10^12 kg/year in total). Around 1.5 billion humans are subject to scarcities. If we added an additional 30% of utilisation (to give a total of 13×10^12 kg/year), at the right times and in the right places, it would suffice to eliminate all scarcities. Further, suppose that global population increases to around 10 billion by 2100. To avoid scarcities, utilisation would then need to be ramped up to 18.5×10^12 kg/year. Precipitation over land would still be about 6,000 times greater than this need. The run-off through lakes and rivers into the oceans would represent about 2,000 times the desired human consumption. Even with the increased utilisation, the global capacity of fresh water reservoirs on the surface (6,000 years) and of ground water (600,000 years) would pose no limitation on human utilisation.
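The 2100 projection above can be reproduced in the same way (a sketch; the computed ratios come out slightly below the rounded values quoted, but the conclusion is unchanged):

```python
future_use   = 18.5e12   # kg/year needed for ~10 billion people, scarcity-free
precip_land  = 108e15    # annual precipitation over land
runoff       = 0.28 * precip_land   # surface run-off back to the ocean
ground_water = 10e18     # global fresh ground water
surface      = 100e15    # lakes, swamps and rivers

print(f"precipitation / future use : {precip_land / future_use:,.0f}x")        # ~5,800
print(f"run-off / future use       : {runoff / future_use:,.0f}x")             # ~1,600
print(f"ground water reserve       : {ground_water / future_use:,.0f} years")  # ~540,000
print(f"surface reserve            : {surface / future_use:,.0f} years")       # ~5,400
```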


There is no shortage of water. Fresh water scarcities are not a problem of global or temporal availability but essentially of local acquisition, storage and distribution. The solution lies locally – not in grandiose global campaigns. The solutions are relatively simple – but not necessarily inexpensive – and require intelligent planning and implementation of local acquisition, distribution and storage.




MH370: Too mysterious a disappearance to be accidental

October 3, 2017

I wrote quite a few posts about the MH370 vanishing when it happened almost three and a half years ago.

Australia has now ended its search, and Australian investigators have delivered their final report into missing Malaysia Airlines flight 370, saying it is “almost inconceivable” the aircraft has not been found.


“It is almost inconceivable and certainly societally unacceptable in the modern aviation era with 10 million passengers boarding commercial aircraft every day, for a large commercial aircraft to be missing and for the world not to know with certainty what became of the aircraft and those on board,” the Australian Transport Safety Bureau said on Tuesday.

“Despite the extraordinary efforts of hundreds of people involved in the search from around the world, the aircraft has not been located.”

Their final report reiterated estimates from December and April that the Boeing 777 was most likely located 25,000 sq km (9,700 sq miles) to the north of the earlier search zone in the southern Indian Ocean.

Some people somewhere know what happened, but they are not telling. The aircraft and its 239 passengers and crew simply vanished in March 2014.

Little has changed since I wrote this:

MH370: One year on and those who know still aren’t telling

March 8, 2015

Some few do know what happened to MH370 a year ago.

My post from April 13th last year, speculating that this was a state-sponsored and highly successful hijacking, is just as valid or invalid as it was then. There has been much speculation since, but no new, certain evidence has appeared. In fact even the “handshake” tracking which places the plane in the Southern Indian Ocean turns out to be fairly speculative in itself.

Whatever happened to MH370 was no accident. In one year there has been no evidence to alter my belief that this was the most successful hijacking and “disappearing” of a commercial airline and its 239 passengers and crew. And the objective – which was clearly achieved – was to prevent some passengers or cargo or both from reaching Beijing.

MH370: Emirates CEO suggests plane’s flight was controlled, October 11, 2014

MH370: Further indications of a deliberate event to prevent technology reaching Beijing, June 22, 2014

MH370: Very short preliminary report issued – could have been “laundered”, May 2, 2014

MH370: The most successful, state-sponsored hijacking ever?, April 13, 2014

MH370: The altitude excursion which could have rendered most unconscious, April 1, 2014

A deliberate excursion?

EngineAlliance GP7200 engine fails on Air France Airbus380

October 1, 2017

This time it wasn’t a Rolls Royce engine. It was an Engine Alliance (GE/Pratt & Whitney) GP7200 on an Air France Airbus A380-861 (registration F-HPJE).

Fortunately this was one engine on a 4-engine plane. Even two-engine planes (the B777 for example) are required to be capable of continued flight and a safe landing with one engine out, but losing one engine in four is a lot less chancy than losing one in two.


An Air France Airbus A380, operating flight AF66 from Paris-Charles de Gaulle Airport, France, to Los Angeles International Airport, California, USA, diverted to Goose Bay, Canada following an en route engine malfunction (Engine Alliance GP7200).
The aircraft lost the no. 4 engine inlet cowling. A decision was made to divert to Goose Bay, where a safe landing was carried out on runway 26 at 15:42 UTC.
After landing, ARFF reported damage to the leading edge of the wing above the no. 4 engine. Additionally, the no. 4 engine cowling had disintegrated.
Photos from the incident seem to show that the entire fan is missing from the engine.

EngineAlliance is an American aircraft engine manufacturer based in East Hartford, Connecticut. The company is a 50/50 joint venture between GE Aviation, a subsidiary of General Electric, and Pratt & Whitney, a subsidiary of United Technologies.

The GP7200 is a twin spool axial flow turbofan that delivers 70,000 pounds of thrust for the Airbus A380. The GP7200 is derived from two of the most successful wide body engine programs in aviation history – the PW4000 and GE90 families. The engine benefits from each program’s latest proven technologies and incorporates lessons learned from more than 25 million flight hours of safe operation on both engines. The GP7200 entered service in 2008 with the world’s largest A380 fleet, Emirates. The first GP7200-powered A380 was delivered to Air France in 2009.

No injuries, no fatalities.

The GP7200 competes with the Rolls Royce Trent 900 for the A380. It has always struck me that, with only two suppliers available, this is a strange competition in which neither can afford to have the other fail. In fact the plane manufacturers and the airlines both need both suppliers to survive. Markets don’t like monopoly situations. History shows that monopoly situations can lead to the death of a product or even a technology. It may take time, but alternative technologies do appear.

As I wrote some 7 years ago:

Trent 900 vs. GP7200: Competitive pressures getting too hot? 

…… It follows for the airlines and the airplane manufacturers that the market (in this case the number of A 380s) must be split between the two suppliers such that:

  1. neither supplier gains a dominant market position such that it can dictate the engine price,
  2. each supplier has a large enough market share and sufficient earnings such that their continuation in the market is not jeopardised (for the sake of spares, service, development of new engines and, above all, to avoid a monopoly situation arising by the exit of one supplier).

If either engine supplier has an uncompetitive product – whether on price or on performance – a monopoly becomes inevitable and immediately jeopardises the continuation of the market itself. So if only one engine supplier were available, the A 380 itself would become non-viable.


The unknowable is neither true nor false

September 22, 2017

Some things are unknowable (Known, unknown and unknowable).

In epistemology (study of knowledge and justified belief), unknowable is to be distinguished from not known.

Fitch’s Paradox of Knowability (Stanford Encyclopedia of Philosophy)

The paradox of knowability is a logical result suggesting that, necessarily, if all truths are knowable in principle then all truths are in fact known. The contrapositive of the result says, necessarily, if in fact there is an unknown truth, then there is a truth that couldn’t possibly be known. More specifically, if p is a truth that is never known then it is unknowable that p is a truth that is never known.

Some believe that everything in the universe can be known through the process of science – that everything not known today will be known eventually. I do not quite agree. Some things are, I think, unfathomable, unthinkable – things which one cannot know. My reasoning is quite simple. The capacity of our brains is limited; that seems undisputed. It is because of our cognitive limitations that we have found it necessary, in our invented languages, to invent the concept of infinity – for things that are beyond the observable. If brain capacity were unlimited there would be no need for the concept of infinity or for words such as “incomprehensible”. Even in mathematics, which, after all, is another language for describing the universe, we have the concept of infinity: infinitely large and infinitely small. We can find finite limits for some convergent infinite series, but we can never get to infinity by performing finitely many operations on finite numbers. We cannot fill an infinite volume with a finite bucket or measure an infinitely long line with a finite ruler.
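The point about convergent series can be made concrete. A small illustration using exact rational arithmetic: the geometric series 1 + 1/2 + 1/4 + … has the finite limit 2, but no finite number of additions ever reaches it.

```python
from fractions import Fraction

# Partial sums of 1 + 1/2 + 1/4 + ... approach the finite limit 2,
# but every finite partial sum falls short of it.
partial = Fraction(0)
for n in range(50):
    partial += Fraction(1, 2**n)

print(partial)       # 2 - 1/2**49, still strictly less than 2
print(partial < 2)   # True
```

Exact fractions are used deliberately: floating-point rounding would eventually report the sum as exactly 2, hiding the very distinction between a limit and a finite result.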

To my layman’s understanding, Cantor’s Continuum Hypothesis (which still cannot be proved) and Gödel’s Incompleteness Theorems (together with the later independence results showing that the hypothesis can be neither proved nor disproved from the standard axioms) only confirm for me that:

  1. even our most fundamental axioms in philosophy and mathematics are assumed “true” but cannot be proven to be so, and
  2. there are areas of Kant’s “noumena” which are not capable of being known.

It is not just that we do not know what we do not know. We cannot know anything of what is unknowable. In the universe of the knowable, the unknowable lies in the black holes of the current universe and in the time before time began.

I find Kant’s descriptions persuasive

Noumenon, (plural Noumena), in the philosophy of Immanuel Kant, the thing-in-itself (das Ding an sich) as opposed to what Kant called the phenomenon—the thing as it appears to an observer. Though the noumenal holds the contents of the intelligible world, Kant claimed that man’s speculative reason can only know phenomena and can never penetrate to the noumenon.

All human logic and all human reasoning are based on the assumption that what is not true is false, and what is not false must be true. But why is it that there cannot be a state where something is neither true nor false? Why not both false and true at the same time? In fact, it is the logic inherent in our own language that prohibits this state. It is a limitation of language which we cannot avoid. But language is not discovered, it is invented. That “what is not true must be false” is just an assumption – a very basic and deep assumption but still just an assumption.

Following Fitch

in fact there are unknown truths, therefore there must be truths that couldn’t possibly be known.


“Truth” and “that which is not true is false” are assumptions – definitions built into our languages. Truth and Falsity are not necessarily mutually exclusive quantum states. They may instead form part of a continuum which is unknowable. Maybe a truth-time continuum?



UN and EU have forgotten that nationalism is the basis for internationalism

September 20, 2017

The part comes before the whole.

Without a definition of the number “one” there is no number theory. Without the establishment of a single cell there is no life. No bricks, no house. Strip an atom of its electrons and it is no longer that atom. Without weather there is no climate.

Multinational institutions are particularly prone to forgetting what their fundamental building blocks are. To be global one must first be local. To apply universally means first applying to each of the 7.5 billion on earth. Without a strong and healthy nationalism there is no internationalism.

The EU and the UN are excellent examples of how the “large” loses track of its roots. The EU, much more than the UN, tries to bully its smaller member countries, and that cannot be sustained.


Automation can mitigate a population decline

September 17, 2017

The cold hand of demographics cannot be denied. Global population will begin to decline soon after 2100. The real question is whether an implosion can be avoided, economic decline held at bay and an irreversible death spiral prevented.

For over 100 years it has been the threat of an unsustainable population explosion which has exercised the minds of governments and policy makers. There are still many people who have spent their lives confronting this problem and cannot adjust to the new reality, which is that fertility rates are declining across the world. There is nowhere in the world where the rate is not declining, though there are still many countries where the rate is greater than the 2.1 children per woman needed just to hold the population constant.

(From the World Population Prospects 2017)

Soon after 2100 population will begin to decline everywhere across the world. In many countries population decline is already well established. Population decline is compounded by  increasing longevity and a decline in the ratio of working population to aged population. Declining (and ageing) populations threaten the start of a downward death spiral of economic decline.

John Maynard Keynes warned of this in a 1937 lecture, “Some Economic Consequences of a Declining Population”:

“In the final summing up, therefore, I do not depart from the old Malthusian conclusion. I only wish to warn you that the chaining up of the one devil may, if we are careless, only serve to loose another still fiercer and more intractable.”

The point is that this decline is inevitable. Demographic trends cannot be denied over a matter of 2 or 3 generations. Hopefully the decline will be slow and allow time for corrective actions – provided however that irreversible economic decline does not set in.

Mitigation by automation

The critical parameter will be whether total GDP can be maintained at a level to allow the per capita GDP to be increased or, at least, maintained in spite of a declining labour force.

For the past 500 years (perhaps more), economic activity has been consumer driven, with a surplus of labour always available. Labour and its ready availability was in itself also the capital to be employed. Growth has been achieved by the increase of production exceeding the growth of population. Agricultural production before the industrial age was primarily a function of the labour available. With the advent of the industrial age, the link between labour force and production remained strong, but industrialisation allowed an enormous increase in productivity. It is the introduction of industrialisation into agricultural production which has allowed the rapidly increasing population to be fed. However, the last 6 or 7 decades have seen the industrial age morph into the age of automation. Automation is now gathering pace, and growth is no longer as dependent upon the availability of labour as it once was.

With population declining it is likely that GDP will – in the long term – also decline. Production cannot happen without consumption. However, it is not necessary that the decline in total GDP exceed the decline in population. In the short term there may well be an increase in per capita consumption (and in per capita production). The question becomes how to maintain production with a decreasing work force. But this is a question that all commercial enterprises already face. In the last 60 – 70 years, reducing the work force while increasing automation has become a standard method of reducing cost and increasing productivity.

Within two decades I expect that driver-less vehicles, pilot-less aircraft and army-less wars will be commonplace. Robot diagnosticians, AI-assisted surgeons and teller-less banks are already here. Idiot-less politicians would be nice. The bottom line is that the paradigm that more employees means more production is broken.

Automation has already progressed to the level where the mere availability of a young work force is no longer a guarantee that production will (or can be made to) increase. The unemployment level of the less-educated youth of the world is testimony to that. Clearly, if automation eliminates the need for human labour in all routine, repetitive tasks, then it also becomes necessary to occupy the young with years of further education. Together with a population living ever longer, the dependency ratio (the ratio of non-working to working population) will increase, and increase sharply. For a government this can be a nightmare: revenue generation comes from the working population, while large chunks of revenue consumption are for the education of the young and the care of the elderly. But that is also because tax revenues are so strongly dependent upon taxing the labour force. If the dependence of production upon the labour force is weakened (as it must be with increasing automation), and since production must eventually match consumption, then the entire taxation system must also tilt towards taxing consumption and away from taxing human labour for its efforts. (Taxing production is effectively also a tax on consumption, because production which is not for consumption is not sustainable.) Increasing automation also breaks the taxation paradigm.

It seems to me, therefore, that a population decline is not something to be afraid of. It is imperative that the decline not be allowed to become an implosion, but a slow decline starting soon after 2100 can be managed. Over the next 100 years it is going to need:

  1. immigration to mitigate fertility declines,
  2. increasing automation across all manufacturing,
  3. increasing use of AI across all fields,
  4. increasing education (perhaps mandatory to the age of 25),
  5. increasing opportunities for entrepreneurs (including aged entrepreneurs), and
  6. a shift away from income and labour related taxes and towards consumption taxes.

I can see population developing to “hunt” for a stable, sustainable level at perhaps around 9 – 10 billion, with increases coming when new worlds are opened up for colonisation. Pure speculation of course, but as valid as anything else.

Population decline is not something to be afraid of. The next 100 years will be fascinating.


Fewer children under five dying than ever before

September 16, 2017

The 2016 GBD study is out at The Lancet.

The Global Burden of Disease Study (GBD) is the most comprehensive worldwide observational epidemiological study to date. It describes mortality and morbidity from major diseases, injuries and risk factors to health at global, national and regional levels. Examining trends from 1990 to the present and making comparisons across populations enables understanding of the changing health challenges facing people across the world in the 21st century.

One section deals with mortality, and of major significance are the deaths of children (< 5 years). Around 4.6% – 340 million – of the world’s population of 7.5 billion is under 5 (UN World Population Prospects 2017). Death is becoming the preserve of the old – as it should be. In the last 50 years, death has shifted decisively away from children and towards the aged.


About 5 million recorded deaths in 2016 were of children under 5, down from about 16 million in 1970. In that time the total number of children < 5 has doubled, so the death rate of children < 5 has fallen to about one-sixth of its 1970 level. Almost two-thirds of all deaths are now at ages above 50 years. For the first time there are more deaths at ages above 75 than between 50 and 74, and that share, it would seem, is poised to increase sharply.
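The “one-sixth” figure follows directly from the numbers quoted above. A quick check, assuming (as implied by the doubling) an under-5 population of roughly 170 million in 1970 against roughly 340 million today:

```python
# Figures quoted above: ~16 million under-5 deaths in 1970, ~5 million in 2016.
# The 1970 under-5 population (~170 million) is inferred from the statement
# that the number of children under 5 has doubled since then.
deaths_1970, pop_1970 = 16e6, 170e6
deaths_2016, pop_2016 = 5e6, 340e6

rate_1970 = deaths_1970 / pop_1970   # ~9.4% of under-5s dying per year's cohort
rate_2016 = deaths_2016 / pop_2016   # ~1.5%

print(round(rate_1970 / rate_2016, 1))  # 6.4 -> roughly one-sixth
```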

About 40% of the world’s population (60% in Africa) is now under 25 years old. The vast majority of them will live to see 75.


Excellence is about improving the best – not about mitigating the worst

September 14, 2017

“Excellence” is always about performance. It also always implies a measurement – not necessarily quantitative – of performance against a “mean” or a “standard” value for that performance. It is not merely about “doing your best”; it is about surpassing existing standards. I hesitate to call this a definition of excellence, but it is a view of excellence. Continuous improvement is built into this view of “excellence”, since every time an “average” performance is exceeded, the “average” must shift. Searching for excellence thus requires continuously improving performance, whether for an individual, a company or a society. “Quality” means having some attribute to a higher (improved) level than some standard. “Excellence” is thus closely linked to “quality”. A search for excellence often – but not always – implies a search for improving quality. A value judgement of what is a “better” performance or a “higher” quality is inherent when considering excellence.

(image: Aberdeen Performing Arts)

What is often forgotten is that searching for excellence is all about improving the best, not about eliminating or mitigating the worst. This often becomes a political or ideological matter, where resources are spent at the bottom end of the performance scale. That actually becomes a search for the lowest common level and not a search for excellence. It is not possible to search for excellence and simultaneously denounce the elite. Excellence requires an elite.

Evolution by natural selection is not primarily about excellence. The only “performance” factor involved is that of maximising the survivability of an individual’s gene-set. Excellence in any other performance parameter or attribute is accidental. Natural selection, then, is effectively silent about excellence, but it is not necessarily a bar to excellence. Artificial selection – on the other hand – is all about excellence in some particular attribute or performance parameter (breeding dogs for strength or speed or intelligence or some other genetic factor, for example).

To search for excellence, whether as an individual or as an organisation, requires all three of motivation, opportunity and capability. The search fails if any one is missing. It starts with motivation – the desire to act. It can be entirely internal to an individual or it can be due to external events or forces. Without motivation, opportunities are invariably missed and capabilities left wastefully unused. Opportunities, however, are not just random events. They may occur by accident, but they can – sometimes – be created and then they can even be designed. Ultimately, performance improves and attributes are enhanced by actions. And actions are always constrained by capabilities. The best possible performance is always constrained to be the best performance possible.

The most common, universal barrier is that motivation is lacking. Some performance parameter or attribute is not given sufficient value. Value may be accorded by peers or generated internally by the performer. Without value being accorded, any motivation to search for excellence in that attribute or performance withers. As a corollary, if poor performance is not a disadvantage, then deterioration is not discouraged either (unless, perhaps, some minimum threshold value is reached).

Schools must consider the excellence of individual performance, that of all students as a group, and that of learning as a concept. There can be perceived conflicts of interest here. In most schools more resources are spent on the weakest group to bring their performance up towards the average. Being close to the average then becomes good enough. The weak students are dragged up towards the average, and the strong students – if not self-motivated – drift down towards it. Schools often miss the simple arithmetical fact that improving the performance of the best students provides a far greater improvement for the individuals, for the group and for learning in general. Often they are hampered by ideological constraints.

In large groups of individuals, whether in commercial enterprises or bureaucracies or health care or sports clubs, excellence still depends upon motivation, opportunity and capability. Clearly, if the target for which excellence is sought is not clear, then no excellence is achieved. It is much easier for a commercial enterprise to define the performance parameters or attributes in which excellence is to be sought. Commercial enterprises also have the greatest freedom of action in providing motivation, creating opportunities or acquiring capabilities. Bureaucracies are often process keepers, for whom excellence becomes a very diffuse concept to define. It is difficult to even conceive of excellence when the only parameters which count are a minimum level of service at the lowest cost.

Excellence is about improving the best – not about mitigating the worst.

