Flavouring the seasoning gave us the oldest profession

November 20, 2020

Once upon a time, a designated chef at an ancient hominin hearth demanded compensation for his culinary art and started the oldest profession. Cooking predates the oldest cave paintings and may well be the oldest human art form.

Preserving is unambiguous, but salting is a word that is rarely used anymore. The distinction in language between seasoning and flavouring is not so much ambiguous as it is wishful thinking. Theoretically, seasoning is considered the use of additives which allegedly enhance existing flavours, whereas flavouring adds different flavours. In practice this is a nonsense distinction. We have our five or possibly seven basic taste receptors (sweet, sour, bitter, salty, umami and maybe pungency and a fatty richness) and our olfactory receptors which can distinguish a myriad of smells.

Five basic tastes – sweet, sour, bitter, salty and umami (savory) – are universally recognized, although some cultures also include pungency and oleogustus (“fattiness”). The number of food smells is unbounded; a food’s flavor, therefore, can be easily altered by changing its smell while keeping its taste similar.

Any particular flavour we perceive in our brains is due to a particular combination of activated taste and smell receptors. When enough of the activated taste or smell receptors change, our brains recognize a change in flavour. Seasoning generally involves salt (always) and sometimes some pepper and something acidic (lime, vinegar, ….). Flavouring is considered to be predominantly through the use of herbs and spices. However, the difference between seasoned and unseasoned is a difference of perceived flavour in our brains. No self-respecting chef will ever admit that seasoning is merely a sub-set of flavouring, but even chefs must be allowed their self-aggrandizement. It is entirely false that proper seasoning cannot be tasted. A lack of salt is perceived when there is a lack of an expected activation of salt receptors. Adding salt always changes the combination of activated receptors and is always a change of flavour. Cookbooks generally perpetuate the misconceptions.

Canadian Baker 

Many ingredients are used to enhance the taste of foods. These ingredients can be used to provide both seasoning and flavouring.

  • Seasoning means to bring out or intensify the natural flavour of the food without changing it. Seasonings are usually added near the end of the cooking period. The most common seasonings are salt, pepper, and acids (such as lemon juice). When seasonings are used properly, they cannot be tasted; their job is to heighten the flavours of the original ingredients.
  • Flavouring refers to something that changes or modifies the original flavour of the food. Flavouring can be used to contrast a taste such as adding liqueur to a dessert where both the added flavour and the original flavour are perceptible. Or flavourings can be used to create a unique flavour in which it is difficult to discern what the separate flavourings are. 

Seasoning is always about changing perceived flavour and is a particular sub-set of flavouring. The story that seasoning originates with food preservation through the use of salt, whereas the use of herbs and spices for flavouring derives from hunter-gatherers wrapping food in aromatic leaves for transport, is plausible but little more than speculation. Salt is inorganic and is not considered a spice, but it is the major ingredient for seasoning as opposed to flavouring. Herbs and spices are always organic and plant-based. (The proposed use of crushed insects as flavouring can safely be ignored. The use of cochineal insects – E120 – to give a carmine food colouring is not relevant.) Yet the manner in which we use small quantities of salt with foods is much too similar to the manner in which we use small quantities of herbs and spices not to have been the role model and the precursor for the culinary use of herbs and spices.

Though this history is as presented by a purveyor of spices, it is both informative and credible.

History of Spices 

Abundant anecdotal information documents the historical use of herbs and spices for their health benefits. Early documentation suggests that hunters and gatherers wrapped meat in the leaves of bushes, accidentally discovering that this process enhanced the taste of the meat, as did certain nuts, seeds, berries, and bark. Over the years, spices and herbs were used for medicinal purposes. Spices and herbs were also used as a way to mask unpleasant tastes and odors of food, and later, to keep food fresh. Ancient civilizations did not distinguish between those spices and herbs used for flavoring from those used for medicinal purposes. When leaves, seeds, roots, or gums had a pleasant taste or agreeable odor, it became in demand and gradually became a norm for that culture as a condiment.

Our taste receptors did not evolve for the purposes of culinary pleasure. Bitterness detection is clearly a defense mechanism. Most animals reject bitter foods as a defense against toxins and poisons. All animals need salt. Mammal brains are designed to prevent a debilitating lack of sodium and have evolved the detection of saltiness as a tool. A craving for salty food has been shown to emerge spontaneously (and not as learned behaviour) with sodium deficiency. This has been shown to apply to many animals, including sheep, elephants, moose, and primates, which seek out salty food when suffering sodium deficiency. It is very likely that the capability to detect sweetness has also evolved as a way of urgently seeking energy-rich foods. Exactly how or why it became important to detect sourness or umaminess is in the realm of speculation. Vegetarian food contains less salt than meat or fish. Our primate ancestors were mainly vegetarian and, like primates today, would have resorted to eating pith and rotting wood to counter sodium deficiencies.

Hunger for salt

When multicellular organisms evolved and crawled up the beaches to dry land, they had to take the seawater with them in the blood and other body fluids. The mineral content of human blood plasma today is still much like that of the seas of the Precambrian era in which life arose. …..  And the ancestors of man for at least 25 million of the last 30 million years were almost certainly vegetarians, and therefore got little salt in their diets because most plants store little salt. To compensate for the scarcity of a substance vital to life, the brains of our ancestors and those of other mammals developed powerful strategies for getting and keeping salt. Inborn, Not Learned.

….. sudden improvement after one copious salt meal may also help explain the ritual acts of cannibalism once practiced by tribes in the Amazon jungles, the highland regions of New Guinea and elsewhere. Sometimes the body of a fallen foe was eaten in a final act of triumph and to absorb magically the strength of the defeated enemy. In other cultures, bones or other parts of a departed relative were eaten as a final act of devotion and also to gain back the strength of the dead person.

There are those who suggest that the human use of salt as seasoning (as opposed to for preservation) only took off in the Neolithic, after the advent of agriculture, when our diet became more vegetarian. I don’t find this theory entirely plausible. Before hominins and bipedalism (c. 4 million years ago) our ancestors were primarily vegetarian. Meat eating became more prevalent once bipedalism led to a more actively predatory lifestyle as hunter-gatherers. With more meat, the diet now included larger amounts of salt, and the detection of saltiness was needed less for survival and could be diverted to culinary aesthetics. The control of fire appears around 2 million years ago and coincides roughly with a shift to eating cooked meats and the rapid (in evolutionary terms) increase of hominin brain size. I can well imagine a hominin individual – perhaps even a Neanderthal – designated as the chef for the day and being berated for lack of seasoning with the grilled mammoth steak.

In my story, the use of salt with cooked food as seasoning and to enhance flavour must go back – perhaps a million years – to our hunter-gatherer forebears who had shifted to a meat-rich diet. It is thus my contention that it is this shift to cooked meat which released our flavour receptors from survival tasks and enabled them to be diverted to culinary aesthetics. Even the use of herbs and spices comes well before the Neolithic agricultural revolution (around 12,000 years ago). Herbs and spices, being organic, do not survive long and are very rare in the archaeological record. However, pots from about 25,000 years ago containing residues of cumin and coriander have been found. The theory that hunter-gatherers packaged meats for travel in large leaves and added – by trial and error – other plant-based preservatives or flavourings is not implausible. The medicinal use of herbs and spices must also have been discovered around this time. In any event, even the first use of herbs and spices purely for flavouring must go back at least 50,000 years. Though diet must have included more vegetarian food after the advent of agriculture, the culinary arts of seasoning and flavouring had already been well established before the Neolithic. By the time we come to the ancient civilizations of 7,000 – 8,000 years ago, more than 100 herbs and spices were known and regularly used.

Whether first for food-wrapping or for medicinal use or for use as preservatives, the use of salt and herbs and spices entirely and specifically to make food taste better marks the beginning of the culinary art. No doubt there were many cases of trial and error, of accidents and failures. The failed attempts did not make it into the stories of spices, though some are now probably included in the history of poisons. There is a case to be made for the culinary profession to be considered the oldest in the world.

(Image credit: University of Minnesota)

Why humans chose the 7-day week

November 17, 2020

It was another Sunday but, being retired and in these Corona-times, the days of the week are merging into each other and are difficult to tell apart. My thoughts turned, again, to when and how and why the seven-day week was invented. While the primary purpose of the “week” today is to define a recurring separation between days of rest and days of labour, the week is also used to organise many other recurring human activities. It occurs to me that the reasons for inventing an artificial, recurring period of a few days, shorter than a month, must be based on

  1. the occurrence of periodic and repetitive human activity within a society, and
  2. the need to organise and plan such activity

Both these requirements precede, I think, the choice of 5 or 10 or 6 or 7 days as the length of the period. The need to have a period shorter than a month must come before the choice of the length of the period. The most important function of the period is now to identify periodic days of “rest” from days of labour. It seems that even in prehistory, in predominantly agricultural communities, this separation of days of rest from days of labour was important. In the past, rest-days were often also days of regular and organised worship. Social traditions built up around these periods (meeting family and friends and congregations of society members). Working practices during industrialisation adapted to weekly cycles. Organised sport today depends existentially upon the regular, repeating days of “leisure”.

If the length of this period, as a sub-period of the lunar cycle, were to be chosen today we would be faced with the same limited choices that humans faced perhaps 12,000 years ago. The practical choice lies only between 2, 3, 4, 5 or 6 sub-periods of the month, giving respectively weeks of 14/15, 10, 7, 6 or 5 days. (The Romans, for a while, used an 8-day week, but such a week is out of step with months, seasons and years. An 8-day week has nothing to recommend it.) The days within each period needed to be identified separately so that tasks could be allocated to specific days. It is probably just the difficulty of remembering 14 or 15 specific weekdays which eliminated the choice of half-monthly weeks. It would have been entirely logical to choose 10 days in a week (and it would have had the added advantage of very easily naming the days after numbers). A 5-day week would also have had a natural logic. In fact, 5-day and 10-day weeks have been attempted at various times but have not caught on. For some reason(s) the 7-day week has been the most resilient and now its global domination is unchallenged. But the compelling reasons for choosing seven-day periods are lost in the mists of history.
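For what it is worth, the arithmetic behind these limited choices is easy to verify. Here is a minimal sketch (in Python, purely for illustration, using the synodic month length of 29.5305882 days quoted later in this post) showing how the candidate week lengths fall out of dividing the lunar month into equal sub-periods:

```python
# Splitting the synodic month (~29.53 days) into 2 to 6 equal sub-periods
# gives "weeks" of roughly 14/15, 10, 7, 6 and 5 days respectively.
SYNODIC_MONTH = 29.5305882  # days

for parts in range(2, 7):
    print(f"{parts} sub-periods -> {SYNODIC_MONTH / parts:.2f} days per week")

# 2 sub-periods -> 14.77 days per week
# 3 sub-periods -> 9.84 days per week
# 4 sub-periods -> 7.38 days per week
# 5 sub-periods -> 5.91 days per week
# 6 sub-periods -> 4.92 days per week
```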

A quick search revealed that I had written about this 7 years ago:

Another Sunday, another week — but why?

There are no discernible periodicities that we have been able to find outside ourselves which take 7 days. There are no periodicities within ourselves either that are 7 days or multiples of 7 days. There are no celestial or astronomical cycles in tune with 7 days. There are no movements of the sun or the moon or the stars that give rise to a 7-day period. There are no weather or climate phenomena that repeat with a 7-day period. There are no human behavioural patterns that dance to a 7-day tune. There are no living things that have a 7-day life cycle. (There is a branch of pseudoscience which claims that living cells may be associated with a weekly or a half-weekly cycle – a circaseptan or a circasemiseptan rhythm – but this is still in the realms of fantasy).

It would seem logical that our ancestors must have first noted the daily cycle long before they were even recognisable as human.  As humans they probably then noted the lunar cycle of about 29 days and the yearly cycle of about 365 days. Our distant ancestors would also have noted that the period of the yearly cycle was a little more than 12 lunar cycles. By about 35,000 years ago we have evidence that the lunar cycle was known and was being tracked. This evidence is in the form of a tally stick with 29 marks – the Lebombo bone.

The invention of the seven-day week can best be dated to at least 5,000 years ago, to the time of the Babylonians. It was certainly invented long before the Old Testament, which came to be written to fit with the 7-day week that had already been established. The story goes that

the seven-day week was actually invented by the Assyrians, or by Sargon I (King of Akkad at around 2350 B.C.), passed on to the Babylonians, who then passed it on to the Jews during their captivity in Babylon around 600 B.C. The ancient Romans used the eight-day week, but after the adoption of the Julian calendar in the time of Augustus, the seven-day week came into use in the Roman world. For a while, both the seven and eight day weeks coexisted in the Roman world, but by the time Constantine decided to Christianize the Roman world (around A.D. 321) the eight-day weekly cycle had fallen out of use in favor of the more popular seven-day week.

The idea that the 7-day week originates from a division of the lunar cycle into 4 seems improbable. The lunar cycle (synodic period) is 29.5305882 days long. Three weeks of 10 days each or five 6-day weeks would fit better. That the annual cycle of 365.2425 days comes to dominate is not so surprising. Our calendar months are now attuned to the annual cycle and have no direct connection to the lunar cycle. But it is our 7-day weeks which remain fixed. We adjust the length of our months and have exactly 365 days for each of our normal years. We then add an extra day every 4 years but omit 3 such extra days in every 400 years to cover the error. We make our adjustments by adding a day to the month of February in the identified leap years, but we do not mess with the 7 days of the week.
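As an aside, the leap-year bookkeeping described above is easy to check with a few lines of arithmetic; a minimal sketch (again in Python, purely illustrative):

```python
# The rule described above: 365 days per normal year, plus one leap day
# every 4 years, minus 3 such leap days every 400 years.
mean_year = 365 + 1/4 - 3/400
print(mean_year)              # -> 365.2425 days, the value quoted above

# Over a full 400-year cycle that is a whole number of days ...
days_in_400_years = 400 * 365 + 100 - 3
print(days_in_400_years)      # -> 146097

# ... and 146097 is exactly divisible by 7, so the calendar's weekday
# pattern repeats every 400 years while the 7-day week itself is never adjusted.
print(days_in_400_years % 7)  # -> 0
```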

It is far more likely that the 7 days come from the seven celestial objects visible to the naked eye from earth and probably known to man some 5,000 to 10,000 years ago. People were familiar with the Sun, the Moon, Mars, Mercury, Jupiter, Venus, and Saturn by then. Naturally each was a god in his own heaven and had to have a day dedicated just to him/her/it. The same 7 celestial objects are used for the days of the week not only in the Greek/Roman Western tradition, but also in Indian astrology. The Chinese/East Asian tradition uses the Sun, Moon, Fire, Water, Wood, Gold and Earth to name the seven days of the week. But this must have come after the 7-day week had already been established elsewhere. (For example, to name up to 10 days they could just have chosen to add days named for the Air, Beasts, Birds ….) Some languages use a numbering system and some use a mixture of all of the above. Rationalists and philosophers and dreamers have tried to shift to 5-, 6-, 8- and 10-day weeks, but none of these efforts has managed to challenge the practicality or dislodge the dominance of the seven-day week.

And now the whole world lives and marches – socially, culturally, politically – to the inexorable beat of the 7-day week.

The seven-day week must have started earlier than 5,000 years ago. We must distinguish, I think, between the need for first having such a period and then the selection of the number of days in such a period. The invention of names must have come after the selection of the number of days. The 7-day period must already have been in use before the Sumerians and the Babylonians got around to naming the days.

The need for such a period must have come in the Neolithic (c. 12,000 years ago) and after the advent of settlements with substantial populations (cities). Human hunter-gatherers (and even their forebears) would have followed the annual cycles and the seasons and would have been subject to the vagaries of weather. The availability of moonlight and the lunar cycle would have been important, well observed, and well known by 50,000 years ago. But hunter-gatherers, with their semi-nomadic lifestyle, lived in small groups of perhaps 30 or 40 and conceivably up to a hundred people. There would have been no great need for such groups to invent a sub-period of a season or a lunar cycle. The numbers would not have been large enough to warrant the invention of a “week” to help organise repetitive tasks.

The Neolithic brought population density and specialisation. Carpenters and masons and spinners and weavers performed their specialities for many different projects simultaneously. The need to combine different specialist functions towards a goal was the new model of cooperation. Houses had to be built using a variety of specialists. Their labour needed to be planned and coordinated. I can imagine that the need to be able to plan work from different sources for the same day became critical. To be able to tell everyone to do something on a Thursday needed the Thursday to be invented. 

It seems obvious that increasing population density and specialisation generates the need for defining a “week”. But that does not answer the question: why 7 days? I can only speculate that human physiology comes into play. Physiology and nutrition of the time must have determined that labouring for 9 days in 10 was too much for the human frame and that resting one day in 5 was considered too idle. (Of course, nowadays, with 2 rest days in 7, there is far more leisure time than with 1 in 5.) I speculate also that the choice of an odd number of days (7) rather than six comes from a need to define a mid-week day. I suspect the priests of that time had to have their say, and therefore the day of rest was hijacked for worship and the support of their temples. They probably could not overrule the economic necessities of the time and take over any of the other six days of labour. They still had a go, though, by naming some of the other days after their gods.

Perhaps the choice of 7 days was the first example of implementation of workers’ demands.


Administrative trivia

November 16, 2020

I had some difficulty finding posts I had written many years ago and realised that I had no search bar in the blog. A search bar has been added.

I had not realised that I could define an icon for the blog, which I have now done.

 


Happy Diwali

November 13, 2020

14th November 2020


Histories are always about justifying something in the present

November 8, 2020

Hardly a day goes by when I do not consider the origin of something. On some days I may ponder the origin of hundreds of things. It could be just curiosity, or it could be to justify some current action or to decide upon some future action. Sometimes it is the etymology of a word or the origins of an idea. It could be the story of what happened yesterday, or something about my father, or a thought about the origins of time. I know that when I seek the history of a place or a thing or a person, what I get is just a story. In every case the story inevitably carries the biases of the story-teller. However, most stories are constrained by “evidence”, though the point of the story may well lie in the narrative (inevitably biased) connecting the points of evidence.

The same evidence can generate as many stories as there are story-tellers. Often the narrative between sparse evidence forms the bulk of the story. When the past is being called upon to justify current or future actions, histories are invented and reinvented by playing with the narrative which lies between the evidence. Of course, the narrative cannot contain what is contradicted by the evidence. Human memory is always perception, and perception itself is imperfect and varies. I know that the story I tell of some event in my own history changes with time. Thus the “history” I tell of all that lies between the “recorded facts” of my own existence is a variable and is a function of the “now”. It is experience and knowledge of the world and the people around us which provides the credibility for the stories which lie between the evidence. But bias plays its role here as well. A desired story-line is always more credible than one which is not desired.

A historian looks for the events which, incontrovertibly, took place. The further back events lie in the past, the less evidence survives. But histories are never merely a tabulation of events with evidence (though even what constitutes incontrovertible evidence is not without controversy). The more evidence there is, the more constrained is the inter-connecting narrative. But historians make their reputations on the stories they tell. Their histories are always a combination of evidenced events and the narrative connecting them. Historian bias is inevitable.

I have been writing a story – hardly a history – about my father’s early life and through the Second World War. For a period covering some 20 years I have documented evidence for about 30 separate events – dates when certain events occurred. The date he graduated, the date he joined up, the date he was promoted or the date he arrived somewhere. The documented events are, like all documented events, just events. If not this set of events then it would have been some other similar set of events. If not these particular dates then some other set of dates. The events are always silent about what his mood was or what he had for breakfast on the day of the event. They provide a fixed frame, but the overwhelming bulk of my story is speculation about why and how he went from one event to the next. The story fits my understanding of how he was much later in his life. My story about his motivations and his behaviour is entirely speculation but always fits my central story-line. The documented events are just the bones on which to hang the flesh of my story. My story is not determined by the events. It is determined elsewhere but has to conform to the events.

Go back a little under 1,000 years and consider Genghis Khan. We have documentary evidence about the date he died but even his date of birth is speculation. Current histories vary according to whether the historian desires to describe a hero or a villain. Both can be hung upon the framework provided by the few documented events available. Go back another 1,000 years and even with the large (relatively) amount of evidence available about the Roman Empire, the range of speculation possible can justify the politics of any contemporary viewpoint.

And so it is with all histories. We claim that histories help us to understand the past and that this, in turn, helps us to choose our future actions. I am not so sure. The power of a history lies in the credibility of the narrative connecting the certain events. As with the story about my father, a history is not a narrative determined by the events. The narrative is determined by other imperatives but must conform to the events. I begin to think that we write (and rewrite) our histories, always in the present, and with our present understandings, to justify where we are or the choices we want to make. They are always a justification of something in the present.


“Random” is indistinguishable from Divine

November 2, 2020

“Why is there something rather than nothing?” is considered by some to be the most fundamental question in metaphysics, and by others to be an invalid question. The Big Bang, quantum mechanics, time, consciousness, and God are all attempts to answer this question. They all invoke randomness or chance or probabilistic universes to escape the First Cause Problem. Physics and mathematics cannot address the question. An implied God of Randomness is the cop-out for all atheists.

Stanford Encyclopedia of Philosophy

The Commonplace Thesis, and the close connection between randomness and chance it proposes, appears also to be endorsed in the scientific literature, as in this example from a popular textbook on evolution (which also throws in the notion of unpredictability for good measure):

scientists use chance, or randomness, to mean that when physical causes can result in any of several outcomes, we cannot predict what the outcome will be in any particular case. (Futuyma 2005: 225)

Some philosophers are, no doubt, equally subject to this unthinking elision, but others connect chance and randomness deliberately. Suppes approvingly introduces

the view that the universe is essentially probabilistic in character, or, to put it in more colloquial language, that the world is full of random happenings. (Suppes 1984: 27)

The scientific method is forced to introduce randomness into stories about the origin of time and causality and the universe and life and everything. Often the invocation of randomness is used to avoid any questions of Divine Origins. But random and chance and probability are all just commentaries about a state of knowledge. They are silent about causality or about Divinity. Random ought to be causeless. But that is pretense, for such randomness is outside our experience. The flip of a coin produces only one outcome. Multiple outcomes are not possible. The probability of one of several possible outcomes is only a measure of lack of knowledge. Particles with a probability of being in one place or another are also an expression of ignorance. However, when it is claimed that a particle may be in two places simultaneously, we encounter a challenge to our notion of identity for particle and for place. Is that just ignorance about location in time, or do we have two particles or two places? Random collisions at the start of time are merely labels for ignorance. Invoking singularities which appear randomly and cause Big Bangs is also just an expression of ignorance.

Whenever science or the scientific method requires, or invokes, randomness or probability, it is about what we do not know. It says nothing about why existence must be. The fundamental question remains unaddressed: “Why is there something rather than nothing?”

And every determinist or atheist, whether admitted to or not, believes in the God of Randomness. Everything Random is Unknown (which includes the Divine).


A rational dislike is never a phobia

October 28, 2020

It is no longer politically correct to have an irrational fear of anything. There have been politically correct, but rather cowardly, reporters on Swedish TV who have even questioned whether the French teacher who was beheaded by an Islamic terrorist did not bear some responsibility for his own death.

But when what is wrong is denied for fear of being seen as racist or Islamophobic, it is both irrational and cowardly.


Vaccine worship is almost as bad as anti-vax

October 18, 2020

Anti-vax may be utterly stupid but vaccine worship is not far behind.

Let us not forget the public health fiasco with the swine influenza vaccine and narcolepsy. In October 2009, Sweden’s public health services carried out a mass vaccination program against swine influenza. Six million doses of GlaxoSmithKline’s H1N1 influenza vaccine Pandemrix were administered. The vaccine was approved for use by the European Commission in September 2009, upon the recommendations of the European Medicines Agency. By August 2010, both the Swedish Medical Products Agency (MPA) and the Finnish National Institute for Health and Welfare (THL) had launched investigations regarding the development of narcolepsy as a side effect.

An increased risk of narcolepsy was found following vaccination with Pandemrix, a monovalent 2009 H1N1 influenza vaccine that was used in several European countries during the H1N1 influenza pandemic. This risk was initially found in Finland, and then other European countries also detected an association.

CDC

Today over 400 of those vaccinated in Sweden suffer from narcolepsy.

Narcolepsy is a central nervous system disorder characterized by excessive daytime sleepiness (EDS) and abnormal manifestations of rapid eye movement (REM) sleep. This disorder is caused by the brain’s inability to regulate sleep-wake cycles normally. The condition is incurable and lifelong. Some treatments can help to alleviate symptoms.

It is the same “experts” and institutions who decided on mass use of Pandemrix who are now inventing public health strategies for Covid-19. Meanwhile a vaccine for the coronavirus is still in its early stages of development and clinical trials. Some of these “expert” strategies are just fairy tales and fantasy.

Vaccinations generally work. Particular vaccinations sometimes don’t. Whether any particular vaccine against Covid-19 will work remains to be seen. The experience with other coronaviruses provides no track record which inspires great confidence.

I get worried when people say they believe in science. To be scientific is to be skeptical. If some science has to be believed in, then whatever it is that has to be believed is not science.


The substance of leadership lies in behaviour not in style

October 16, 2020

I was recently invited by our local college (gymnasium) to give a lecture about my views on leadership. I was a little surprised that some of the questions were focused on the style of leadership rather than on substance. For example, styles are sometimes classified as being:

  1. empathic or
  2. visionary or
  3. coaching or
  4. commanding or
  5. driving or
  6. democratic.

Without the need for cooperation, the word “leader” is undefined. Without a leader the word “leadership” is undefined. For me, leadership is entirely about behaviour. This classification of styles is not, in fact, about what constitutes leadership or even about different kinds of leadership. It is merely a list of styles which is entirely superficial. It places  an undue emphasis on form rather than on substance; on the cosmetics of what leadership looks like rather than the fundamental behaviour involved. The use of “democratic” as a qualifier for a leadership style merely panders to a fashionable sense of political correctness and is inherently self-contradictory. The behaviour needed for leadership is no different whether in a monarchy or a democracy or a dictatorship.  The behaviour is no different whether in the military or in government or in the corporate world or in sport. 

I prefer my own definition of what a leader is.

“A leader is a person who behaves in such a manner as to induce the necessary behaviour from others, individually and collectively, towards a goal”

With this definition, the various behavioural styles above only describe particular facets of behavioural interactions between a leader and others. A leader has just two functions, which are necessary and sufficient:

  1. To create and establish goals, and
  2. To induce the behaviour necessary from others, individually and collectively, towards those goals.

Behavioural styles of a leader are then, and must be, as varied as may be necessary to induce the required behaviour from others. Depending upon the size of the group involved and their competence, a leader will need to use different styles to motivate and encourage different members. He is the conductor of an orchestra of behaviours. He may have to be a tyrant occasionally, a commander with some, empathetic with others, or consultative with a few. The style in play may well vary with different leaders and different members. Behavioural style may vary over time or depending upon the prevailing external conditions. The so-called “democratic” style is really a very particular style of behaviour. It is useful, at times, in getting consensus – if consensus is what is needed – when dealing, for example, with an expert group where all members have very high levels of specialized competence. Group members have different roles and can vary widely in competence. A consensus of the incompetent is of no great value. Any leader who generally subordinates his behaviour to the consensus, or to a majority view to determine decisions, effectively abdicates leadership. A “democratic” leadership is inherently contradictory. You can have leadership in a democracy but not democracy within leadership. Any “style” of leadership behaviour must always be subordinated to the primary function of inducing the behaviour necessary from others to achieve a goal.

By considering the two components separately, it becomes much easier not only to assess people for leadership roles but also to tailor education and training to suit particular individuals.

  1. Can the individual envision, create and establish goals? 
  2. Can the individual get the necessary behaviour from others?

It then naturally follows that being visionary and having skills for strategy or planning or forecasting or communication will be beneficial for goal-setting. Similarly, it becomes obvious that people-skills, motivation, communication, inspiration and persuasion are beneficial for getting the required behaviour from others. It is, I have found, counter-productive to over-think and unnecessarily complicate the basic principles. 

Leadership is about the effectiveness of the leader’s behaviour. The empirical evidence of 200,000 years as modern humans is that a group with leadership is more effective than one without. Leadership is a vector quantity with both magnitude and direction. The direction comes from the creation and setting of goals and the magnitude is a measure of the “goodness” of the leadership which, in turn, is a measure of the competence of the leader to induce the required behaviour of others.

I do not claim that leadership is easy. But I do claim that the principles of leadership are simple and straightforward.  A leader must be able to create and establish goals and must then be able to induce the behaviour of others towards those goals. It is complex but it is not more complicated than that.


“What the pandemic has taught us about science” – Matt Ridley

October 14, 2020

Matt Ridley’s article in the Rational Optimist expresses much of my frustration with gullible journalists and with the opportunism of the scientific fraternity (where over half are bean counters or clerks and don’t do any science) in exploiting every funding opportunity. The money being thrown at Covid-19 research is far too tempting to expect that the charlatans will stay away. More than three hundred different projects being funded to develop a vaccine suggests either that we are so clueless that 300 different paths need to be pursued or that a number of fake projects are being funded. I am not impressed when projects are funded to “study” how long the virus remains viable on mobile phones as opposed to plastic bags, or when the number of authors on Covid-19 papers is counted and “analysed”.

As Matt Ridley writes:

“…. peer review is often perfunctory rather than thorough; often exploited by chums to help each other; and frequently used by gatekeepers to exclude and extinguish legitimate minority scientific opinions in a field”.

His article is re-blogged here.

What the pandemic has taught us about science – Matt Ridley

The scientific method remains the best way to solve many problems, but bias, overconfidence and politics can sometimes lead scientists astray.

The Covid-19 pandemic has stretched the bond between the public and the scientific profession as never before. Scientists have been revealed to be neither omniscient demigods whose opinions automatically outweigh all political disagreement, nor unscrupulous fraudsters pursuing a political agenda under a cloak of impartiality. Somewhere between the two lies the truth: Science is a flawed and all too human affair, but it can generate timeless truths, and reliable practical guidance, in a way that other approaches cannot.

In a lecture at Cornell University in 1964, the physicist Richard Feynman defined the scientific method. First, you guess, he said, to a ripple of laughter. Then you compute the consequences of your guess. Then you compare those consequences with the evidence from observations or experiments. “If [your guess] disagrees with experiment, it’s wrong. In that simple statement is the key to science. It does not make a difference how beautiful the guess is, how smart you are, who made the guess or what his name is…it’s wrong.”

So when people started falling ill last winter with a respiratory illness, some scientists guessed that a novel coronavirus was responsible. The evidence proved them right. Some guessed it had come from an animal sold in the Wuhan wildlife market. The evidence proved them wrong. Some guessed vaccines could be developed that would prevent infection. The jury is still out.

Seeing science as a game of guess-and-test clarifies what has been happening these past months. Science is not about pronouncing with certainty on the known facts of the world; it is about exploring the unknown by testing guesses, some of which prove wrong.

Bad practice can corrupt all stages of the process. Some scientists fall so in love with their guesses that they fail to test them against evidence. They just compute the consequences and stop there. Mathematical models are elaborate, formal guesses, and there has been a disturbing tendency in recent years to describe their output with words like data, result or outcome. They are nothing of the sort.

An epidemiological model developed last March at Imperial College London was treated by politicians as hard evidence that without lockdowns, the pandemic could kill 2.2 million Americans, 510,000 Britons and 96,000 Swedes. The Swedes tested the model against the real world and found it wanting: They decided to forgo a lockdown, and fewer than 6,000 have died there.

In general, science is much better at telling you about the past and the present than the future. As Philip Tetlock of the University of Pennsylvania and others have shown, forecasting economic, meteorological or epidemiological events more than a short time ahead continues to prove frustratingly hard, and experts are sometimes worse at it than amateurs, because they overemphasize their pet causal theories.

A second mistake is to gather flawed data. On May 22, the respected medical journals the Lancet and the New England Journal of Medicine published a study based on the medical records of 96,000 patients from 671 hospitals around the world that appeared to disprove the guess that the drug hydroxychloroquine could cure Covid-19. The study caused the World Health Organization to halt trials of the drug.

It then emerged, however, that the database came from Surgisphere, a small company with little track record, few employees and no independent scientific board. When challenged, Surgisphere failed to produce the raw data. The papers were retracted with abject apologies from the journals. Nor has hydroxychloroquine since been proven to work. Uncertainty about it persists.

A third problem is that data can be trustworthy but inadequate. Evidence-based medicine teaches doctors to fully trust only science based on the gold standard of randomized controlled trials. But there have been no randomized controlled trials on the wearing of masks to prevent the spread of respiratory diseases (though one is now under way in Denmark). In the West, unlike in Asia, there were months of disagreement this year about the value of masks, culminating in the somewhat desperate argument of mask foes that people might behave too complacently when wearing them. The scientific consensus is that the evidence is good enough and the inconvenience small enough that we need not wait for absolute certainty before advising people to wear masks.

This is an inverted form of the so-called precautionary principle, which holds that uncertainty about possible hazards is a strong reason to limit or ban new technologies. But the principle cuts both ways. If a course of action is known to be safe and cheap and might help to prevent or cure diseases—like wearing a face mask or taking vitamin D supplements, in the case of Covid-19—then uncertainty is no excuse for not trying it.

A fourth mistake is to gather data that are compatible with your guess but to ignore data that contest it. This is known as confirmation bias. You should test the proposition that all swans are white by looking for black ones, not by finding more white ones. Yet scientists “believe” in their guesses, so they often accumulate evidence compatible with them but discount as aberrations evidence that would falsify them—saying, for example, that black swans in Australia don’t count.

Advocates of competing theories are apt to see the same data in different ways. Last January, Chinese scientists published a genome sequence known as RaTG13 from the virus most closely related to the one that causes Covid-19, isolated from a horseshoe bat in 2013. But there are questions surrounding the data. When the sequence was published, the researchers made no reference to the previous name given to the sample or to the outbreak of illness in 2012 that led to the investigation of the mine where the bat lived. It emerged only in July that the sample had been sequenced in 2017-2018 instead of post-Covid, as originally claimed.

These anomalies have led some scientists, including Dr. Li-Meng Yan, who recently left the University of Hong Kong School of Public Health and is a strong critic of the Chinese government, to claim that the bat virus genome sequence was fabricated to distract attention from the truth that the SARS-CoV-2 virus was actually manufactured from other viruses in a laboratory. These scientists continue to seek evidence, such as a lack of expected bacterial DNA in the supposedly fecal sample, that casts doubt on the official story.

By contrast, Dr. Kristian Andersen of Scripps Research in California has looked at the same confused announcements and stated that he does not “believe that any type of laboratory-based scenario is plausible.” Having checked the raw data, he has “no concerns about the overall quality of [the genome of] RaTG13.”

Given that Dr. Andersen’s standing in the scientific world is higher than Dr. Yan’s, much of the media treats Dr. Yan as a crank or conspiracy theorist. Even many of those who think a laboratory leak of the virus causing Covid-19 is possible or likely do not go so far as to claim that a bat virus sequence was fabricated as a distraction. But it is likely that all sides in this debate are succumbing to confirmation bias to some extent, seeking evidence that is compatible with their preferred theory and discounting contradictory evidence.

Dr. Andersen, for instance, has argued that although the virus causing Covid-19 has a “high affinity” for human cell receptors, “computational analyses predict that the interaction is not ideal” and is different from that of SARS, which is “strong evidence that SARS-CoV-2 is not the product of purposeful manipulation.” Yet, even if he is right, many of those who agree the virus is natural would not see this evidence as a slam dunk.

As this example illustrates, one of the hardest questions a science commentator faces is when to take a heretic seriously. It’s tempting for established scientists to use arguments from authority to dismiss reasonable challenges, but not every maverick is a new Galileo. As the astronomer Carl Sagan once put it, “Too much openness and you accept every notion, idea and hypothesis—which is tantamount to knowing nothing. Too much skepticism—especially rejection of new ideas before they are adequately tested—and you’re not only unpleasantly grumpy, but also closed to the advance of science.” In other words, as some wit once put it, don’t be so open-minded that your brains fall out.

Peer review is supposed to be the device that guides us away from unreliable heretics. A scientific result is only reliable when reputable scholars have given it their approval. Dr. Yan’s report has not been peer reviewed. But in recent years, peer review’s reputation has been tarnished by a series of scandals. The Surgisphere study was peer reviewed, as was the study by Dr. Andrew Wakefield, hero of the anti-vaccine movement, claiming that the MMR vaccine (for measles, mumps and rubella) caused autism. Investigations show that peer review is often perfunctory rather than thorough; often exploited by chums to help each other; and frequently used by gatekeepers to exclude and extinguish legitimate minority scientific opinions in a field.

Herbert Ayres, an expert in operations research, summarized the problem well several decades ago: “As a referee of a paper that threatens to disrupt his life, [a professor] is in a conflict-of-interest position, pure and simple. Unless we’re convinced that he, we, and all our friends who referee have integrity in the upper fifth percentile of those who have so far qualified for sainthood, it is beyond naive to believe that censorship does not occur.” Rosalyn Yalow, winner of the Nobel Prize in medicine, was fond of displaying the letter she received in 1955 from the Journal of Clinical Investigation noting that the reviewers were “particularly emphatic in rejecting” her paper.

The health of science depends on tolerating, even encouraging, at least some disagreement. In practice, science is prevented from turning into religion not by asking scientists to challenge their own theories but by getting them to challenge each other, sometimes with gusto. Where science becomes political, as in climate change and Covid-19, this diversity of opinion is sometimes extinguished in the pursuit of a consensus to present to a politician or a press conference, and to deny the oxygen of publicity to cranks. This year has driven home as never before the message that there is no such thing as “the science”; there are different scientific views on how to suppress the virus.

Anthony Fauci, the chief scientific adviser in the U.S., was adamant in the spring that a lockdown was necessary and continues to defend the policy. His equivalent in Sweden, Anders Tegnell, by contrast, had insisted that his country would not impose a formal lockdown and would keep borders, schools, restaurants and fitness centers open while encouraging voluntary social distancing. At first, Dr. Tegnell’s experiment looked foolish as Sweden’s case load increased. Now, with cases low and the Swedish economy in much better health than other countries, he looks wise. Both are good scientists looking at similar evidence, but they came to different conclusions.

Having proved a guess right, scientists must then repeat the experiment. Here too there are problems. A replication crisis has shocked psychology and medicine in recent years, with many scientific conclusions proving impossible to replicate because they were rushed into print with “publication bias” in favor of marginally and accidentally significant results. As the psychologist Stuart Ritchie of Kings College London argues in his new book, “Science Fictions: Exposing Fraud, Bias, Negligence and Hype in Science,” unreliable and even fraudulent papers are now known to lie behind some influential theories.

For example, “priming”—the phenomenon by which people can be induced to behave differently by suggestive words or stimuli—was until recently thought to be a firmly established fact, but studies consistently fail to replicate it. In the famous 1971 Stanford prison experiment, taught to generations of psychology students, role-playing volunteers supposedly chose to behave sadistically toward “prisoners.” Tapes have revealed that the “guards” were actually instructed to behave that way. A widely believed study, subject of a hugely popular TED talk, showing that “power posing” gives you a hormonal boost, cannot be replicated. And a much-publicized discovery that ocean acidification alters fish behavior turned out to be bunk.

Prof. Ritchie argues that the way scientists are funded, published and promoted is corrupting: “Peer review is far from the guarantee of reliability it is cracked up to be, while the system of publication that’s supposed to be a crucial strength of science has become its Achilles heel.” He says that we have “ended up with a scientific system that doesn’t just overlook our human foibles but amplifies them.”

At times, people with great expertise have been humiliated during this pandemic by the way the virus has defied their predictions. Feynman also said: “Science is the belief in the ignorance of experts.” But a theoretical physicist can afford such a view; it is not much comfort to an ordinary person trying to stay safe during the pandemic or a politician looking for advice on how to prevent the spread of the virus. Organized science is indeed able to distill sufficient expertise out of debate in such a way as to solve practical problems. It does so imperfectly, and with wrong turns, but it still does so.

How should the public begin to make sense of the flurry of sometimes contradictory scientific views generated by the Covid-19 crisis? There is no shortcut. The only way to be absolutely sure that one scientific pronouncement is reliable and another is not is to examine the evidence yourself. Relying on the reputation of the scientist, or the reporter reporting it, is the way that many of us go, and is better than nothing, but it is not infallible. If in doubt, do your homework.