Posts Tagged ‘Science’

Is the Principle of Least Resistance the Zeroth Law of Being?

June 22, 2025

The underlying compulsion

Is thrift, parsimony, a sort of minimalism, part of the fabric of the universe?

Occam’s razor (known also as the principle of parsimony) is the principle that when presented with alternative explanations for the same phenomenon, the explanation that requires the fewest assumptions should be selected. While Occam’s razor is about how to think about and describe phenomena, I am suggesting that parsimony of action, the path of least resistance, is deeply embedded in causality and in all of existence.

Why is there something rather than nothing? Why does the universe exist? The answer is all around us. Because it is easier to be than not to be. Because at some level, in some dimension, in some domain of action and for some determining parameter, there is a greater resistance or opposition to not being than to being. Why does an apple fall from a tree? Because there is, in the prevailing circumstances, more resistance to its not falling than to its falling. At one level this seems – and is – trivial. It is self-evident. It is what our common sense tells us. It is what our reason tells us. And it is true.

It also tells us something else. If we are to investigate the root causes of any event, any happening, we must investigate the path by which it happened and what was the resistance or cost that was minimised. I am, in fact, suggesting that causality requires that the path of sequential actions is – in some domain and in some dimension – a thrifty path.

A plant grows in my garden. It buds in the spring and by winter it is dead. It has no progeny to appear next year. Why, in this vast universe, did it appear only to vanish, without having any noticeable impact on any other creature, god, or atheist? Some might say it was chance, others that it was the silent hand of a larger purpose. But I suspect the answer is simpler but more fundamental. The plant grew because it was “easier”, by some definition for the universe, that it grow than that it not grow. If it had any other option, then that must have been, by some measure, more expensive, more difficult.

In our search for final explanations – why the stars shine, why matter clumps, why life breathes – we often overlook a red thread running through them all. Wherever we look, things tend to happen by the easiest possible route available to them. Rivers meander, following easier paths, and they always flow downhill, not uphill. Heat flows from warm to cold because flowing the other way needs effort and work (as in a refrigerator). When complexity happens it must be that in some measure, in some domain, staying simple faces more resistance than becoming complex. How else would physics become chemistry and form atoms and molecules? Why else would chemistry become biochemistry with long complex molecules? It must have been easier for biology and life to come into being than not to. The bottom line is that if it was easier for us not to be, then we would not be here. Even quantum particles, we are told, “explore” every possible path but interfere in such a way that the most probable path is the one of least “action”. This underlying parsimony – this preference for least resistance – might well deserve to be raised to a status older than any law of thermodynamics or relativity. It might be our first clue as to how “being” itself unfurls. But is this parsimony really a universal doctrine or just a mirage of our imperfect perception? And if so, how far does it reach?

We can only elucidate with examples. And, of course, our examples are limited to just that slice of the universe that we can imperfectly perceive with all our limitations. Water finds the lowest point (where lowest means closest to the dominant gravitational object in the vicinity). Light bends when it moves from air into glass or water, following the path that takes the least time. Time itself flows because it is easier that it does than that it does not. A cat, given the choice between a patch of bare floor and a soft cushion, unfailingly selects the softer path. It may seem far-fetched, but it could be that the behaviour of the cat and the ray of light are not just related, they are constrained to be what they are. Both are obeying the same hidden directive to do what costs the least effort, to follow a path of actions presenting the least resistance; where the effort being minimised could be time, or energy, or discomfort, or hunger, or something else.
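The “least time” claim about light has a standard formal expression (Fermat’s principle, nothing new to this essay): a ray between two points follows the path that makes its travel time stationary, and Snell’s law of refraction falls straight out of that minimisation. As a sketch, with n the refractive index along the path and c the speed of light:

\[ T[\text{path}] = \frac{1}{c}\int_{\text{path}} n(\mathbf{r})\, ds, \qquad \delta T = 0 \;\Rightarrow\; n_1 \sin\theta_1 = n_2 \sin\theta_2 \]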

In physics, this underlying compulsion has been proposed from time to time. The Principle of Least Action, in physics, states that a system’s trajectory between two points in spacetime is the one that minimizes a quantity called the “action”. Action, in this context, is a quantity that combines energy, momentum, distance, and time. Essentially, the universe tends towards the path of least resistance and least change. Newton hinted at it; Lagrange and Hamilton built it into the bones of mechanics. Feynman has a lecture on it. The principle suggests that nature tends to favor paths that are somehow “efficient” or require minimal effort, given the constraints of the system. A falling apple, a planet orbiting the Sun, a thrown stone: each follows the path which, when summed over time, minimizes an abstract quantity called “action”. In a sense, nature does not just roll downhill; it picks its way to roll “most economically”, even if the actual route curves and loops under competing forces. Why should such a principle apply? Perhaps the universe has no effort to waste – however it may define “effort” – and perhaps it is required to be thrifty.
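For readers who want the textbook statement (a standard result, not specific to this essay): the action is the time integral of the Lagrangian, and requiring it to be stationary under small variations of the trajectory yields the equations of motion. Strictly speaking the action is made stationary rather than always minimal, which is a useful caveat to the word “least”.

\[ S[q] = \int_{t_1}^{t_2} L(q, \dot q, t)\, dt, \qquad \delta S = 0 \;\Rightarrow\; \frac{d}{dt}\frac{\partial L}{\partial \dot q} - \frac{\partial L}{\partial q} = 0 \]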

The path to life can be no exception

Generally the path of least resistance fits with our sense of what is reasonable (heat flow, fluid flow, electric current, …) but one glaring example is counter-intuitive. The chain from simple atoms to molecules to complex molecules to living cells to consciousness seems to be one of increasing complexity and increasing difficulty of being. One might think that while water and light behave so obligingly, living things defy the common-sensical notion that simple is cheap and complex is expensive. Does a rainforest – with its exuberant tangle of vines, insects, poisons, and parasites – look like a low-cost arrangement? Isn’t life an extremely expensive way just to define and find a path to death and decay?

Living systems, after all, locally do reduce entropy, they do build up order. A cell constructs a complicated molecule, seemingly climbing uphill against the universal tendency for things to spread out and decay. But it does so at the expense of free energy in its environment. The total “cost”, when you add up the cell plus its surroundings, still moves towards a cheaper arrangement overall and is manifested as a more uniform distribution of energy, more heat deposited at the lowest temperature possible. Life is the achieving of local order paid for by a cost reckoned as global dissipation. Fine, but one might still ask why atoms should clump into molecules and molecules into a cell. Could it ever be “cheaper” than leaving them separate and loose? Shouldn’t complex order be a more costly state than simple disorder? In a purely static sense, yes. But real molecules collide, bounce, and react. Some combinations, under certain conditions, lock together because once formed they are stable, meaning it costs “more” to break them apart than to keep them together. Add some external driver – say a source of energy, or a catalytic mineral surface, or a ray of sunlight – and what might have stayed separate instead finds an easier path to forming chains, membranes, and eventually a primitive cell. Over time, any accessible path that is easier than another will inevitably be traversed.
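The bookkeeping behind “local order paid for by global dissipation” is just the second law applied to the system together with its surroundings: a cell may lower its own entropy provided the heat it dumps raises the entropy of its environment by at least as much. As a sketch, with q the heat released to surroundings at temperature T:

\[ \Delta S_{\text{total}} = \Delta S_{\text{cell}} + \Delta S_{\text{surroundings}} \ge 0, \qquad \Delta S_{\text{surroundings}} \approx \frac{q}{T} \]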

Chemistry drifts into biochemistry not by defying ease, but by riding the easiest locally available pathway. It is compulsion rather than choice. Action is triggered by the availability of the pathway and that is always local. Evolution then – by trial and error – makes the rough first arrangement into a working organism. Not a perfectly efficient or excellent organism in some cosmic sense, but always that which is good enough and the easiest achievable in that existential niche, at that time. One must not expect “least resistance” to provide a perfection which is not being sought. A panda’s thumb is famously clumsy – but given the panda’s available ancestral parts, it was easier to improvise a thumb out of a wrist bone than to grow an entirely new digit. Nature cuts corners when it is cheaper than starting over.

Perhaps the reason why the spark of life and the twitch of consciousness evade explanation is that we have not yet found – if at all we are cognitively capable of finding – the effort that is being minimised and the domain in which it exists. We don’t know what currency the universe uses or how this effort is measured. Perhaps this is a clue as to how we should do science or philosophy at the very edges of knowledge. Look for what the surroundings would see as parsimony, look for the path that was followed and what was minimised. Look for the questions to which the subject being investigated is the answer. To understand what life is, or time or space, or any of the great mysteries, we need to look for the questions to which they are the answers.

Quantum Strangeness: The Many Paths at Once

Even where physics seems most counter-intuitive, the pattern peeks through. In quantum mechanics, Richard Feynman’s path integral picture shows a particle “trying out” every possible trajectory. In the end, the most likely path is not a single shortest route but the one where constructive interference reinforces paths close to the classical least-action line. It also seems to me – and I am no quantum physicist – that a particle may similarly tunnel through a barrier, apparently ignoring the classical impossibility. Yet this too follows from the same probability wave. The path of “least resistance” here is not some forbidden motion but an amplitude that does not drop entirely to zero. What is classically impossible becomes possible at a cost which is a low but finite probability. Quantum theory does not invalidate or deny the principle. It generalizes it to allow for multiple pathways, weighting each by its cost in whatever language of probability amplitudes the universe deals in.
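The “many paths weighted by cost” picture has a standard formal expression (Feynman’s sum over histories): the amplitude to go from a to b adds a phase for every conceivable path, and paths near the stationary-action trajectory reinforce one another while the rest largely cancel. Schematically:

\[ A(a \to b) \;=\; \sum_{\text{paths }a\to b} e^{\, i S[\text{path}]/\hbar} \]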

It is tempting to try and stretch the principle to explain everything, including why there is something rather than nothing. Some cosmologists claim the universe arose from “quantum nothingness”, with positive energy in matter perfectly balanced by negative energy in gravity. On paper, the sum is zero and therefore, so it is claimed, no law was broken by conjuring a universe from an empty hat. But this is cheating. The arithmetic works only within an existing framework. After all, quantum fields, spacetime, and conservation laws are all “something”. To define negative gravitational energy, you need a gravitational field and a geometry on which to write your equations. Subtracting something from itself leaves a defined absence, not true nothingness.
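The claim being criticised here is usually written as a simple balance, with the (negative) gravitational binding energy cancelling the (positive) energy of matter and radiation; whether that bookkeeping is meaningful without a pre-existing framework is exactly what the paragraph above disputes. Schematically:

\[ E_{\text{total}} = E_{\text{matter}} + E_{\text{gravity}} \approx 0, \qquad E_{\text{gravity}} < 0 \]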

In considering true nothingness – the ultimate, absolute void (uav) – we must begin by asserting that removing something from itself cannot create this void. Subtracting a thing from itself creates an absence of that thing alone. Subtracting everything from itself may work, but our finite minds can never encompass everything. In any case, the least resistance principle means that the mathematical trick of creating, from a void, a something here and a negative something there while claiming that zero has not been violated (as some have suggested with positive energy and negative gravitational energy) is false. It is very close to chicanery. To create something from nothing demands that a path of least resistance be available compared to continuing as nothing. To conjure something from nothing needs not only a path to the something, but also a path to the not-something. Thrift must apply to the summation of these paths; otherwise the net initial zero would prevail and continue.

The absolute void, the utter absence of anything, no space, no time, no law, is incomprehensible. From here we cannot observe any path, let alone one of lower resistance, to existence. Perhaps the principle of least resistance reaches even into the absolute zero of the non-being of everything. But that is beyond human cognition to grasp.

Bottom up not top down

Does nature always find the easiest, global path? Perhaps no, if excellence is being sought. But yes, if good enough is good enough. And thrift demands that nature go no further than good enough. Perfect fits come about by the elimination of bad fits, not by a search for excellence. Local constraints can trap a system in a “good enough” state. Diamonds are a textbook example. They are not the lowest-energy form of carbon at the Earth’s surface; graphite is. Graphite has a higher entropy than diamond. But turning diamond into graphite needs an improbable, expensive chain of atomic rearrangements. So diamonds persist for eons because staying diamond is the path of least immediate, local resistance. But diamonds will have found a pathway to graphite before the death of the universe. The universe – and humans – act locally. What is global follows as a consequence of the aggregation, the integral, of the local good enough paths.
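A hedged sketch of the diamond example in standard terms: at surface conditions graphite is the lower free-energy form of carbon, so the conversion is thermodynamically downhill, but its rate is suppressed by a very large activation barrier, and so on any human or geological timescale diamond stays in its local valley. In Arrhenius form:

\[ \Delta G_{\text{diamond}\to\text{graphite}} < 0, \qquad k \;\sim\; A\, e^{-E_a / k_B T} \;\;\text{with } E_a \gg k_B T \]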

Similarly, evolution does not look for, and does not find, the perfect creature but only the one that survives well enough. A bird might have a crooked beak or inefficient wings, but if the cost of evolving a perfect version is too high or requires impossible mutations, the imperfect design holds. A local stability, and the local cost of disturbing that stability, removes a more distant economy from sight.

Thus, the principle is best stated humbly. Nature slides into the lowest stable valley it can actually access in the landscape, not necessarily the deepest valley available.

A Zeroth Law or just a cognitive mirage?

What I have tried to articulate here is an intuition. I intuit that nature, when presented with alternatives, is required to be thrifty, to not waste what it cannot spare. This applies for whatever the universe takes to be the appropriate currency – whether energy, time, entropy, or information. In every domain where humans have been able to peek behind the curtain, the same shadow of a bias shimmers. The possible happens, the costliest is avoided, and the impossible stays impossible because the resistance is infinite. In fact the shadow even looks back at us if we pretend to observe from outside and try to lift the curtain of why the universe is. It must apply to every creation story. Because it was cheaper to create the universe than to continue with nothingness.

It may not qualify as a law. It is not a single equation but a principle of principles. It does not guarantee simplicity or beauty or excellence. Nature is perfectly happy with messy compromises provided they are good enough and the process is the cheapest available. It cannot take us meaningfully to where human cognition cannot go, but within the realm of what we perceive as being, it might well be the ground from which more specific laws sprout. Newton’s laws of motion, Einstein’s relativity, Maxwell’s equations and even the Schrödinger equation, I postulate, are all expressions of the universe being parsimonious.

We can, at least, try to define it: Any natural process in our universe proceeds along an accessible path that, given its constraints, offers the least resistance compared with the other paths accessible to it.

Is it a law governing existence? Maybe. Just as the little plant in my garden sprouted because the circumstances made it the easiest, quietest, cheapest path for the peculiar combination of seeds, soil, sunlight, and moisture that came together by chance. And in that small answer, perhaps, lies a hint for all the rest. That chance was without apparent cause. But that particular chance occurred because it was easier for the universe – not for me or the plant – that it did so than that it did not. But it is one of those things human cognition can never know.


The Great Mysteries: Known, Knowable, and Unknowable Foundations of Philosophy

April 16, 2025


Humanity’s pursuit of understanding is shaped by enduring questions – the Great Mysteries of existence, time, space, causality, life, consciousness, matter, energy, fields, infinity, purpose, nothingness, and free will. These enigmas, debated from ancient myths to modern laboratories, persist because of the inescapable limits of our cognition and perception. Our brains, with their finite 86 billion neurons, grapple with a universe of unfathomable complexity. Our senses – sight, hearing, touch – perceive only a sliver of reality, blind to ultraviolet light, infrasound, or phenomena beyond our evolutionary design. We cannot know what senses we lack, what dimensions or forces remain invisible to our biology. The universe, spanning an observable 93 billion light-years and 13.8 billion years, appears boundless, hiding truths beyond our reach. Together, these constraints – finite brain, limited senses, unknown missing senses, and an apparently boundless universe – render the unknowable a fundamental fact, not a mere obstacle but a cornerstone of philosophical inquiry.

Knowing itself is subjective, an attribute of consciousness, not a separate mystery. To know – the sky is blue, 2+2=4 – requires a conscious mind to perceive, interpret, and understand. How we know we know is contentious, as reflection on knowledge (am I certain?) loops back to consciousness’s mystery, fraught with doubt and debate. This ties knowing to the unknowable: if consciousness limits what and how we know, some truths remain beyond us. Philosophy’s task is to acknowledge this, setting initial and boundary conditions – assumptions – for endeavors like science or ethics. The unknowable is the philosophy of philosophy, preventing us from chasing mirages or clutching at straws. The mysteries intertwine – existence needs time’s flow, space grounds physical being, causality falters at its first cause, consciousness shapes knowing – luring us with connections that reveal little. We classify knowledge as known (grasped), knowable (graspable), and unknowable (ungraspable), rooted in consciousness’s limits. Ignoring this, philosophers and physicists pursue futile absolutes, misled by the mysteries’ web. This essay explores these enigmas, their links, and the necessity of grounding philosophy in the unknowable.

I. The Tripartite Classification of Knowledge

Knowledge, an expression of consciousness, divides into known, knowable, and unknowable, a framework that reveals the Great Mysteries’ nature. The known includes verified truths – facts like gravity’s pull or DNA’s structure – established through observation and reason. These are humanity’s achievements, from Euclid’s axioms to quantum theory. The knowable encompasses questions within potential reach, given new tools or paradigms. The origin of life or dark energy’s nature may yield to inquiry, though they challenge us now. The unknowable marks where our finite nature – biological, sensory, existential – sets impassable limits.

The unknowable stems from our constraints. Our brains struggle with infinite regress or absolute absence, bound by their finite capacity. Our senses capture visible light, not gamma rays; audible sound, not cosmic vibrations. We lack senses for extra dimensions or unseen forces, ignorant of what we miss. The universe, vast and expanding, hides realms beyond our cosmic horizon or before the Big Bang’s earliest moments (~10^-43 seconds). This reality – finite cognition, limited perception, unknown sensory gaps, boundless cosmos – makes it inevitable that some truths are inaccessible to us. We are embedded in time, space, and existence, unable to view them externally. Philosophy’s task is to recognize these limits, setting assumptions that ground endeavors. Ignoring the unknowable risks mirages – false promises of answers where none exist – leaving us clutching at straws instead of building knowledge.

II. The Great Mysteries: A Catalog of the Unknowable

The Great Mysteries resist resolution, their unknowability shaping the assumptions we must make. Below, I outline each, situating them in the tripartite framework, then explore their interconnected web, which lures yet confounds us.

Existence: Why Is There Something Rather Than Nothing?

Existence’s origin, from Leibniz to Heidegger, remains a foundational enigma. The known includes observable reality – stars, particles, laws – but why anything exists is unclear. Reason tells us that existence must be because it is compelled to be so, but what those compulsions might be defies our comprehension. There must have been some prior condition which made it “easier” for there to be existence than not. The knowable might include quantum fluctuations sparking the Big Bang, yet these assume causality and time. The unknowable is the ultimate “why,” demanding a perspective outside existence, impossible for us. Metaphysicians chasing a final cause risk mirages, assuming an answer lies within reach, when philosophy must set existence as an unprovable starting point.

Time: What Is Its True Nature?

Time governs not only life, but the existence of anything. Yet its essence eludes us. We observe some of its effects – clocks, seasons – and the knowable includes relativity’s spacetime or quantum time’s emergence. But is time linear, cyclic, or illusory? Its subjective “flow” defies capture. To know time, we’d need to transcend it, beyond temporal beings. Ancient eternal gods and block-time models falter, pursuing clarity where philosophy must assume time’s presence, not its essence. The unidirectional arrow of time just is: a brute fact that permits no further penetration.

Space: What Is Its Fundamental Reality?

Space, reality’s stage, seems familiar but confounds. We know its measures – distances, volumes – and the knowable includes curved spacetime or extra dimensions. But what space is – substance, relation, emergent – remains unknowable. Why three dimensions, enabling physical existence (stars, bodies), not two or four? We cannot exit space to see its nature, and Planck-scale probes (~10^-35 meters) elude us. Cosmologies from Aristotle to multiverses assume space’s knowability, risking straw-clutching when philosophy must posit space as a given.

Causality: Does Every Effect Have a Cause?

Causality drives science, yet its scope is unproven. We know cause-effect patterns – stones fall, reactions occur – and the knowable might clarify quantum indeterminacy. But is causality universal or constructed? The first cause – what sparked existence – remains sidestepped, with science starting a little after the Big Bang and philosophy offering untestable gods or regresses. To know causality’s reach, we’d need to observe all events, which is impossible. Thinkers like Hume assume its solvability, ignoring that philosophy must treat causality as an assumption, not a truth.

Life: What Sparks Its Emergence?

Life’s mechanisms – DNA, evolution – are known, and abiogenesis may be knowable via synthetic biology. We search for where the spark of life may have first struck but we don’t know what the spark consists of. Why matter becomes “alive,” or life’s purpose, is unknowable. And as long as we don’t know, those who wish to can speculate about souls. Animists saw spirits, biologists study chemistry, yet both chase a threshold beyond perception. Assuming life’s knowability risks mirages; philosophy grounds biology by positing life as an empirical phenomenon, not explaining its essence.

Consciousness: Why Do We Experience?

Consciousness, where knowing resides, is our core mystery. We know neural correlates; the knowable includes mapping them. But why processes yield experience – the hard problem – is unknowable, as consciousness cannot access others’ qualia or exit itself. How we know we know – certainty, doubt – is contentious, from Plato’s beliefs to Gettier’s challenges, tying knowing’s subjectivity to consciousness’s limits. Seeking universal theories risks mirages; philosophy assumes consciousness as given.

Matter, Energy, Fields: What Are They Fundamentally?

Matter, energy, and fields are known via models – atoms, quanta, waves. Every model uses initial and boundary conditions which, themselves, cannot be addressed. The knowable includes quantum gravity. But their essence – what they are – may be unknowable. What is the stuff of the fundamental particles? Are fields real or fictions? Atomists to string theorists chase answers, but Planck-scale realities defy us. Assuming a final ontology risks mirages; philosophy sets these as frameworks, not truths.

Infinity: Can We Grasp the Boundless?

Infinity, the uncountable, defies intuition. It is a placeholder for the incomprehensible. We know mathematical infinities (Cantor’s sets) and use them; the knowable might clarify physical infinity (space’s extent). But infinity’s reality or role is unknowable—our finite minds falter at boundlessness, paradoxes (Zeno’s) persist. Mathematicians seeking proofs assume too much; philosophy posits infinity as a tool, not a fact.

Purpose: Does Existence Have Meaning?

Purpose shapes ethics and religion, yet is unproven. We know human meanings (values); the knowable might include evolutionary drives. But cosmic purpose – existence’s “for” – is unknowable, needing intent we cannot access. Existentialists and theologians project meaning, risking straws; philosophy assumes purpose as human, not universal. What compelled the Big Bang, or the existence of the universe? Was that some deeper Law of Nature? A Law of the Super-Nature?

Nothingness: What Is Absolute Absence?

Nothingness probes “nothing.” We know quantum vacuums fluctuate; the knowable might explore pre-Big Bang states. But true nothingness – absence of all – is unknowable, as we exist in “something.” To have something, the framework of existence must be present; and if that something is then removed, do we get to nothingness or are we left with the space of existence? With numbers we cannot derive zero except by subtracting one from one. But without something, how do we even conceptualise nothing? Can nothingness only be defined by first having something? Parmenides and physicists assume answers, but philosophy must posit somethingness as our starting point.

Free Will: Are We Truly Free?

Free will grounds morality, yet is unclear. We know brain processes; the knowable includes mapping agency. But freedom versus determinism is unknowable – we cannot isolate uncaused acts or escape causality. Augustine to Dennett chase clarity, but philosophy assumes will as a practical condition, not a truth.

Perplexing Connections: A Web of Mirages

The mysteries intertwine, with knowing, as consciousness’s attribute, weaving through their links, luring us toward insight yet leading nowhere. Existence and time are inseparable – being requires change, which in turn needs time to flow. But what is time, and what does it flow through? Physical existence demands three-dimensional space – real things (quarks, trees) occupy it, unlike abstractions – yet why three dimensions, not two or four, baffles us. Causality binds these, an empirical fact – events follow causes – but the first cause, existence’s spark, is dodged, leaving a void.

  • Existence and Time: Existence implies dynamism; a timeless “something” feels unreal. Heraclitus tied being to flux, physics links time to entropy. But why existence exists loops to when it began, and time’s flow loops to existence’s cause. Our finite brains grasp sequences, not sources; senses see motion, not time’s essence; the boundless universe hides time’s start, if any. Philosophers like Kant (time as intuition) chase answers, but the link reveals only our limits, demanding we assume both as givens.
  • Space and Existence: Physical things require 3D space – a stone needs place, a star volume. Two dimensions lack depth for matter, four defy perception (a 4D “shadow” needs unimaginable light). Why 3D? Our embeddedness in space blocks an external view, senses miss other dimensions, and the cosmos may conceal alternatives. Descartes (space as extension) assumes knowability, but philosophy must posit 3D space as a condition, not explain it.
  • Causality’s Role: Causality stitches existence, time, space—events unfold in spacetime, caused by priors. Yet, the first cause – what began it? – is sidestepped. Science can only go back to a little after the Big Bang, philosophy offers gods or regresses, neither testable. Our observations halt at Planck scales, logic breaks at uncaused causes. Russell (“universe just is”) assumes closure, but causality’s origin remains an assumption, not a truth. Referring to a brute fact is the sure sign of having reached the unknowable.
  • Consciousness and Knowing: Knowing is consciousness’s act – perceiving, understanding, reflecting. How we know we know – certainty’s test – is debated, as consciousness doubts itself (Gettier, skeptics). This links all mysteries: existence’s why, time’s flow, space’s form depend on conscious knowing, subjective and limited, making their truths elusive.

These connections form a circular web – knowing needs consciousness, existence needs time, time needs space, space needs causality, causality needs existence – each leaning on others without a base we can reach. They tantalize, suggesting unity, but lead to mirages, as our finite minds cannot break the loop, our senses see only 3D, temporal projections, and the universe hides broader contexts. Ignoring this, thinkers pursue the web’s threads, clutching at straws when philosophy’s role is to set boundaries, not chase illusions.

III. The Futility of Overreaching

The Great Mysteries, interwoven, persist as unknowable, yet many refuse to see this. Philosophers debate existence or space’s nature, assuming logic captures them, blind to unprovable foundations. Neuroscientists claim consciousness will yield to scans, ignoring qualia’s gap. Physicists seek a Theory of Everything, presuming space, causality, matter have final forms, despite unreachable scales. The mysteries’ web fuels this folly—links like existence-time or causality-space suggest a solvable puzzle. But chasing these leads to mirages, as circularity traps us—time explains existence, space grounds causality, none stand alone.

This stems from assuming all is knowable. Science’s successes—vaccines, satellites—imply every question yields. Yet, the unknowable is philosophy’s guardrail. Without it, endeavors falter, like metaphysicians seeking existence’s cause or physicists probing causality’s origin, grasping at straws. Ancient skeptics like Pyrrho saw uncertainty’s value, grounding thought in limits, while modern thinkers often reject it, misled by the web’s false promise.

IV. Grounding Philosophy in the Unknowable

Acknowledging the unknowable is philosophy’s practical task, setting assumptions for science, ethics, art. It prevents chasing mirages, ensuring endeavors rest on firm ground:

  • Science: Assume space’s 3D frame, time’s flow, causality’s patterns, pursuing testable models (spacetime’s curve, life’s origin), not essences (space’s being, first causes).
  • Philosophy: Posit consciousness, free will as conditions for ethics, not truths to prove, avoiding loops to existence or causality.
  • Culture: Embrace mysteries in art, myth, as ancients did, using their web – time’s flow, space’s stage – to inspire, not solve.

For example, DNA’s structure (known) and abiogenesis (knowable) advance biology, while life’s purpose is assumed, not chased. Space’s measures aid cosmology, its 3D necessity a starting point, not an answer.

V. Conclusion

The Great Mysteries – existence, time, space, causality, life, consciousness, matter, energy, fields, infinity, purpose, nothingness, free will – endure because our finite brains, limited senses, unknown missing senses, and boundless universe make the unknowable a fact. Their web – existence flowing with time, space enabling reality, causality faltering at its origin – lures but leads to mirages, circular and unresolvable. Ignoring this, philosophers and physicists chase straws, misled by false clarity. The unknowable is philosophy’s foundation, setting assumptions that ground endeavors. By embracing it, we avoid futile quests, building on the known and knowable while marveling at the mysteries’ depth, our place within their vast, unanswerable weave.


Related:

Knowledge is not finite and some of it is unknowable

https://www.forbes.com/sites/startswithabang/2016/01/17/physicists-must-accept-that-some-things-are-unknowable/#6d2c5834ae1a

https://ktwop.com/2018/08/21/when-the-waves-of-determinism-break-against-the-rocks-of-the-unknowable/

https://ktwop.com/2017/10/17/the-liar-paradox-can-be-resolved-by-the-unknowable/

Physics cannot deal with nothingness


“Dark oxygen” discovery probably more junk science

January 19, 2025

Well! Well!

Another scientific myth bites the dust. But never believe anything which feels compelled to use the word “dark”.

BBC

Scientists who recently discovered that metal lumps on the dark seabed make oxygen, have announced plans to study the deepest parts of Earth’s oceans in order to understand the strange phenomenon. Their mission could “change the way we look at the possibility of life on other planets too,” the researchers say. The initial discovery confounded marine scientists. It was previously accepted that oxygen could only be produced in sunlight by plants – in a process called photosynthesis.

But I am extremely sceptical of all “dark” things. Dark energy and dark matter are fudge factors and were never even claimed to be real things. Now, even the need for the fudge factor is vanishing. I suspect dark oxygen may also turn out to be just another example of junk science.

Evidence of dark oxygen production at the abyssal seafloor

Abstract
Deep-seafloor organisms consume oxygen, which can be measured by in situ benthic chamber experiments. Here we report such experiments at the polymetallic nodule-covered abyssal seafloor in the Pacific Ocean in which oxygen increased over two days to more than three times the background concentration, which from ex situ incubations we attribute to the polymetallic nodules. Given high voltage potentials (up to 0.95 V) on nodule surfaces, we hypothesize that seawater electrolysis may contribute to this dark oxygen production.

Of course there are many who claim this is a nonsense discovery. If photosynthesis is not the only way of producing oxygen and it can actually be produced in the depths of the ocean, then microbial life is not just possible but likely on the deep ocean floor. That could allow fanatic environmentalists (like the ones who caused the LA fires) to obstruct the potential mining of metals (rare metals especially).

BBC

The initial discovery triggered a global scientific row – there was criticism of the findings from some scientists and from deep sea mining companies that plan to harvest the precious metals in the seabed nodules. If oxygen is produced at these extreme depths, in total darkness, that calls into question what life could survive and thrive on the seafloor, and what impact mining activities could have on that marine life. That means that seabed mining companies and environmental organisations – some of which claimed that the findings provided evidence that seafloor mining plans should be halted – will be watching this new investigation closely.

I find the criticisms of dark oxygen much more credible than the discovery paper by Sweetman.

Critical Review of the Article: “Evidence of Dark Oxygen Production at the Abyssal Seafloor” by Sweetman et al. in Nat. Geosci. 1–3 (2024)

This review examines the findings and methodologies presented in Sweetman et al. (2024) (hereafter referred to as ‘the paper’). The paper presents findings contrasting those of all previous comparable work and has stirred international debate pertaining to deep-sea minerals. We identify significant issues in data collection, validation, and interpretation including unvalidated data collection methods, the omission of crucial observations relevant for electrolysis processes, and unsupported voltage measurements which undermine the study’s conclusions. These issues, coupled with unfounded hypotheses about early Earth oxygen production, call into question the authors’ interpretation of the observations and warrant re-examining the validity of this work. 

Dark oxygen sounds more like junk science and funding hype than any real discovery.


Science ultimately needs magic to build upon

January 3, 2025

The purpose of the scientific method is to generate knowledge. “Science” describes the application of the method and the knowledge gained. The knowledge generated is always subjective and the process builds upon fundamental assumptions which make up the boundary conditions for the scientific method. These assumptions can neither be explained nor proved.


I find it useful to take knowledge as coming in 3 parts.

  1. known: This encompasses everything that we currently understand and can explain through observation, experimentation, and established theories. This is the realm of established scientific knowledge, historical facts, and everyday experiences.
  2. unknown but knowable: This is the domain of scientific inquiry. It includes phenomena that we don’t currently understand but that we believe can be investigated and explained through scientific methods. This is where scientific research operates, pushing the boundaries of our knowledge through observation, experimentation, and the development of new theories.
  3. unknown and unknowable: This is the realm that I associate with metaphysics, religion and theology. It encompasses questions about ultimate origins, the meaning of existence, the nature of consciousness, and other metaphysical questions that may not be amenable to scientific investigation.

Philosophy then plays the crucial role of exploring the boundaries between these domains, challenging the assumptions, and developing new ways of thinking about knowledge and reality.

I like this categorization of knowledge because

  • it provides a clear framework for distinguishing between different types of questions and approaches to understanding.
  • it acknowledges the limits of scientific inquiry and recognizes that there may be questions that science cannot answer, and
  • it allows for the coexistence of science, philosophy, religion, and other ways of knowing, each addressing different types of questions.

To claim any knowledge about the unknown or the unknowable leads inevitably to self-contradiction. Which is why the often-used form “I don’t know what, but I know it isn’t that” is always self-contradictory. It implies a constraint on the unknown, which is a contradiction in terms. If something is truly unknown, we surely cannot even say what it is not.

Given that the human brain is finite and that we cannot observe any bounds to our universe – in space or in time – it follows that there must be areas beyond the comprehension of human cognition. We invent labels to represent the “unknowable” (boundless, endless, infinite, timeless, supernatural, magic, countless, ….). These labels are attempts to conceptualize what is inherently beyond our conceptualization. They serve as placeholders for our lack of understanding. But it is the human condition that, having confirmed that there are things we cannot know, we then proceed anyway to try to define what we cannot. We are pattern-seeking beings who strive to make sense of the world around us. Even when faced with the limits of our understanding, we try to create mental models, however inadequate they may be.

Human cognitive capability is limited not only by the brain’s physical size but also by the senses available to us. We know about some of the senses we lack (e.g., the ability to detect magnetic fields like some birds or to perceive ultraviolet light directly like some insects), but cannot know what we don’t know. We cannot even conceive of what other senses we might be missing. These are the “unknown unknowns,” and they represent a fundamental limit to our understanding of reality. Even our use of instruments to detect parameters we cannot sense directly must be interpreted by the senses we do have. We convert X-rays into images in the visible spectrum, or we represent radio waves as audible sounds. This conversion necessarily involves interpretation and introduces subjectivity. We also know that the signals generated by an animal’s eye probably could not be understood by a human brain. The brain’s software needs to be tuned for the senses the brain has access to. The inherent limitations of human perception make the subjective nature of our experience of reality unavoidable. The objectivity of all human observations is thus a mirage. Empiricism is necessarily subjective.

Scientific inquiry remains the most powerful tool humans have developed for understanding the world around us. With sophisticated instruments to extend our limited senses and by using conceptual tools such as mathematics and logic and reason we gain insights into aspects of reality that would otherwise be inaccessible to us. Never mind that logic and reason are not understood in themselves. But our experience of reality is always filtered through the lens of our limited and species-specific senses. We cannot therefore eliminate the inherent subjectivity of our observations and the limitations of our understanding. We cannot know what we cannot know.

I do not need to invoke gods when I say that “magic” exists, when I define “magic” as those things beyond human comprehension. This definition avoids superpower connotations and focuses on the limits of our current knowledge. In this sense, “magic” is a placeholder for the unknown. I observe that the process of science requires fundamental assumptions which are the boundary conditions within which science functions. These assumptions include:

  • Existence of an External Reality: Science assumes that there is an objective reality independent of our minds.
  • Existence of Matter, Energy, Space, and Time: These are the fundamental constituents of the physical universe as we understand it.
  • Causality: Science assumes that events have causes and that these causes can be investigated.
  • Uniformity of Natural Laws: Science assumes that the laws of nature are the same everywhere in the universe and throughout time.
  • The possibility of Observation and Measurement: Science depends on the assumption that we can observe and measure aspects of reality.
  • Life and Consciousness: The biological and medical sciences observe and accept these, but cannot explain them.

Science operates within a framework given by these fundamental assumptions which cannot be explained. These incomprehensibilities are the “magic” that science builds upon. Science can address them obliquely but cannot question them directly without creating contradictions. If we were to question the existence of an external reality, for example, the entire scientific enterprise would become meaningless. Science can investigate their consequences and refine our understanding of what they are not, but cannot directly prove or disprove them. These assumptions are – at least currently – beyond human comprehension and explanation. Science builds upon this “magic” but cannot explain the “magic”.

Magic is often ridiculed because it is perceived as invoking beings with supernatural powers which in turn is taken to mean the intentional violation of some of the laws of nature. The core issue lies in the definitions of “magic” and  “supernatural.” I take supernatural to be “that which is beyond the laws of nature as we know them.” But we tend to dismiss the supernatural rather too glibly. If something is beyond comprehension it must mean that we cannot bring that event/happening to be within the laws of nature as we know them. And that must then allow the possibility of being due to the “supernatural”. If we do not know what compels existence or causality then we cannot either exclude a supernatural cause (outside the laws of nature as we know them). In fact the Big Bang theory and even quantum probabilities each need such “outside the laws of nature” elements. A black hole is supernatural. Singularities in black holes and the Big Bang represent points where our current understanding of physics breaks down. The laws of general relativity, which describe gravity, become undefined at singularities. In this sense, they are “beyond” our current laws of nature. A singularity where the laws of nature do not apply is “supernatural”. Dark energy and dark matter are essentially fudge factors and lie outside the laws of nature as we know them. We infer their existence from their gravitational effects on visible matter and the expansion of the universe, but we haven’t directly detected them. Collapsing quantum wave functions which function outside space and time are just as fantastical as Superman. All these represent holes in our understanding of the universe’s composition and dynamics. That understanding may or may not come in the future. And thus, in the now, they are supernatural.

Supernatural today may not be supernatural tomorrow. It is the old story of one’s own technology being magic to someone else. Magic is always beyond the laws of nature as we know them. But what is magic today may remain magic tomorrow. We cannot set qualifications on what we do not know. What we do not know may or may not violate the known laws of nature. While we have a very successful theory of gravity (general relativity) that accurately predicts the motion of planets, we don’t fully understand the fundamental nature of gravity. We don’t know how it is mediated. In this sense, there is still an element of “magic” or mystery surrounding gravity. We can describe how it works, but not ultimately why. The bottom line is that we still do not know why the earth orbits the sun. We cannot guarantee that everything currently unexplained will eventually be explained by science. There might be phenomena that remain permanently beyond our comprehension, or there might be aspects of reality that are fundamentally inaccessible to scientific investigation. By definition, we cannot fully understand or categorize what we do not know. Trying to impose strict boundaries on the unknown is inherently problematic. We cannot assume that everything we currently don’t understand will necessarily conform to the laws of nature as we currently understand them. New discoveries might require us to revise or even abandon some of our current laws.

The pursuit of scientific knowledge is a journey into the unknown, and we will encounter phenomena that challenge our existing understanding. But we cannot question the foundational assumptions of science without invalidating the inquiry.

Science depends upon – and builds upon – magic.


Science needs its Gods and religion is just politics

April 11, 2021

This essay has grown from the notes of an after-dinner talk I gave last year. As I recall it was just a 20 minute talk but making sense of my old notes led to this somewhat expanded essay. The theme, however, is true to the talk. The surrounding world is one of magic and mystery. And no amount of Science can deny the magic.

Anybody’s true belief or non-belief is a personal peculiarity, an exercise of mind and unobjectionable. I do not believe that true beliefs can be imposed from without. Imposition requires some level of coercion and what is produced can never be true belief. My disbelief can never disprove somebody else’s belief.

Disbelieving a belief brings us to zero – a null state. Disbelieving a belief (which by definition is the acceptance of a proposition which cannot be proved or disproved) brings us back to the null state of having no belief. It does not prove the negation of a belief.

[ (+G) – (+G) = 0, not (~G) ]

Of course Pooh puts it much better.




Without first having religions, atheism and agnosticism cannot exist

June 27, 2017

I take science to be the process by which areas of ignorance are explored, illuminated and then shifted into our space of knowledge. One can believe that the scientific method is powerful enough to answer all questions – eventually – by the use of our cognitive abilities. But it is nonsense to believe that science is, in itself, the answer to all questions. As the perimeter surrounding human knowledge increases, what we know that we don’t know also increases. There is what we know, and at the perimeter of what we know lies what we don’t know. Beyond that lies the boundless space of ignorance where we don’t know what we don’t know.

Religions generally use a belief in the concept of a god (or gods) as their central tenet. By definition this is within the space of ignorance (which is where all belief lives). For some individuals the belief may be so strong that they claim it to be “personal knowledge” rather than a belief. It remains a belief though, since it cannot be proven. Buddhism takes a belief in gods to be unnecessary but – also within the space of ignorance – believes in rebirth (not reincarnation) and the “infinite” (nirvana). Atheism is just as much in the space of ignorance since it is based on the belief that no gods or deities or supernatural things exist. Such beliefs can only come into being as a reaction to others having a belief in gods or deities or the supernatural. But the denial of a belief which does not exist cannot rationally be meaningful. If religions and their belief in gods or the supernatural did not first exist, atheism would be meaningless. Atheism merely replaces a belief in a God with a belief in a Not-God.

I take the blind worship of “science” also to be a religion in the space of ignorance. All physicists and cosmologists who believe in the Big Bang singularity effectively believe in an incomprehensible and unexplainable Creation Event. Physicists who believe in dark matter or dark energy, as mysterious things, vested with just the right properties to bring their theories into compliance with observations of an apparently expanding universe, are effectively invoking magic. When modern physics claims that there are 57 fundamental particles but has no explanation as to why there should be just 57 (for now) or 59 or 107 fundamental particles, it takes recourse to magical events at the beginning of time. Why there should be four fundamental forces in our universe (electromagnetism, gravitation, the strong force and the weak force), and not two or three or seven, is also unknown and magical.

Agnosticism is just a reaction to the belief in gods. Whereas atheists deny the belief, agnostics merely state that such beliefs can neither be proved nor disproved; that the existence of gods or the supernatural is unknowable. But by recognising limits to what humans can know, agnosticism inherently accepts that understanding the universe lies in a “higher” dimension than what human intelligence and cognitive abilities can cope with. That is tantamount to a belief in “magic”, where “magic” covers all things that happen or exist but which we cannot explain. Where atheism denies the answers of others, agnosticism declines to address the questions.

The Big Bang singularity, God(s), Nirvana and the names of all the various deities are all merely labels for things we don’t know in the space of what we don’t know, that we don’t know. They are all labels for different kinds of magic.

I am not sure where that leaves me. I follow no religion. I believe in the scientific method as a process but find the “religion of science” too self-righteous and too glib about its own beliefs in the space of ignorance. I find atheism mentally lazy and too negative. It is just a denial of the beliefs of others. It does not itself address the unanswerable questions. It merely tears down the unsatisfactory answers of others. Agnosticism is a cop-out. It satisfies itself by saying the questions are too hard for us to ever answer and that it is not worthwhile to try.

I suppose I just believe in Magic – but that too is just a label in the space of ignorance.


 

Science needs some scienticians

June 18, 2014

Physic gave rise to physicians long before physics was practiced by a physicist,

Mathematics gives mathematicians, but who would trust a mathematist. 

A practitioner of an “ology” has an honourable profession,

So biologistsoncologists, archaeologists and geologists can be numbered by the million. 

Without the richness of an “ist” modern politics would be barren,

politicist has a murky trade but he is not a politician

We have leftists and rightists and socialists and you can even find some libertarians,

But for all the mayhem in the world, you will not find any extremians.

Environmentalists and conservationists are politically very fashionable,

But their devious methods have now become – rather questionable. 

Philosophy was where it started but we rarely refer to philosophists,

And many of the scientists of today are little more than sophists. 

It was only in 1840 that scientists were one of Whewell’s inventions,

But they are now two-a-penny, and we could do with a few scienticians.

It should be quite clear that I think that there are far too many who claim to be scientists though they do no science. It then becomes useful to distinguish the real scienticians from the rabble. And perhaps the same could apply to the real economians among the multitude of clerks who call themselves economists.

Number of citations and excellence in science

February 10, 2014

Scientific excellence can only truly be judged by history. But history has eyes only for impact and if excellent science causes no great change to scientific orthodoxy, it is soon forgotten. For a scientist the judgements of history long after he performs his science are of no real significance. Even where academic freedom is the main motivator for the scientist, the degrees of freedom available are related to academic success. An academic or scientific career depends increasingly on contemporaneous judgements – and here social networking, peer review and bibliometric factors are decisive. There may well be some correlation between academic success and the “goodness” of the scientist but it is not the success or the bibliometrics which are causative.

As Lars Walloe puts it: Walloe-on-Exellence

In the evaluation process many scientists and nearly all university and research council administrators love all kind of bibliometric tools. This has of course a simple explanation. The “bureaucracy” likes to have a simple quantitative tool, which can be used with the aid of a computer and the internet to give an “objective” measure of excellence. However, excellence is not directly related either to the impact factor of the journal in which the work is published, or to the number of citations, or to the number of papers published, or even to some other more sophisticated bibliometric indices. Of course there is some correlation, but it is in my judgement weaker than what many would like to believe, and uncritical use of these tools easily leads to wrong conclusions. For instance the impact factor of a journal is mainly determined by the very best papers published in it and not so much by the many ordinary papers published. We know well that even in high impact factor journals like Science and Nature or high impact journals in more specialized fields, from time to time not so excellent papers are being published. 

…..  I often meet scientists for whom to obtain high bibliometric factors serve as a prime guidance in their work. Too many of them are really not that good, but were just lucky or work in a field where it was easier to get many citations. …..If you are working with established methods in a popular field you can be fairly sure to get your papers published. I can mention in details some medical fields were I know that this has happened or is happening today. The scientists in such fields get a high number of publications and citations, but the research is not necessarily excellent. 

And getting your paper published has now become so important in the advancement of an academic career that journals are proliferating. Many of the new journals have now shifted their business models to be based on authors’ fees and not on volume of readership. This is a very “safe” business model since profits are ensured before the journal has even been published, and if the journal is an on-line journal then costs are minimal. It is virtually the “self-publishing” of papers. You pay your money and get your paper published.

The reality today is that more papers are being published by more authors in more journals than ever before. But fewer are actually being read. Papers are cited without having been read – let alone understood.

Skeptical Scalpel:

Another reason could be that publishers, particularly those who charge authors fees for publishing, are in the business of making money.

Authoring journal articles is not only enhancing to one’s CV (the old “publish or perish” cliché), it is required by Residency Review Committees as evidence of “scholarly activity” in training programs. Maybe it’s good for attracting referrals too.

The publish or perish ethos has led to a proliferation of the number of authors per paper!

First noted in 1993 by a paper in Acta Radiologica and a letter in the BMJ, the number of authors per paper has risen dramatically over the years.
A study of 12 radiology journals found the number of authors per paper doubled from 2.2 in 1966 to 4.4 in 1991. A review of Neurosurgery and the Journal of Neurosurgery spanning 50 years found that the average went from 1.8 authors per article in 1945 to 4.6 authors in 1995.
Of note, the above two articles were each written by a single author.
Three psychiatrists from Dartmouth analyzed original scientific articles in four of the most prestigious journals in the United States—Archives of Internal Medicine, Annals of Internal Medicine, Journal of the American Medical Association, and the New England Journal of Medicine—from 1980 to 2000. They found that the mean number of authors per paper increased from 4.5 to 6.9. The same is true for two plastic surgery journals, which saw the average number of authors go from 1.4 to 4.0 and from 1.7 to 4.2 in the 50 years from 1955 to 2005. The number of single-author papers went from 78% to 3% in one journal and from 51% to 8% in the other.
In orthopedics, a review of the American and British versions of the Journal of Bone and Joint Surgery for the 60 years from 1949 to 2009 showed an increase in authors per paper from 1.6 to 5.1.
An impressive rise in the number of authors took place in two leading thoracic surgery journals. For the Journal of Thoracic and Cardiovascular Surgery the increase was from 1.4 in 1936 to 7.5 in 2006, and for the Annals of Thoracic Surgery it was from 3.1 in 1966 to 6.8 in 2006.

And the winner is a paper with 3171 authors! Needless to say it comes from Big Science and the Large Hadron Collider:

the paper with the most authors is “Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC” in a journal called “Physics Letters B” with 3171. The list of authors takes up 9 full pages.

Too many journals, too many papers, too many authors and too many citations. But that does not mean there is more excellence in science.

Numeracy and language

December 2, 2013

I tend towards considering mathematics a language rather than a science. In fact mathematics is more like a family of languages each with a rigorous grammar. I like this quote:

R. L. E. Schwarzenberger, The Language of Geometry, in A Mathematical Spectrum Miscellany, Applied Probability Trust, 2000, p. 112:

My own attitude, which I share with many of my colleagues, is simply that mathematics is a language. Like English, or Latin, or Chinese, there are certain concepts for which mathematics is particularly well suited: it would be as foolish to attempt to write a love poem in the language of mathematics as to prove the Fundamental Theorem of Algebra using the English language.

Just as conventional languages enable culture and provide a tool for social communication, the various languages of mathematics, I think, enable science and provide a tool for scientific discourse. I take “science” here to be analogous to a “culture”. To follow that thought then, just as science is embedded within a “larger” culture, so is mathematics embedded within conventional languages. This embedding shows up as the ability of a language to deal with numeracy and numerical concepts.

And that means that the value judgement of what is “primitive”, when applied to a language, can depend upon the extent to which mathematics, and therefore numeracy, is embedded within that language.

GeoCurrents examines numeracy embedded within languages:

According to a recent article by Mike Vuolo in Slate.com, Pirahã is among “only a few documented cases” of languages that almost completely lack numbers. Dan Everett, a renowned expert in the Pirahã language, further claims that the lack of numeracy is just one of many linguistic deficiencies of this language, which he relates to gaps in the Pirahã culture. …..

The various types of number systems are considered in the WALS.info article on Numeral Bases, written by Bernard Comrie. Of the 196 languages in the sample, 88% can handle an infinite set of numerals. To do so, languages use some arithmetic base to construct numeral expressions. According to Comrie, “we live in a decimal world”: two thirds of the world’s languages use base 10 and such languages are spoken “in nearly every part of the world”. English, Russian, and Mandarin are three examples of such languages. ….. 

Around 20% of the world’s languages use either a purely vigesimal (base 20) or a hybrid vigesimal-decimal system. In a purely vigesimal system, the base is consistently 20, yielding the general formula for constructing numerals as x × 20 + y. For example, in Diola-Fogny, a Niger-Congo language spoken in Senegal, 51 is expressed as bukan ku-gaba di uɲɛn di b-əkɔn ‘two twenties and eleven’. Other languages with a purely vigesimal system include Arawak spoken in Suriname, Chukchi spoken in the Russian Far East, Yimas in Papua New Guinea, and Tamang in Nepal. In a hybrid vigesimal-decimal system, numbers up to 99 use base 20, but the system then shifts to being decimal for the expression of the hundreds, so that one ends up with expressions of the type x × 100 + y × 20 + z. A good example of such a system is Basque, where 256 is expressed as berr-eun eta berr-ogei-ta-hama-sei ‘two hundred and two-twenty-and-ten-six’. Other hybrid vigesimal-decimal systems are found in Abkhaz in the Caucasus, Burushaski in northern Pakistan, Fulfulde in West Africa, Jakaltek in Guatemala, and Greenlandic. In a few mostly decimal languages, moreover, a small proportion of the overall numerical system is vigesimal. In French, for example, numerals in the range 80-99 have a vigesimal structure: 97 is thus expressed as quatre-vingt-dix-sept ‘four-twenty-ten-seven’. Only five languages in the WALS sample use a base that is neither 10 nor 20. For instance, Ekari, a Trans-New Guinean language spoken in Indonesian Papua, uses a base of 60, as did the ancient Near Eastern language Sumerian, which has bequeathed to us our system of counting seconds and minutes. Besides Ekari, non-10-non-20-base languages include Embera Chami in Colombia, Ngiti in the Democratic Republic of Congo, Supyire in Mali, and Tommo So in Mali. ……

Going back to the various types of counting, some languages use a restricted system that does not effectively go above around 20, and some languages are even more limited, as is the case in Pirahã. The WALS sample contains 20 such languages, all but one of which are spoken in either Australia, highland New Guinea, or Amazonia. The one such language found outside these areas is !Xóõ, a Khoisan language spoken in Botswana. ……. 

Read the whole article. 
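To make the base-20 formulas quoted above concrete, here is a minimal sketch in Python (my own illustration, not from the GeoCurrents article) that decomposes a number the way a hybrid vigesimal-decimal system does, i.e. as x × 100 + y × 20 + z:

def hybrid_vigesimal(n):
    # Split n (assumed < 1000) into hundreds, twenties and a remainder,
    # mirroring the structure of the Basque and French numerals quoted above.
    hundreds, rest = divmod(n, 100)
    twenties, units = divmod(rest, 20)
    return hundreds, twenties, units

for n in (256, 97):
    h, t, u = hybrid_vigesimal(n)
    print(f"{n} = {h}*100 + {t}*20 + {u}")

# 256 = 2*100 + 2*20 + 16  -- Basque 'two hundred and two-twenty-and-ten-six'
#  97 = 0*100 + 4*20 + 17  -- French 'quatre-vingt-dix-sept', four-twenty-(ten-seven)

The decomposition reproduces the two quoted examples: Basque 256 as two hundreds, two twenties and sixteen, and French 97 as four twenties and seventeen.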

Counting monkey?

In some societies in the ancient past, numeracy did not contribute significantly to survival, as was probably the case with isolated tribes like the Pirahã. But in most human societies, numeracy was of significant benefit, especially for cooperation between different bands of humans. I suspect that it was the need for social cooperation which fed the need for communication within a tribe and among tribes, which in turn was the spur to the development of language, perhaps over 100,000 years ago. What instigated the need to count is in the realm of speculation. The need for a calendar would only have developed with the development of agriculture. But the need for counting herds probably came earlier, in a semi-nomadic phase. Even earlier than that would have come the need to trade with other hunter-gatherer groups, and that probably gave rise to counting 50,000 years ago or even earlier. The tribes which learned to trade and developed the ability and concepts of trading were probably the tribes that had the best prospects of surviving while moving from one territory to another. It could be that the ability to trade was an indicator of how far a group could move.

And so I am inclined to think that numeracy in language became a critical factor which, 30,000 to 50,000 years ago, determined the groups which survived and prospered. It may well be that it is these tribes which developed numbers, learned to count and learned to trade that eventually populated most of the globe. It may be a little far-fetched, but not impossible, that numeracy in language was one of the features distinguishing Anatomically Modern Humans from Neanderthals – even though the Neanderthals had larger brains, and we are all Neanderthals to some extent!

Science is losing its ability to self-correct

October 20, 2013

With the explosion in the number of researchers, the increasing rush to publication and the corresponding explosion in traditional and on-line journals as avenues of publication, The Economist carries an interesting article making the point that the assumption that science is self-correcting is under extreme pressure. “There is no cost to getting things wrong,” says Brian Nosek, a psychologist at the University of Virginia who has taken an interest in his discipline’s persistent errors. “The cost is not getting them published.”

The field of psychology, and especially social psychology, has been much in the news with the dangers of “priming”.

“I SEE a train wreck looming,” warned Daniel Kahneman, an eminent psychologist, in an open letter last year. The premonition concerned research on a phenomenon known as “priming”. Priming studies suggest that decisions can be influenced by apparently irrelevant actions or events that took place just before the cusp of choice. They have been a boom area in psychology over the past decade, and some of their insights have already made it out of the lab and into the toolkits of policy wonks keen on “nudging” the populace.

Dr Kahneman and a growing number of his colleagues fear that a lot of this priming research is poorly founded. Over the past few years various researchers have made systematic attempts to replicate some of the more widely cited priming experiments. Many of these replications have failed. In April, for instance, a paper in PLoS ONE, a journal, reported that nine separate experiments had not managed to reproduce the results of a famous study from 1998 purporting to show that thinking about a professor before taking an intelligence test leads to a higher score than imagining a football hooligan.

It is not just “soft” fields which have problems. It is apparent that in medicine a large number of published results cannot be replicated.

… irreproducibility is much more widespread. A few years ago scientists at Amgen, an American drug company, tried to replicate 53 studies that they considered landmarks in the basic science of cancer, often co-operating closely with the original researchers to ensure that their experimental technique matched the one used first time round. According to a piece they wrote last year in Nature, a leading scientific journal, they were able to reproduce the original results in just six. Months earlier Florian Prinz and his colleagues at Bayer HealthCare, a German pharmaceutical giant, reported in Nature Reviews Drug Discovery, a sister journal, that they had successfully reproduced the published results in just a quarter of 67 seminal studies.

The governments of the OECD, a club of mostly rich countries, spent $59 billion on biomedical research in 2012, nearly double the figure in 2000. One of the justifications for this is that basic-science results provided by governments form the basis for private drug-development work. If companies cannot rely on academic research, that reasoning breaks down. When an official at America’s National Institutes of Health (NIH) reckons, despairingly, that researchers would find it hard to reproduce at least three-quarters of all published biomedical findings, the public part of the process seems to have failed.

It is not just that research results cannot be replicated. So much of what is published is just plain wrong, and the belief that science is self-correcting is itself under pressure.

Academic scientists readily acknowledge that they often get things wrong. But they also hold fast to the idea that these errors get corrected over time as other scientists try to take the work further. Evidence that many more dodgy results are published than are subsequently corrected or withdrawn calls that much-vaunted capacity for self-correction into question. There are errors in a lot more of the scientific papers being published, written about and acted on than anyone would normally suppose, or like to think. …… Statistical mistakes are widespread. The peer reviewers who evaluate papers before journals commit to publishing them are much worse at spotting mistakes than they or others appreciate. Professional pressure, competition and ambition push scientists to publish more quickly than would be wise. A career structure which lays great stress on publishing copious papers exacerbates all these problems. “There is no cost to getting things wrong,” says Brian Nosek, a psychologist at the University of Virginia who has taken an interest in his discipline’s persistent errors. “The cost is not getting them published.” 

…… In 2005 John Ioannidis, an epidemiologist from Stanford University, caused a stir with a paper showing why, as a matter of statistical logic, the idea that only one such paper in 20 gives a false-positive result was hugely optimistic. Instead, he argued, “most published research findings are probably false.” 

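The statistical logic behind Ioannidis’s claim can be sketched in a few lines of Python. The numbers below are illustrative assumptions of mine (the prior probability that a tested hypothesis is true, and typical statistical power), not figures from his paper:

alpha = 0.05       # significance threshold: chance of a false positive per false hypothesis tested
power = 0.5        # assumed chance of detecting a genuinely true effect (under-powered studies)
prior_true = 0.1   # assumed fraction of tested hypotheses that are actually true

true_positives = prior_true * power           # 0.05 of all tested hypotheses
false_positives = (1 - prior_true) * alpha    # 0.045 of all tested hypotheses
share_false = false_positives / (true_positives + false_positives)
print(f"Share of positive findings that are false: {share_false:.0%}")   # about 47%

With these assumptions nearly half of all “positive” findings are false – far from the intuitive one in twenty – and the bias towards publishing positive results then amplifies the damage in the published record.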
The tendency to publish only positive results also leads to statistics being skewed so that results can be shown as positive.

The negative results are much more trustworthy; …….. But researchers and the journals in which they publish are not very interested in negative results. They prefer to accentuate the positive, and thus the error-prone. Negative results account for just 10-30% of published scientific literature, depending on the discipline. This bias may be growing. A study of 4,600 papers from across the sciences conducted by Daniele Fanelli of the University of Edinburgh found that the proportion of negative results dropped from 30% to 14% between 1990 and 2007. Lesley Yellowlees, president of Britain’s Royal Society of Chemistry, has published more than 100 papers. She remembers only one that reported a negative result.

…. Other data-heavy disciplines face similar challenges. Models which can be “tuned” in many different ways give researchers more scope to perceive a pattern where none exists. According to some estimates, three-quarters of published scientific papers in the field of machine learning are bunk because of this “overfitting”

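A minimal sketch of what that “overfitting” looks like, in Python with NumPy (my own illustration, not from The Economist piece): a model with many tunable parameters fitted to pure noise scores impressively on the data it was tuned on, but the apparent pattern evaporates on fresh data.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 20)
y_train = rng.normal(size=20)   # pure noise: there is no pattern to find
y_test = rng.normal(size=20)    # fresh noise from the same source

coeffs = np.polyfit(x, y_train, deg=12)   # a heavily "tuned" model with 13 free parameters
fit = np.polyval(coeffs, x)

def r_squared(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

print("In-sample R^2:     ", r_squared(y_train, fit))   # close to 1: looks like a discovery
print("Out-of-sample R^2: ", r_squared(y_test, fit))    # near zero or negative: nothing real

Any data-heavy field where models have enough knobs to turn is exposed to the same trap unless results are tested against data the model never saw.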
The idea of peer-review being some kind of quality check on the results being published is grossly optimistic.

The idea that there are a lot of uncorrected flaws in published studies may seem hard to square with the fact that almost all of them will have been through peer-review. This sort of scrutiny by disinterested experts—acting out of a sense of professional obligation, rather than for pay—is often said to make the scientific literature particularly reliable. In practice it is poor at detecting many types of error.

John Bohannon, a biologist at Harvard, recently submitted a pseudonymous paper on the effects of a chemical derived from lichen on cancer cells to 304 journals describing themselves as using peer review. An unusual move; but it was an unusual paper, concocted wholesale and stuffed with clangers in study design, analysis and interpretation of results. Receiving this dog’s dinner from a fictitious researcher at a made up university, 157 of the journals accepted it for publication. ….

……. As well as not spotting things they ought to spot, there is a lot that peer reviewers do not even try to check. They do not typically re-analyse the data presented from scratch, contenting themselves with a sense that the authors’ analysis is properly conceived. And they cannot be expected to spot deliberate falsifications if they are carried out with a modicum of subtlety.

Fraud is very likely second to incompetence in generating erroneous results, though it is hard to tell for certain. 

And then there is the issue that results from Big Science can never be replicated because the cost of the initial work is so high. Medical research and clinical trials are also extremely expensive. Journals have no great interest in publishing replications (even when they are negative). And then, to compound the issue, those who provide funding are less likely to extend funding merely for replication or for negative results.

People who pay for science, though, do not seem seized by a desire for improvement in this area. Helga Nowotny, president of the European Research Council, says proposals for replication studies “in all likelihood would be turned down” because of the agency’s focus on pioneering work. James Ulvestad, who heads the division of astronomical sciences at America’s National Science Foundation, says the independent “merit panels” that make grant decisions “tend not to put research that seeks to reproduce previous results at or near the top of their priority lists”. Douglas Kell of Research Councils UK, which oversees Britain’s publicly funded research argues that current procedures do at least tackle the problem of bias towards positive results: “If you do the experiment and find nothing, the grant will nonetheless be judged more highly if you publish.” 

Trouble at the lab 

The rubbish will only decline when there is a cost to publishing shoddy work which outweighs the gain of adding to a researcher’s list of publications. At some point researchers will need to be held liable and accountable for their products (their publications) – not just for fraud or misconduct, but even for negligence or gross negligence when they do not carry out their work using the best available practices of the field. These are standards that some (but not all) professionals are held to, and there should be no academic researcher who is not also subject to such a standard. If peer-review is to recover some of its lost credibility, then anonymous reviews must disappear and reviewers must be much more explicit about what they have checked and what they have not.