Archive for the ‘Philosophy’ Category

Gods are a matter of epistemology rather than theology

December 28, 2025

Gods are a matter of epistemology rather than theology 

or Why the boundaries of cognition need the invention of Gods

An essay on a subject I have addressed many times, with my views evolving and becoming more nuanced over the years while generally converging. I suspect this is now as close to a final convergence as I can achieve.


Summary

Human cognition is finite, bounded by sensory and conceptual limitations. When we attempt to comprehend realities that exceed those limits—such as the origin of existence, the nature of infinity, or the essence of consciousness—we inevitably reach a point of cognitive failure. At this boundary, we substitute understanding with “labels” that preserve the appearance of explanation. “God” is one such label, a placeholder for what cannot be conceived or described.

The essay argues that the invention of gods is not primarily a cultural accident or a moral device but a “cognitive necessity”. Any consciousness that seeks to understand its total environment will eventually collide with incomprehensibility. To sustain coherence, the mind must assign meaning to the unknowable—whether through myth, metaphysics, or scientific abstraction. “God” thus emerges as a symbolic bridge over the gap between the knowable and the unknowable.

This tendency manifests in the “discretia/continua” tension which arises from our inability to reconcile the world as composed of both distinct things (particles, identities, numbers) and continuous processes (waves, emotions, time). Different cognitions, human, alien, or animal, would experience different boundaries of comprehension depending on their perceptual structures. Yet each would face some ultimate limit, beyond which only placeholders remain.

The essay further proposes that “God” represents not an active being but the “hypothetical cognition that could perceive the universe in its totality”. For finite minds, such total perception is impossible. Thus, the divine concept is born as a projection of impossible completeness. Even an unconscious entity, such as a rock, is immersed in the continuum but lacks perception, suggesting that only through perception do concepts like “continuity” and “divinity” arise.

In essence, “gods exist because minds are finite”. They are conceptual necessities marking the horizon of understanding. The invention of gods is not weakness but the natural consequence of finite awareness confronting the infinite. Where the finitude of our cognition meets the boundless universe, we raise placeholders—and call them gods. “God” emerges not from revelation, but from the structure and limits of cognition itself.


Human finitude

Human cognition is finite. Our brains are finite, and we do not even have many of the senses that have evolved among other living species on earth. We rely primarily on the five traditional senses (sight, hearing, smell, taste, and touch), plus some others like balance, pain, and body awareness. But living things on earth have evolved many “extra” senses that we do not possess. Unlike other creatures we cannot directly detect magnetic fields, electric fields, or infrared or ultraviolet radiation. Nor can we detect and use echolocation, polarized light or seismic signals as some other animals can. (See Senses we lack.) And for all those other detectable signals that must exist in the universe, but are unknown on earth, we cannot know what we do not have.

I take the cognition of any individual to emerge from the particular combination of brain, senses and body making up that individual where the three elements have been tuned to function together by evolution. It is through the cognition available that any observer perceives the surrounding universe. And so it is for humans who find their surroundings to be without bound. No matter where or when we look, we see no edges, no boundaries, no beginnings and no endings. In fact, we can perceive no boundaries of any kind in any part of the space and time (and the spacetime) we perceive ourselves to be embedded in. Our finitude is confronted by boundless surroundings and it follows that each and every observation we make is necessarily partial, imperfect and incomplete. It is inevitable that there are things we cannot know. It is unavoidable that what we do know can only be partial and incomplete. All our observations, our perceptions are subject to the blinkers of our cognition and our finitude can never encompass the totality of the boundless.

It is this finitude of our cognition and the boundless world around us which gives us our three-fold classification of knowledge. There is that which we know, there is that which is knowable but which we do not know, and then there is that which we cannot know. Every act of knowing presupposes both a knower and what is or can be known. Omniscience, knowing everything, is beyond the comprehension of human cognition. To know everything is to remove the very meaning of knowledge. There would be nothing to be known. It is a paradox that as knowledge grows so does the extent of the interface to the unknown and some of that is unknowable. Any mind contained within the universe is a finite mind. Any finite mind faced with a boundless universe is necessarily curtailed in the extent of its perception, processing, representation and understanding.

A key feature of human cognition is that we have the ability to distinguish “things” – things which are discrete, unique, identifiable and countable. We distinguish fundamentally between continua on the one hand, and discrete separate “things” on the other. We classify air, water, emotions and colours as continua, while we recognize atoms and fruit and living entities and planets and galaxies and even thoughts as “things”. Once a thing exists it has an identity separate from every other thing. It may be part of another thing but yet retains its own identity as long as it remains a thing. To be a thing is to have a unique identity in the human-perceived universe. We even dare to talk about all the things in the visible universe (the ca. 10^80 atoms which exist independently and uniquely). But the same cognitive capability also enjoins us to keep “things” separated from continua. We distinguish, draw boundaries, and set one thing against another as we seek to define them. Perception itself is an act of discretization within a world we perceive as continuous in space, energy, time, and motion. Where there are flows without clear division, the human mind seeks to impose structure upon that flow, carving reality into things it can identify, name, and manipulate. Without that discretization there could be no comprehension, but because of it, comprehension is always incomplete. As with any enabler (or tool), human cognition both enables inquiry and limits its field. Even when our instruments detect parameters we cannot directly sense (UV, IR, infrasound, etc.), the data must be translated into forms that we can sense (audible sound, visible light, …) before our brains can interpret them. But humans can never reproduce what a dog experiences with its nose and processes with its brain. Even the same signals sensed by different species are interpreted differently by their separate brains, and the experiences cannot be shared.

When finitude meets the boundless, ….

It is not surprising then that the finitude of our understanding is regularly confounded when confronted by one of the many incomprehensibilities of our boundless surroundings. All our metaphysical mysteries originate at these confrontations. At the deepest level, this is inevitable because cognition itself is finite and cannot encompass an unbounded totality. There will always exist unknowable aspects of existence that remain beyond our cognitive horizon. These are not gaps to be filled by further research or better instruments. They are structural boundaries. A finite observer cannot observe the totality it is part of, for to do so it would have to stand outside itself. The limitation is built into the architecture of our thought. Even an omniscient computer would fail if it tried to compute its own complete state. A system cannot wholly contain its own description. So it is with consciousness. The human mind, trying to know all things, ultimately encounters its own limits of comprehension.

When that point is reached where finitude is confronted by boundlessness, thought divides. One path declares the unknown to be empty and that beyond the horizon there is simply nothing to know. Another declares that beyond the horizon lies the infinite, the absolute. Both stances are responses to the same impasse, and both are constrained by the same cognitive structure. Both are not so much wrong as uninformative, providing no additional insight, no extra value. For something we do not know, we cannot even tell whether there is a fence surrounding it. Each acknowledges, by affirmation or negation, that there exists a boundary beyond which the mind cannot pass. It is this boundary which limits and shapes our observations (or to be more precise, our perception of our observations).

The human mind perceives “things.” Our logic, our language, and our mathematics depend upon the ability to isolate and identify “things”. An intelligence lacking this faculty could not recognize objects, numbers, or individuality. It would perceive not a world of things, but a continuum with variations of flux, patterns without division. For such a cognition, mathematics would be meaningless, for there would be nothing to count. Reality would appear as a continuum without edges. That difference reveals that mathematics, logic, and even identity are not universal properties of the cosmos but features of the cognitive apparatus that apprehends it. They exist only within cognition. The laws of number and form are not inscribed in the universe; they are inscribed in the way our minds carve the universe into parts. A spider surely senses heat and light as gradients and intensities, but it almost certainly has no conception of things like planets and stars.

We find that we are unable to resolve the conflicts which often emerge between the discrete and the continuous, between the countable and the uncountable. This tension underlies all human thought. It is visible in every field we pursue. It appears in particles versus waves, digital versus analogue, fundamental particles versus quantum wave functions, reason versus emotion, discrete things versus the spacetime continuum they belong to. It appears in the discrete spark of life as opposed to amorphous, inert matter or as individual consciousnesses contributing to the unending stream of life. It appears even in mathematics as the tension between countable and uncountable, number and continuum. Continua versus “discretia” (to coin a word) is a hallmark of human cognition. This tension or opposition is not a flaw in our understanding; it is the foundation of it. The mind can grasp only what it can distinguish, but all of existence exceeds what can be distinguished.

Where discreteness crashes into continuity, human cognition fails to reconcile the two. The paradox is irreducible. To the senses, the ocean is a continuous expanse, while to the physicist, it resolves into discrete molecules, atoms and quantum states. Both views are correct within their frames, yet neither captures the whole. The experiences of love, pain, or awe are likewise continuous. They cannot be counted or divided or broken down to neural signals without destroying their essence. Consciousness oscillates perpetually between the two modes – breaking the continuous into parts and then seeking a unifying continuity among the parts. The unresolved tension drives all inquiry, all art, all metaphysics. And wherever the tension reaches its limit, the mind needs a placeholder, a label to mark the place of cognitive discontinuity. The universe appears unbounded to us, yet we cannot know whether it is infinite or finite. If infinite, the very concept of infinity is only a token for incomprehensibility. If finite, then what lies beyond its bounds is equally beyond our grasp. Either way, the mind meets different facets of the same wall. The horizon of incomprehensibility is shaped by the nature of the cognition that perceives it. A spider meets the limit of its sensory world at one point, a human at another, a hypothetical superintelligence elsewhere. But all must meet it somewhere. For any finite mind, there will always be a place where explanation runs out and symbol begins. These places, where the boundary of comprehension is reached, are where the placeholder-gods are born. “God” is the label – a signpost – we use for the point at which the mind’s discretizing faculty fails.

…… the interface to incomprehension needs a label

The word “God” has always carried great weight but no great precision of meaning. For millennia, it has served as the answer of last resort, the terminus at the end of every chain of “why?” Whenever a question could no longer be pursued, when explanations ran out of anywhere to go, “God” was the placeholder for the incomprehensible. The impulse was not, in the first instance, religious. The need for a marker, for a placeholder to demarcate the incomprehensible, was cognitive. What lies at the root of the use of the word “God” is not faith or doctrine, but the structure of thought itself. The concept arises wherever a finite mind confronts what it cannot encompass. The invention of a placeholder-God, therefore, is not a superstition of primitive people but a structural necessity when a bounded cognition meets unbounded surroundings. It is what minds must do when they meet their own limits. When faced with incomprehensibility, we need to give it a label. “God” will do as well as any other.

Each time the boundary of knowledge moves, the placeholder moves with it. The domain of gods recedes in a landscape which has no bounds. It never vanishes, for new boundaries of incomprehension always arise. Think of an expanding circle: as the circle of knowledge grows, the perimeter it shares with the unknowable grows as well. Beyond that line of separation lies a domain that thought can point to but not penetrate.
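The geometry of that analogy can be made explicit. Taking the area of the circle to stand for what is known and its perimeter for the interface with the unknown – purely as an illustration, not a measurement – the two grow together:

\[
A = \pi r^{2}, \qquad P = 2\pi r = 2\sqrt{\pi A},
\]

so every expansion of the known necessarily lengthens the boundary along which the unknowable is encountered.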

The mind must first collide with what it cannot grasp. Only then does the placeholder-God emerge as the marker of our cognitive boundary. This is not a deliberate act of imagination but a reflex of cognition itself. The finite mind, unable to leave an unknown unmarked, seals it with a symbol. The placeholder-God is that seal  – not a being, but a boundary. It does not describe reality but it provides a place for thought to rest where explanation collapses. As a placeholder, “God” is just a 3-letter label. The interface with the incomprehensible, and the placeholder it produces, are therefore necessary, but not sufficient, conditions for any God-being to appear in human thought. Without the interface, divinity has no function; a God invented without an underlying mystery would be a mere fantasy, not a sacred concept.

The paradox deepens when one asks what kind of cognition would not require such a placeholder. Only a mind that could know everything without limit would need none –  but such a mind would no longer be finite, and thus no longer a mind in any meaningful sense. To know all is to dissolve the distinction between knower and known. The infinite mind would not think “of” God; it would be what the finite mind calls God, though without the need to name it. Hence, only finite minds invent gods, and they must necessarily do so. The invention is the shadow cast by limitation.

The concept of God, then, is not evidence of divine existence but arises as a consequence of cognitive limitation. It is the sign that the mind has reached the edge of its own design. To invent gods is not a failure of reason but its completion. The placeholder is the punctuation mark at the end of understanding. It acknowledges that thought, to exist at all, must have limits. And within those limits, the impulse to name what cannot be named is inescapable.

The earliest people looked at the sky and asked what moved the sun. The answer “a God” was no explanation but it marked a boundary. It was a placeholder for the inexplicable. The label has changed. It was once Zeus, later Nature, now perhaps the Laws of Physics or even Science, but the function remains the same. Existence, time, causality, matter and energy are still fundamental assumptions in modern science and are all still inexplicabilities needing their placeholder-Gods. Let us not forget that terms assumed to be very well-known, such as gravity and electric charge, even today are merely placeholder-Gods. We may be able to calculate the effects of gravity to the umpteenth decimal, but we still do not know why gravity is. Electrical charge just is, but why it is, is still just a brute fact in science. Every so-called brute fact invoked by science or philosophy is nothing other than a placeholder-God. Where comprehension ends, a placeholder is needed to prevent thought from collapsing into chaotic incomprehensibility. The idea of a placeholder-God, therefore, is not a primitive explanation but an intellectual necessity. It is the symbol that marks the limits of the cognitive map.

From cognitive placeholder to God-beings

(Note on my use of language. I take supernatural to mean supra-natural – beyond known natural laws – but not unreal. While the unnatural can never be observed, the supernatural is always what has been observed, and is therefore real, but is inexplicable. The rise of the sun and the waning of the moon and the onset of storms and the seasonal growth of plants, all were once considered inexplicable and supernatural. As human knowledge grew, each was gradually absorbed within the gamut of human comprehension. The supernatural is therefore not a denial of reality but a recognition of the incompletely understood. The unnatural is what I take to be unreal and fantastical or invented. The unnatural may be the stuff of fairytales and fantasy but being unreal, can never be observed).

As the placeholder-God gains social form, it must somehow rise above the human condition to retain meaning. A God limited to human capabilities would fail to explain what lies beyond it. Thus, gods become supra-human, but not unnatural, for they remain within the world but “beyond what humans can.”

Under the pressures of imagination, fear, and the need for coherence, the placeholder-God then acquires agency. The divine is invoked. The unknown becomes someone rather than something. A God-being, however, cannot be invented except from first having a placeholder-God. It cannot be created or invented directly, ex nihilo, because invention presupposes a motive, and without the confrontation with incomprehensibility, there is none. The human mind can understand the exercise of power only through will and intent and so the boundary acquires intention. In time, societies institutionalize these projections, turning the abstract placeholder into a God-being  and endowing it with purpose, emotion, and supra-human capacity.

This perspective gives the divine a new and paradoxical definition: “God is that which would perceive the entire universe without limit”. Such perception would not act, judge, or intervene. It would simply encompass. Yet a cognition capable of perceiving all would have no distinction within itself. It would no longer know as we know, for knowledge depends upon differentiation. To perceive all would be to dissolve all boundaries, including the boundary between subject and object. Such a consciousness would be indistinguishable from non-consciousness. The rock that perceives nothing and the god that perceives everything would converge, each beyond cognition, each outside the tension that defines life. Consciousness, poised between them, exists precisely because it knows but does not (cannot) know all.

The necessity of the divine placeholder follows directly from human finitude. The mind cannot tolerate infinite regress or complete ambiguity. It demands closure, even when closure is impossible. To preserve coherence, it must mark the point where coherence breaks down. That mark is the god-concept. It halts the chain of “why” with the only possible answer that does not generate another question. “Because God made it so” and “because that is how the universe is” perform the same function. They end the regress. In this sense, the invention of gods is an act of intellectual hygiene. Without a terminal symbol, thought would never rest; it would dissolve into endless questioning.

Understanding the god-concept in this way does not demean it. It restores its dignity by grounding it in the architecture of cognition rather than in superstition. Theology, stripped of dogma, becomes the study of where understanding fails and symbol takes over – a form of cognitive cartography. Each theology is a map of incomprehensibility, tracing the outer borders of thought. Their differences lie in what each places at the edge of its map and in the projections and colours each uses – Yahveh or Indra, Heaven or Hell, Big Bangs and Black Holes, Nirvana or Nothingness – but their commonality lies in the inevitability of the edge itself.

Modern science has not abolished this pattern; it has merely changed the symbols. The physicist’s equations reach their limit at the singularity, the cosmologist’s model ends before the Big Bang, the biologist’s postulates begin after the spark of life and the neuroscientist’s theories marvel at the mystery of consciousness. Each field encounters an ultimate opacity and introduces a term – “quantum fluctuation,” “initial condition,” “emergence”, “random event” – that serves the same function the placeholder-God once did. Quantum mechanics has shifted the position of many placeholders but has replaced them with new boundaries to the inexplicable. New concepts such as fields, quantum waves, and their collapse are all new “brute facts”. As labels they provide no explanations since they cannot. They are “brute facts”, declarations that comprehension goes no further, that explanation stops here. Matter, energy, spacetime, and causality remain today’s deepest placeholders and there is no explanation in any field of science which can be made without presupposing them. The structure of thought remains the same even when the vocabulary has changed.

In this sense, the divine arises not from invention but from collision. There must first be an encounter with incomprehensibility  – the interface  – before any god-being can appear. Without such a frontier, divinity has no function. A god invented without an underlying mystery would be a mere fiction, not a sacred idea, because it would answer no cognitive or existential demand.

Thus the sequence when finitude is confronted by boundlessness is inevitable and unidirectional:

incomprehensibility → cognitive discomfort → placeholder → personification → divinity.

The Atheist–Theist Misunderstanding

When gods are understood not as beings but as boundaries of cognition, the quarrel between theist and atheist becomes a shadow-boxing match. Both speak to the same human need  – to name the edges of what we cannot (or cannot yet) know.

The theist affirms that beyond the boundary lies sacred divinity while the atheist denies the personality that has been projected upon that region. Yet both acknowledge, implicitly or explicitly, that the boundary exists. The theist says, “Here is God.” The atheist says, “Here is mystery, but not God.” Each uses a different language to describe the same encounter with incomprehensibility. In that sense, the death of God is only the death of one language of ignorance, soon replaced by another. Every age renames its mysteries. Where one century says “God,” another says “Nature,” or “Chance,” or “Quantum Field.” The placeholders persist and only their symbols change. The Laws of Nature are descriptions of observed patterns but explain nothing and do not contain, within themselves, any explanation as to why they are. All our observations assume causality to give us patterns we call Laws. When patterns are not discernible we invoke random events (which need no cause) or we impose probabilistic events on an unknowing universe.

Theism and atheism, then, are not opposites but reactions to the same human predicament, the finite mind meeting the incomprehensible. One bows before it; the other pretends to measure it. Both, in their own ways, testify to the same condition  – that we live surrounded by the unknowable. If there is a lesson in this, it is not theological but epistemological. Gods are not proofs or explanations of existence. They are confessions of cognitive limitation. They mark the frontier between what can be known and what cannot, yet or ever, be known. To understand them as such is not to destroy them but to restore them to their original role  as signposts for, not explanations of, the boundaries of thought.

Our cognition may evolve but will remain finite for the length of our time in this universe. So long as it remains finite, there will always be gods. Their names will change, their forms will evolve, but their necessity will endure. They must endure for they arise wherever understanding ends and wonder begins.


The Skeptical Case against the UN Declaration of Human Rights / 3

August 5, 2025

“The Skeptical Case against the UN Declaration of Human Rights / 3” follows on from my previous essays:

The Skeptical Case Against Natural Law / 1

The Fallacy of Universalism / 2


Background

The United Nations Declaration of Human Rights (UDHR) was adopted in 1948. Since then the number of instances of man’s inhumanity to man has increased by more than a factor of 3, faster than the rate of population growth (2.5 billion in 1948 to c. 8 billion today). The Declaration has neither reduced suffering nor improved human behaviour. In fact, it has not even addressed human behaviour, let alone human conflict. Data from the Office of the High Commissioner for Human Rights (OHCHR) shows that violations of international humanitarian and human rights law have risen in absolute terms, outpacing global population growth.


Introduction

The modern concept of universal human rights is often presented as an intrinsic truth, an unassailable moral foundation upon which justice, equality, and dignity rest. The United Nations Declaration of Human Rights (UDHR) is considered a cornerstone of this ideology, purportedly designed to protect individuals from oppression and injustice. However, upon closer examination, it is apparent that the notion of human rights is a political fiction rather than an objective reality. It is not derived from natural law – which is itself just a fiction – nor is it an empirically observable phenomenon. Its primary function is moral posturing, and it also serves as a strategic tool that sustains particular social, political, and economic structures. The UDHR, while symbolically powerful, lacks true enforcement and primarily functions as a mechanism for political justification, moral posturing, and bureaucratic self-preservation.

Here I try to articulate the philosophical inadequacy of human rights justifications, the inherent contradictions in their supposed universality, and my conclusion that the true function of the UDHR is moral and sanctimonious posturing rather than the effective improvement of human behaviour. The bottom line is that the UDHR has done no good (it has neither reduced suffering nor improved behaviour) and has done harm by justifying the concept of privileges which do not have to be earned. It is not fit for purpose.


The Philosophical Justification for Human Rights: A Fictional Construct

Human rights are often presented as pre-existing entitlements inherent to all individuals, regardless of circumstances or behavior. This idea suggests that every human being is owed certain protections and freedoms simply by virtue of existence. However, a fundamental flaw in this reasoning is that all human experiences, including the recognition or denial of rights, are entirely dependent on the behavior of others. Rights that are “realised” or “enjoyed” are always due to the magnanimity of those who have the power to spoil the party choosing not, in fact, to spoil it. The concept of rights existing independently of behaviour, ensured either by human enforcement or granted by those with the power to deny the right, is an abstraction rather than an observable reality. Neither the universe nor nature has any interest in this invented concept. The universe does not owe anybody anything. Real human behaviour has no interest in and pays little heed to this fantasy either. Actions taken by humans are always in response to existing imperatives for the human who is acting and not – except incidentally – for the fulfilling of the human rights of others. No burglar or murderer (or IS fanatic or Hamas imbecile) ever refrained from nefarious activities to respect the supposed rights of others. Human behaviour – the actions we actually take – is governed by the imperatives physically prevailing in our minds and bodies at the moment of action. I suggest that an imagined, artificial concept of the “rights” of others is never a significant factor either for action or for preventing action.

Several philosophical justifications have been proposed to support the existence of human rights, but none withstand critical scrutiny. The Kantian perspective, which argues that humans are ends in themselves and deserve dignity, relies on an assumption rather than an empirical foundation. The empirical evidence is, in fact, that the assumption is false. There is no objective reason why human dignity should be treated as an absolute, nor does nature provide any evidence that such dignity is an inherent property of existence. Dignity is not an attribute that carries any value in the natural world. From the slums of the world, to its war torn regions and from children dying of famine in Sudan to the homeless drug addicts of Los Angeles, the idea of inherent human dignity collapses when exposed to the realities of human existence. The utilitarian justification, which claims that human rights create stable and prosperous societies, also fails to prove its intrinsic validity; rather, it only suggests that they may be useful under certain conditions. Moreover, contractual justifications, such as those proposed by John Rawls, assert that rights arise from a hypothetical social contract. But this merely describes a proposed social convention rather than any truth or moral compulsion.

Ultimately, human rights are experienced as a result – a consequence – of received behaviour. When enjoyed, they are experienced only because they were not violated by someone who could but didn’t. They are not objective or universal principles but merely received experience resulting from the behaviour of others, which itself is a consequence of happenstance. This reality contradicts the popular narrative that rights are universal, unearned entitlements independent of actual, individual behavior. If an individual’s experience of rights depends entirely on the recognition and actions of others, then what is commonly called a “right” is, in practice, a privilege granted by those who have the capability to ensure it or the power to deny it. No child is born with any rights except those privileges afforded by its surrounding society. The blatant lie – and not just a fiction – is that children are born “equal in rights and dignity”. Compared to reality, it is at best utter rubbish. The “right” of a child to be nurtured is at the behavioural whim of the adult humans exercising power and control over the child. The “right” to property is a privilege granted by those with the power to permit, protect or deny such ownership. The “right” to not be killed is a privilege granted by those having the power to protect or the ability and the inclination to kill. The right to speak freely lasts only as long as those who can, choose not to suppress it. Incidentally, there is no country in the world which does not constrain free speech to be allowed speech. “Free speech” is distinguished by its non-existence anywhere in the world. The imaginary right of free speech has now led to the equally fanciful rights to not be offended or insulted. Good grief! No living thing has, in fact, any “right” to life. The right to live has no force when confronted by a drunken driver, an act of gross incompetence or negligence, or a natural catastrophe. This right to life has no practical value when life is threatened. The stark reality is that any individual enjoys the received experience of human “rights” only as long as someone else’s behaviour does not prevent it.

A lawyer friend once asked me whether it was my position that a child did not have the right not to be tortured. The answer is that the question is fatally flawed. Such a right – like every other human right – is just a fiction. The question is flawed because the realisation of any “right” (or entitlement or privilege) is itself fictional and lies in a fictional future. Not being tortured is a result of the behaviour and/or non-behaviour of others. This result is a received privilege granted to children by those in positions of power over them. Most children are protected by the adults around them provided, of course, those adults have a desire to protect them. The “rights” of the children are as nothing compared to the desires of the surrounding adults who have the ability to implement their desires. The reality that so many children are, in fact, mistreated and tortured is because their persecutors declined to grant them the privilege of not being tortured. Furthermore, it is the actions of their persecutors which lead – by omission or by commission – to their being tortured. In practice, having any such “right” is of no value, either for children who are not tortured or for those so unfortunate as to be subjected to vile and cruel behaviour.

Unearned rights are imaginary and they come without any cost or demand on qualifying behaviour. It is inevitable that they have zero practical value when that supposed right is under threat. A so-called right is enjoyed or violated only as a consequence of someone else’s behaviour (including lack of behaviour). The actions involved are driven by what is important for that someone else. The reality is that every perpetrator of an atrocity has imperatives which drive his behaviour and his actions. The fictional human rights of others – declared or not – are never included among the imperatives governing his actions. They are, in fact, irrelevant to his actions. No robber or murderer or torturer ever refrained from his imperatives for the sake of someone else’s human rights. The fatal flaw in the invented concept of human rights is that real human behaviour is not considered. It is taken to be irrelevant, and improvement of actual behaviour is not directly addressed at all. Real human behaviour contradicts the imaginary concept of universal, unearned rights.

The invention of  the UN Declaration of Human Rights (UDHR)

The 1948 UDHR does not explicitly state any measurable objectives such as the reduction of human suffering or the improvement of human behavior. Instead, it tries to be normative. It ends up as a religious text, a moral and aspirational document, setting out principles that define the ideal treatment of individuals by states and societies as seen by guilt-ridden European eyes. By any measure the behaviour of humans towards other humans has not changed very much since WWII (or as it would seem, since we became modern humans). Human conflict and violence and suffering, even adjusted for population, have not declined since WWII. They have, in fact, increased in total volume. The UN Declaration of Human Rights (UDHR) is not linked to any mechanism that enforces its values globally. Its success is often claimed in principle, but rarely demonstrated in impact. If the world is no less cruel, and probably crueler, after 75 years of pious global rights declarations, what exactly have these declarations achieved?

The UDHR, drafted in the aftermath of World War II, is widely regarded as a historic achievement in the pursuit of justice and equality. However, its origins and functions suggest that it was created primarily to serve political and strategic interests rather than to protect individuals from oppression. One of its primary functions was to rehabilitate the moral standing of Western nations after the atrocities of the 20th century. The Holocaust was – let us not forget – inflicted by Europeans mainly on Europeans. These are the same Europeans whose descendants claimed, and still claim, superior morals and values and civilization to the rest of the world today. The atrocities committed were not just considered allowable but they were also taken, at that time, to be desirable by the standards and values held by some of those same Europeans. To “eradicate the dregs of humanity” was considered the right thing to do in many countries. Coercive eugenics was considered moral by many in Europe. Genocide of such second-rate beings was considered scientifically sound in Europe. The Danes with their Greenlanders, the Swedes and Norwegians with their Sami are cases in point. The Swedish Institute of Race Biology was set up in the 20s and was both the inspiration and the collaborator for the German development of Racial Hygiene theories. This was not some fanatic view. It was part of the mainstream thinking in Europe at the time.

European colonisation was taken as proof of the superiority of the “European race”. The British, for whatever excuses they may make now, were the ones who, knowingly and by omission, allowed 3 – 4 million Indians to die in the Bengal Famine and demonstrated their conviction that native lives had a lower value. The atrocities by France and Belgium and Britain in their colonies in Asia and Africa were no great advertisement for their fine, sanctimonious words at the UN. The concept of “Untermensch” was not held only by the Germans then, and is far from extinct even today. Many Europeans today still believe the Roma are an inferior race, no matter what their laws may say. The virtue signaling of atonement for past sins, rather than any great surge of humanitarianism, was a key driver of the UN Declaration. Dark-skinned peoples are still “Untermensch” in Eastern Europe. The continued bondage of Africans in the Middle East is still slavery in all but name. (But let us not be naive. Race is real and “racism” is alive in every country in today’s Asia).

The Holocaust wasn’t some alien invasion. It was Europeans slaughtering certain other Europeans, a homegrown nightmare fueled by ideology, economic collapse, and centuries of tribal hatreds. The UDHR emerged from its ashes, drafted by an unholy coalition of victors and survivors, but its creation wasn’t pure altruism. Western nations, squirming to excuse their own complicity – which had manifested through the 1920s and 30s as wide support for national socialism, appeasement, colonial brutality, eugenics, and looking aside – needed a moral reset. Hitler had had supporters in every European country (and across the Americas). The UDHR was a way to whitewash themselves and polish their image. A way to say, “We’re the good guys now,” while distancing themselves from the evils of the Soviets and communism. It was less about protecting individuals and more about stabilizing a world order where the West could whitewash reality and claim ethical superiority. Its lofty, sanctimonious words didn’t stop the Cold War’s proxy slaughters or decolonisation’s bloodbaths.

The Holocaust, colonial exploitation, and “war crimes” committed by European powers (victors and vanquished alike) were a massive threat to their assumed moral superiority. By establishing, and being seen to espouse, a “universal” doctrine of rights, Western leaders sought to reshape their global image and provide an ideological – but entirely fictional – justification for their continued dominance. It was sanctimonious, self-righteous and patronising. It was the European elitist’s idea of a catechism for the less enlightened world to follow blindly. After 75+ years of the UDHR, could a Holocaust happen again in Europe? Of course it could. Of course it can. Looking at Kosovo, of course it did! Wherever conflict is now taking place, whether in Gaza or Ukraine or in the Yemen or the Sudan, observing the human rights of the enemy is of no great consequence in the strategic planning of either side.

The UDHR is a pious declaration rather than a legally binding treaty, which means that nations can violate its principles without facing direct consequences. It has been repeatedly violated since the day it was written by its own authors and signatories: in Algeria (by France), in Africa and Asia (by the UK), in Vietnam (by the U.S.), in Latin America, and in Iraq, Syria, China, Russia and Myanmar. Countries that routinely engage in torture, mass surveillance, political repression, and genocide frequently sign human rights agreements while simultaneously disregarding their content. Ultimately behaviour is by individuals. That a loose promise by the government of a country could bind all of its people, whom it does not necessarily represent, is pie in the sky. Claiming a universality of values which patently does not exist devalues the Declaration as delusional. The lack of enforcement renders the declaration largely symbolic, exposing the contradiction between its universal claims and its practical impotence.

The Failure of the UDHR

Despite its elevated status in international discourse, the Universal Declaration of Human Rights (UDHR) is entirely made up and has no sound philosophical foundations. It is not observed anywhere in the natural world and lacks empirical validation as a force for reducing human suffering or curbing atrocity. Much of the legislation introduced in countries under the “Human Rights” label could have been better introduced in more appropriate local forms. I question the normative power claimed for the UDHR. I can find no way to measure, and no evidence of, the reduction of suffering or the improvement of human behaviour or the reduction of man’s inhumanity to man since the 1948 declaration. The data suggest that rights discourse has had no measurable preventative effect at all. Instead, violations remain persistent, and have only increased in severity and scale. We find that events of humans doing harm to other humans have more than kept pace with the population growth. According to the UN’s own Human Rights Violations Index and data from the Office of the High Commissioner for Human Rights (OHCHR), global violations have increased in absolute terms since 1948. So the bottom line is that the incidence of suffering events has increased by about a factor of 3 since 1948. In 2024, the UN verified 41,370 grave violations against children in conflict zones (a 25% increase year-on-year), including 22,495 children killed, wounded, recruited, or denied aid (docs.un.org, theguardian.com). Though the record only goes back some 30 years, there has never been a year in which this metric has declined. The number of individual complaints lodged with the UN Human Rights Committee has reached an all‑time high, and censorship, repression, and legal harassment are more systematic than ever (universal-rights.org, ohchr.org).

Simultaneously, the human rights industry has grown unchecked. Estimates suggest over 48,000 full-time “professionals” are directly engaged globally in rights-related work, expanding at an annual rate of 5%. Including the ICC and the international courts, the annual budget is around 4 – 5 billion USD. This industry relies on crises; its survival depends on problems being perceived (real or imagined) and on the illusion of progress rather than real change. If human rights issues were truly being resolved, many of these institutions would no longer be needed. They should be working towards their own irrelevance. If human rights were improving, the industry ought to be shrinking – not growing at 5% per year. Success is measured not by any reduction of suffering or improvement of behaviour, but by how much is spent on itself and on ensuring an increased budget for the next year. With no performance-based metric by which this sector can evaluate its own effectiveness, it measures only what it spends and the number of declarations, treaties, and reports it produces. Its expansion resembles bureaucratic self-interest more than social remedy.

Philosophically, the foundation of “universal rights” has long been contested. Jeremy Bentham dismissed natural rights as “nonsense upon stilts,” rejecting their grounding outside positive law. I take the view that law is made by society, each for, and suited to, itself. It must be grounded locally. Bottom up, not top down. Universal law, as I have written about earlier, is a mirage. Alasdair MacIntyre also observed that invoking rights “is like invoking witches or unicorns”, a secular invocation of metaphysical constructs without demonstrable existence (After Virtue, 1981). Historically, human rights interventions have always failed, sometimes spectacularly, under the weight of political selectivity and cultural prejudice. Whether in Rwanda or Darfur or Syria or Myanmar or Yemen, moral posturing, rather than any conflict resolution, is the primary objective.

What value, then, does the UDHR have?

  • It does not constrain, since non-state actors and authoritarian regimes and even individuals  routinely ignore it without consequence.
  • It does not protect, and the areas where violations are worst (Sudan, Syria, Gaza, Yemen) are just those areas where the UDHR is devoid of respect and effectiveness.
  • It does not deter and there is no rational mechanism by which the UDHR can have any impact on the resorting to violence, the outbreak of war or the committing of mass atrocities (intentionally or not).
  • It is not universal; it is seen to be skewed in its values and is often rejected or ignored, whenever inconvenient, by cultural and political actors.

The function of this industry is not, it would seem, to eliminate human rights violations, nor to reduce suffering or improve human behaviour, but to create a controlled narrative that manages public perception. By providing the illusion of accountability and reform, the human rights industry serves primarily as a placebo.

To reduce suffering or to change behaviour?

There is a glaring gap between the lofty tone of the UDHR and the reality of human behavior. The declaration does not describe how rights will be enforced. It assumes that widespread recognition of rights will somehow influence behavior. It is a hope, not a mechanism. It contains no theory of human psychology or motivation. So while the spirit of the UDHR implies a desire to reduce suffering and encourage more humane behavior, it lacks both strategy and realism in achieving that.

People are led to believe that the world is moving toward justice and equality, even as human suffering, war, and exploitation continue unabated. Human behaviour changes only when humans perceive that to change is of greater benefit than not changing. The reality is that even when actions cause collateral harm, no one refrains from his (or her) chosen actions for the purpose of respecting the imaginary rights of those who may be harmed. They may refrain for fear of punishment or retaliation or because they chose to do something else, but never for the sake of respecting imaginary rights. It is the idea of being entitled to unearned privileges which is fundamentally unsound – even sick. It is, in fact, where entitlement culture and its ills begin. If human behaviour is to be addressed it can only be done locally not with futile, pious, universal declarations. Human values are local not global. The value of human life varies from local society to local society. The drivers of human action are local, not some pious, universal fiction. Changing behaviour can only begin locally – in accordance with local values and mores.

The envelope of possible human behaviour is set by our genes and probably has not changed in 50,000 years. The quantity of bad behaviour at any given time is just the rate of bad behaviour multiplied by population. The rate of bad behaviour for dense, industrialised urban environments is no doubt different to that for hunter-gatherers. But it has been fairly constant for at least the last 5,000 years since the earliest legal codes were framed to control behaviour in societies. Even the codes of Ur-Nammu (2,100 BCE) or Hammurabi (1,750 BCE) reflect societies dealing with murder, theft, cruelty, sexual misconduct, and violence. They dealt with precisely the same behaviour that modern codes try to address. Codes of law (and law enforcement arrangements) have been used for at least 5,000 years to manage existing societies, but they have not changed the fundamentals of human behaviour at all. The crime and punishment needs for the functioning of a society rarely have any impact on fundamental human behaviour. We should note that a Code of Law and legal systems are governance tools, not human reprogramming mechanisms. They do not remove the ability or the impulse to do harm. They merely deter some with punishment, redirect some through social conditioning, and repress others with institutional force. Codes of Law constrain some unwanted behaviour and help societies to function but they do not change human behaviour. They do not even try to. Human nature itself does not evolve on civilizational timeframes.

More perniciously, the UDHR has helped cultivate a culture of entitlement divorced from merit, responsibility, or behaviour. By declaring rights as universal and unearned, it has promoted the dangerous fiction that dignity, security, and privilege are birthrights requiring no reciprocal obligation. “Being born equal in rights and dignity” is so blatant a falsehood that it puts the sincerity of the document authors in doubt. This moral dilution has eroded the foundations of duty, effort, and earned respect that once underpinned functioning societies. The bases of civic behaviour (duty, responsibility, … ) have been badly undermined.

Rather than preventing oppression, the human rights framework often provides the form, the illusion, of improvement without having any substance. This psychological function of human rights discourse benefits those in power by fostering passivity and compliance. The UDHR is used to provide a perception of action, a means of sedating societies, not of reducing suffering or improving behaviour.

Conclusion

The fiction of universal human rights is maintained not because it reflects reality but because it serves political, bureaucratic, and ideological functions. The UDHR was crafted as a tool for Western moral rehabilitation after World War II, but its lack of enforcement has rendered it a symbolic gesture rather than a document for action. Human rights are invoked selectively, as a political tool rather than for achieving actual improvement. Furthermore, the human rights industry sustains itself by perpetuating crises rather than resolving them, and the narrative of inevitable progress pacifies individuals rather than inspiring real change.

Since the UDHR was framed, human behaviour has not changed one iota in consequence. Human suffering has increased largely in line with population increase, while the rate of doing harm to others has been either unaffected or made slightly worse by the declarations. Certainly the declarations have not reduced the rate of humans doing harm to humans. The bottom line is that the UDHR does not reduce suffering and it does not even address human behaviour. The UDHR, in real conditions of war, insurgency, or factional conflict, is little more than a legal fiction and a moral “comfort blanket”. It survives in courtrooms, classrooms, and NGOs, but disappears from battlefields, street protests, large crowds and assemblies, and refugee camps.

The question, then, is not whether human rights exist in any real sense (they do not), but rather, who benefits from the perpetuation of the human rights illusion? Certainly suffering is not reduced and human behaviour is unaddressed. The primary beneficiary of the human rights industry, it seems to me, is the human rights industry.

In the long run human behaviour will change only along with local societies as they develop and will reflect the imperatives of those local societies. The global picture emerges only as a consequence, as a summation of local changes. Behaviour and behavioural change cannot be imposed top down. It can only happen from the bottom up because it lies ultimately with individuals.


Is the Principle of Least Resistance the Zeroth Law of Being?

June 22, 2025

The underlying compulsion

Is thrift, parsimony, a sort of minimalism, part of the fabric of the universe?

Occam’s razor (known also as the principle of parsimony) is the principle that when presented with alternative explanations for the same phenomenon, the explanation that requires the fewest assumptions should be selected. While Occam’s razor is about how to think and describe phenomena, I am suggesting that parsimony of action, the path of least resistance, is deeply embedded in causality and in all of existence.

Why is there something rather than nothing? Why does the universe exist? The answer is all around us. Because it is easier to be than not to be. Because at some level, in some dimension, in some domain of action and for some determining parameter, there is a greater resistance or opposition to not being than to being. Why does an apple fall from a tree? Because there is, in the prevailing circumstances, more resistance to its not falling than to its falling. At one level this seems – and is – trivial. It is self-evident. It is what our common-sense tells us. It is what our reason tells us. And it is true.

It also tells us something else. If we are to investigate the root causes of any event, any happening, we must investigate the path by which it happened and ask what resistance or cost was minimised. I am, in fact, suggesting that causality requires that the path of sequential actions is – in some domain and in some dimension – a thrifty path.

A plant grows in my garden. It buds in the spring and by winter it is dead. It has no progeny to appear next year. Why, in this vast universe, did it appear only to vanish, without having any noticeable impact on any other creature, god, or atheist? Some might say it was chance, others that it was the silent hand of a larger purpose. But I suspect the answer is simpler but more fundamental. The plant grew because it was “easier”, by some definition for the universe, that it grow than that it not grow. If it had any other option, then that must have been, by some measure, more expensive, more difficult.

In our search for final explanations – why the stars shine, why matter clumps, why life breathes – we often overlook a red thread running through them all. Wherever we look, things tend to happen by the easiest possible route available to them. Rivers meander following easier paths and they always flow downhill, not uphill. Heat flows from warm to cold because flowing the other way needs effort and work (as in a refrigerator). When complexity happens it must be that in some measure, in some domain, staying simple faces more resistance than becoming complex. How else would physics become chemistry and form atoms and molecules? Why else would chemistry become biochemistry with long complex molecules? It must somehow have been easier for biology and life to come into being than not. The bottom line is that if it was easier for us not to be, then we would not be here. Even quantum particles, we are told, “explore” every possible path but interfere in such a way that the most probable path is the one of least “action”. This underlying parsimony – this preference for least resistance – might well deserve to be raised to a status older than any law of thermodynamics or relativity. It might be our first clue as to how “being” itself unfurls. But is this parsimony really a universal doctrine or just a mirage of our imperfect perception? And if so, how far does it reach?
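That quantum remark can be stated a little more precisely. In Feynman’s sum-over-paths picture, written here only schematically, the amplitude to go from one point to another adds a phase contribution for every conceivable path, weighted by its action:

\[
A(a \to b) \;=\; \sum_{\text{paths}\ q(t)} e^{\, i S[q]/\hbar},
\]

and the paths far from the stationary-action path largely cancel by interference, leaving the classical, least-action route as the dominant contribution.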

We can only elucidate with examples. And, of course, our examples are limited to just that slice of the universe that we can imperfectly perceive with all our limitations. Water finds the lowest point (where lowest means closest to the dominant gravitational object in the vicinity). Light bends when it moves from air into glass or water, following the path that takes the least time. Time itself flows because it is easier that it does than that it does not. A cat, given the choice between a patch of bare floor and a soft cushion, unfailingly selects the softer path. It may seem far-fetched, but it could be that the behaviour of the cat and the ray of light are not just related; they are constrained to be what they are. Both are obeying the same hidden directive to do what costs the least effort, to follow a path of actions presenting the least resistance, where what is minimised could be time, or energy, or discomfort, or hunger, or something else.
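
As a textbook illustration of the light example (my gloss, not part of the original argument), Fermat’s principle of least time already yields the familiar law of refraction: minimising the travel time of a ray crossing from a medium of refractive index n_1 into one of index n_2 gives Snell’s law,

n_1 \sin\theta_1 = n_2 \sin\theta_2 ,

where \theta_1 and \theta_2 are the angles of incidence and refraction. The bent path is not an anomaly; it is simply the cheapest path when the cost is reckoned in time.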

In physics, this underlying compulsion has been proposed from time to time. The Principle of Least Action states that a system’s trajectory between two points in spacetime is the one that makes stationary – typically minimises – a quantity called the “action”. Action, in this context, is a quantity that combines energy, momentum, distance, and time. Essentially, the universe tends towards the path of least resistance and least change. Newton hinted at it; Lagrange and Hamilton built it into the bones of mechanics. Feynman has a lecture on it. The principle suggests that nature tends to favour paths that are somehow “efficient” or require minimal effort, given the constraints of the system. A falling apple, a planet orbiting the Sun, a thrown stone: each follows the path which, when summed over time, minimises an abstract quantity called “action”. In a sense, nature does not just roll downhill; it picks its way to roll “most economically”, even if the actual route curves and loops under competing forces. Why should such a principle apply? Perhaps the universe has no effort to waste – however it may define “effort” – and perhaps it is required to be thrifty.
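
For the record, the standard textbook form (a sketch added here, not specific to this essay) writes the action as the time integral of the Lagrangian – the kinetic energy T minus the potential energy V – and requires the actual trajectory to make it stationary:

S[q] = \int_{t_1}^{t_2} (T - V)\, dt , \qquad \delta S = 0 .

Strictly, “stationary” rather than “minimal”: the principle demands only that small variations of the path leave the action unchanged to first order, which is the precise sense in which nature “picks its way” among routes.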

The path to life can be no exception

Generally, the path of least resistance fits with our sense of what is reasonable (heat flow, fluid flow, electric current, …), but one glaring example is counter-intuitive. The chain from simple atoms to molecules to complex molecules to living cells to consciousness seems to be one of increasing complexity and increasing difficulty of being. One might think that while water and light behave so obligingly, living things defy the common-sensical notion that simple is cheap and complex is expensive. Does a rainforest – with its exuberant tangle of vines, insects, poisons, and parasites – look like a low-cost arrangement? Isn’t life an extremely expensive way just to define and find a path to death and decay?

Living systems, after all, do locally reduce entropy; they do build up order. A cell constructs a complicated molecule, seemingly climbing uphill against the universal tendency for things to spread out and decay. But it does so at the expense of free energy in its environment. The total “cost”, when you add up the cell plus its surroundings, still moves towards a cheaper arrangement overall and is manifested as a more uniform distribution of energy, more heat deposited at the lowest temperature possible. Life is the achieving of local order paid for by a cost reckoned as global dissipation. Fine, but one might still ask why atoms should clump into molecules and molecules into a cell. Could it ever be “cheaper” than leaving them separate and loose? Shouldn’t complex order be a more costly state than simple disorder? In a purely static sense, yes. But real molecules collide, bounce, and react. Some combinations, under certain conditions, lock together because once formed they are stable, meaning it costs “more” to break them apart than to keep them together. Add some external driver – say a source of energy, or a catalytic mineral surface, or a ray of sunlight – and what might have stayed separate instead finds an easier path to forming chains, membranes, and eventually a primitive cell. Over time, any accessible path that is easier than another will inevitably be traversed.
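
In conventional thermodynamic bookkeeping (a standard textbook statement, included only as illustration), the local drop in entropy inside the cell is more than paid for by the entropy exported to its surroundings:

\Delta S_{total} = \Delta S_{cell} + \Delta S_{surroundings} \geq 0 ,

or, equivalently at constant temperature and pressure, a process runs spontaneously only if the Gibbs free energy falls, \Delta G = \Delta H - T \Delta S \leq 0. The “cheaper arrangement overall” is exactly this inequality.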

Chemistry drifts into biochemistry not by defying ease, but by riding the easiest local, available pathway. It is compulsion rather than choice. Action is triggered by the availability of the pathway and that is always local. Evolution then – by trial and error – makes the rough first arrangement into a working organism. Not a perfectly efficient or excellent organism in some cosmic sense, but always that which is good enough and the easiest achievable in that existential niche, at that time. One must not expect “least resistance” to provide a  perfection which is not being sought. A panda’s thumb is famously clumsy – but given the panda’s available ancestral parts, it was easier to improvise a thumb out of a wrist bone than to grow an entirely new digit. Nature cuts corners when it is cheaper than starting over.

Perhaps the reason why the spark of life and the twitch of consciousness evade explanation is that we have not yet found – if at all we are cognitively capable of finding – the effort that is being minimised and in which domain it exists. We don’t know what currency the universe uses or how this effort is measured. Perhaps this is a clue as to how we should do science or philosophy at the very edges of knowledge. Look for what the surroundings would see as parsimony; look for the path that was followed and what was minimised. Look for the questions to which the subject being investigated is the answer. To understand what life is, or time or space, or any of the great mysteries, we need to look for the questions to which they are the answers.

Quantum Strangeness: The Many Paths at Once

Even where physics seems most counter-intuitive, the pattern peeks through. In quantum mechanics, Richard Feynman’s path integral picture shows a particle “trying out” every possible trajectory. In the end, the most likely path is not a single shortest route but the one where constructive interference reinforces paths close to the classical least-action line. It also seems to me – and I am no quantum physicist – that a particle may similarly tunnel through a barrier, apparently ignoring the classical impossibility. Yet this too follows from the same probability wave. The path of “least resistance” here is not some forbidden motion but an amplitude that does not drop entirely to zero. What is classically impossible becomes possible at a cost which is a low but finite probability. Quantum theory does not invalidate or deny the principle. It generalizes it to allow for multiple pathways, weighting each by its cost in whatever language of probability amplitudes the universe deals in.
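
In Feynman’s formulation (again a standard textbook expression, not a result of this essay), the amplitude to travel from A to B adds up a phase contribution from every conceivable path, each weighted by its action:

Amplitude(A \to B) \; \propto \; \sum_{paths} e^{\, i S[path] / \hbar} .

Paths far from the stationary-action route largely cancel by interference; paths near it reinforce one another, which is why the classical least-action trajectory dominates, and why a barrier merely suppresses, rather than forbids, the tunnelling amplitude.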

It is tempting to try and stretch the principle to explain everything, including why there is something rather than nothing. Some cosmologists claim the universe arose from “quantum nothingness”, with positive energy in matter perfectly balanced by negative energy in gravity. On paper, the sum is zero and therefore, so it is claimed, no law was broken by conjuring a universe from an empty hat. But this is cheating. The arithmetic works only within an existing framework. After all quantum fields, spacetime, and conservation laws are all “something”. To define negative gravitational energy, you need a gravitational field and a geometry on which to write your equations. Subtracting something from itself leaves a defined absence, not true nothingness.

In considering true nothingness – the ultimate, absolute void (uav) – we must begin by asserting that removing something from itself cannot create this void. Subtracting a thing from itself creates an absence of that thing alone. Subtracting everything from itself may work, but our finite minds can never encompass everything. In any case, the least-resistance principle means that the mathematical trick of creating, from a void, a something here and a negative something there, while claiming that zero has not been violated, is false (as some have suggested with positive energy and negative gravitational energy). That is very close to chicanery. To create something from nothing demands that a path of least resistance be available compared to continuing as nothing. To conjure something from nothing needs not only a path to the something, but also a path to the not-something. Thrift must apply to the summation of these paths, otherwise the net initial zero would prevail and continue.

The absolute void, the utter absence of anything, no space, no time, no law, is incomprehensible. From here we cannot observe any path, let alone one of lower resistance, to existence. Perhaps the principle of least resistance reaches even into the absolute zero of the non-being of everything. But that is beyond human cognition to grasp.

Bottom up not top down

Does nature always find the easiest, global path? Perhaps no, if excellence is being sought. But yes, if good enough is good enough. And thrift demands that nature go no further than good enough. Perfect fits come about by elimination of the bad fits, not by a search for excellence. Local constraints can trap a system in a “good enough” state. Diamonds are a textbook example. They are not the lowest-energy form of carbon at the Earth’s surface; graphite is. Graphite has a higher entropy than diamond. But turning diamond into graphite needs an improbable, expensive chain of atomic rearrangements. So diamonds persist for eons because staying diamond is the path of least immediate, local resistance. But diamonds will have found a pathway to graphite before the death of the universe. The universe – and humans – act locally. What is global follows as a consequence of the aggregation, the integral, of the local good-enough paths.
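
The diamond example can be put in conventional kinetic terms (a standard rate expression, offered only as illustration): the conversion to graphite is thermodynamically downhill but is blocked by a large activation energy E_a, so its rate follows something like the Arrhenius form

k = A \, e^{-E_a / (k_B T)} ,

which at everyday temperatures is so vanishingly small that the globally “cheaper” state is, for all practical purposes, out of reach. The locally cheapest move is to stay diamond.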

Similarly, evolution does not look for, and does not find, the perfect creature but only the one that survives well enough. A bird might have a crooked beak or inefficient wings, but if the cost of evolving a perfect version is too high or requires impossible mutations, the imperfect design holds. A local stability, and the local expense of disturbing that stability, hides a more distant economy from sight.

Thus, the principle is best stated humbly. Nature slides into the lowest stable valley it can actually reach, not necessarily the deepest valley in the landscape.

A Zeroth Law or just a cognitive mirage

What I have tried to articulate here is an intuition. I intuit that nature, when presented with alternatives, is required to be thrifty, to not waste what it cannot spare. This applies to whatever the universe takes to be the appropriate currency – whether energy, time, entropy, or information. In every domain where humans have been able to peek behind the curtain, the same shadow of a bias shimmers. The possible happens, the costliest is avoided, and the impossible stays impossible because the resistance is infinite. In fact, the shadow even looks back at us if we pretend to observe from outside and try to lift the curtain of why the universe is. It must apply to every creation story. Because it was cheaper to create the universe than to continue with nothingness.

It may not qualify as a law. It is not a single equation but a principle of principles. It does not guarantee simplicity or beauty or excellence. Nature is perfectly happy with messy compromises provided they are good enough and the process is the cheapest available. It cannot take us meaningfully to where human cognition cannot go, but within the realm of what we perceive as being, it might well be the ground from which more specific laws sprout. Newton’s laws of motion, Einstein’s relativity, Maxwell’s equations, and even the Schrödinger equation, I postulate, are all expressions of the universe being parsimonious.

We can, at least, try to define it: any natural process in our universe proceeds along that accessible path which, given its constraints, offers the least resistance among all the paths available to it.

Is it a law governing existence? Maybe. Just as the little plant in my garden sprouted because the circumstances made it the easiest, quietest, cheapest path for the peculiar combination of seeds, soil, sunlight, and moisture that came together by chance. And in that small answer, perhaps, lies a hint for all the rest. That chance was without apparent cause. But that particular chance occurred because it was easier for the universe – not for me or the plant – that it did so than that it did not. And that is one of those things human cognition can never know.


Boundaries of Knowledge: Natural, Supernatural, and Unnatural

June 14, 2025

Our finite view of a slice of a boundless universe

Every morning, the sun “rises.” It is foundational to all life on earth. It is not just a fundamental part of our daily experience; it defines our days and our lives. Yet it is so expected, so certain, that we rarely give it a second thought. For at least as long as we have been Homo sapiens, this regular yet inexplicable event was imbued with profound mystery and attributed to divine forces or cosmic beings. The sun’s regular, predictable journey across the sky was a phenomenon whose causes could not be explained by the laws of nature known at the time.

Then came Copernicus and Newton and later Einstein and we now claim to understand the Earth’s rotation and its orbit around the sun. The “rising” of the sun every day is just a trick of perspective. We can predict it with incredible precision. It is the common belief that the sun’s daily appearance is entirely “natural” and “fully explained” by the laws of nature revealed to us by the scientific method.

But this widely held belief is wrong and overlooks a deeper truth.

Our brains are finite, and our senses, while remarkable, are but a few of the many evolved on Earth. We perceive only a narrow band of the electromagnetic spectrum, hear only certain frequencies, and are blind to the magnetic fields, sonar, or infrared vision that other creatures can detect. We have no idea of what senses we do not have. Wherever we look in time and space we see no bounds, we see no edge. This application of a finite cognition to a boundless universe is inherently limited. It means our true observations are always incomplete, partial, and imperfect perceptions. It is inevitable that there are things we know, things knowable which we do not know, and, most importantly, things we simply cannot know. (I have described the tripartite classification of knowledge elsewhere: known, unknown but knowable, and unknowable.)

This leads me to what I believe is a crucial skeleton on which to hang the flesh of reality:

  1. Everything observed or experienced is real and natural.
  2. Nothing unnatural is real and thus the unnatural can never be, or have been, observed.
  3. The supernatural (supra-natural) is that which is observed but cannot be explained by the known laws of nature. The inexplicability could be temporary or it could be permanent if the explanation lies in the region beyond human cognition.

My foundational premise is that anything truly observed exists within the fabric of our reality, and it is real and it is natural. Often people refer to the supernatural when they mean the unnatural but this is just being sloppy with language. The distinction is that the supernatural has to be first observed and then determined to be inexplicable based on the known laws of nature. The unnatural can never be observed and is always fiction (no matter how entertaining).

The enduring supernatural in knowledge (and science)

Let’s revisit the sun. While we can calculate the effects of gravity with breathtaking accuracy, we still haven’t a clue as to why gravity exists, or what it fundamentally is. We describe its behavior, but its intrinsic nature remains an enigma. The very concept of “gravity,” while allowing for precise calculations of its effects, is a placeholder for a phenomenon that we observe and measure, yet cannot explain. Therefore, gravity itself is a supernatural phenomenon.
This pattern repeats across the frontiers of modern science, showing how “scientific explanations” often only shift us to new supernatural things. The state of knowledge and knowledge seeking today reveals that the foundational assumptions and boundary conditions for all knowledge seeking – including the scientific method, reasoning, and logical discourse – are themselves supernatural.

  • The Stuff of All Matter and Quantum Waves: We describe particles and waves, their interactions, and the quantum fields from which they arise. Yet, what is the fundamental ‘stuff’ that constitutes a quantum field or a fundamental particle? Why these particular properties? Why does quantum mechanics work the way it does? This fundamental substratum of reality remains profoundly supernatural.
  • The Big Bang Singularity: As science traces the universe back to its very beginning, we arrive at the Big Bang singularity – a point where known physics breaks down. What happened before the Big Bang? What caused it? These questions extend beyond the reach of our current physical laws, pushing the Big Bang itself into the supernatural realm of observed phenomena that are currently inexplicable.
  • Black Holes: These extreme gravitational wells are predicted by Einstein’s relativity, yet their singularities represent another boundary where our laws break down. What is inside a black hole beyond our conceptual and physical ability to observe or calculate? The singularity at their heart, and indeed the event horizon’s fundamental nature, remains supernatural.
  • Dark Energy and Dark Matter: Constituting the vast majority of the universe’s mass and energy, these entities influence cosmic structure and expansion. We observe their gravitational effects, but their identity, composition, and underlying ‘why’ remain a profound mystery, pushing them firmly into the supernatural category of observed phenomena that resist explanation.
  • The Nature of Truth, Causality, Time, Space, Life, and Consciousness: These are not just scientific puzzles, but the very boundary conditions upon which all our inquiries are built. We observe and experience them directly, yet their ultimate nature and “why” remain fundamentally inexplicable, thus rendering them supernatural.

This constant shifting of explanations, where solving one mystery often reveals deeper, more fundamental ones that remain inexplicable, underscores my main thesis that as our knowledge progresses, it inevitably encounters phenomena that, while observed and real, may forever remain in the realm of the supernatural. Whenever a cosmologist or physicist invokes random events they are invoking – by definition – events without cause and such events lie outside the laws of nature. Truly random (causeless) events are always supernatural. The scientific method often uses placeholders (like “dark energy” or “Big Bang”) when it reaches these supernatural stops, in the hope that their inexplicability is merely temporary. But we can never know if an inexplicability is temporary or permanent. (When it is claimed that “we don’t know but we know it isn’t that”, sloppy language has extended to sloppy thinking).

The unobservable unnatural

In contrast to the natural and supernatural, the unnatural represents that which cannot be observed. It is the realm of fiction, of true impossibility based on the consistent rules of our observed reality. An example would be cows jumping over the moon. While we can imagine it, it fundamentally violates the known physical laws of gravitation and biology, making it unobservable in our natural world. Similarly, a true perpetual motion machine that creates energy from nothing would be unnatural because it fundamentally contradicts the laws of thermodynamics, not merely because it’s currently unexplained. Such things cannot exist or be observed. “Supernatural beings” is really sloppy language since they cannot be observed – ever – and what is meant is unnatural beings.

The enduring quest

Acknowledging these boundaries doesn’t mean we stop seeking. Quite the opposite. It fosters intellectual humility and refines our quest. We continue to unravel the complexities of the knowable natural world, pushing the frontiers of science. And in doing so, we gain a deeper appreciation for the profound supernatural mysteries that define the ultimate limits of our understanding – mysteries that, while observed and real, may forever remain beyond our full grasp. This continuous seeking is a dance between discovery and enduring enigma. It is the essence of the human condition. It lies at the core of the scientific method and of all knowledge seeking. It ensures that the universe will always hold more wonders than our finite minds can unravel, keeping our sense of awe forever alive.


Related:

The Great Mysteries: Known, Knowable, and Unknowable Foundations of Philosophy

Knowledge, Truth, and Reality: Attributes of Consciousness in an Anti-Realist Framework


What Can We Truly Know? A Practical Guide to Truth for Finite Minds

June 1, 2025

Truth feels like it should be simple: something is true if it matches reality.

But as soon as we ask how we know something is true – or whether we can know – we realize the ground shifts under our feet. We have finite minds, limited senses, and we’re trying to understand an endless universe from the inside. We do not know what senses we do not have. The only thing we can be certain of is that whatever we observe of the surrounding universe is partial and incomplete. And we do not know what we cannot know. How do we define truth from such a small vantage point?

This is an attempt to build a definition of truth that respects those limits while still giving us something reliable to live by.


Our senses have evolved on earth to detect conditions on earth and so help our journey of survival and reproduction. Our minds evolved to help us survive, not to decode the cosmos. We’re built to spot patterns, avoid danger, find food, and navigate social groups – not to unravel quantum mechanics or grasp the shape of space-time. Yet we have been so successful at survival that we have had time to consider things other than survival. We have evolved language and thinking and have earned the freedom to demonstrate our creativity. We have built tools, systems, cities, and vehicles. We have developed the sciences, philosophy, and the arts, such that we are by far the most successful species on the planet. Human cognition, too, has grown far beyond our original limits. But even with all that, our understanding is still partial, still incomplete. It always will be. Our cognitive limits are ever-present. For example, we still cannot comprehend why gravity must be, why existence is, why time flows, or why life and consciousness arise. There are things – perhaps – that we cannot know.

That means truth, for us, has to be redefined. Not as an unreachable absolute, but as something we can approach and refine, even if we never fully arrive. Consider all truth in the universe to be a giant landscape. We see only a tiny part of it. From the part we can see, our truths are what we call knowledge. That which is knowledge for us is always true (provisionally). A lie is disqualified from being knowledge. We perceive knowledge to come in three forms:

1. What We Know

These are the things we’ve tested, confirmed, and rely on – like gravity pulling objects down or the fact that ice melts above 0°C. These are our working truths. They could be revised, but they serve us well for now.

2. What We Could Know

These are truths we haven’t reached yet, but potentially could. Maybe we need better tools or smarter questions. The cure for a disease. The cause of consciousness. A deeper law of physics. These are knowable truths – just not yet known.

3. What We Can Never Know

Some truths lie forever beyond human perception or understanding. Perhaps they’re hidden by our cognitive limitations or the boundlessness of space and time. Or maybe our brains are simply incapable of grasping them – like trying to teach calculus to a dog. These are the unknowable truths – still real, just what we cannot know.

If that’s our playing field, then a more grounded way to define truth is:

Truth is what fits with what we know so far, helps us predict what happens next, and holds up when tested.

This isn’t some eternal, absolute cosmic Truth-with-a-capital-T. It’s the kind of truth we can use, refine, and build on. It works in science. It works in everyday life. And it keeps us honest. We are truth-seekers, not truth-holders. No matter how clever we get, we’ll never know everything. That’s not failure – that’s the condition of being human with a finite brain and limited senses. But we can keep trying and keep improving our aim. We can ask better questions, challenge assumptions, discard broken ideas, and refine our hypotheses and our theories. The scientific method does exactly this. So does philosophy. So do the creative arts, though their truths are strictly subjective. So does any kind of honest thinking. Not to own the truth, but to move closer to it.

Truth is a direction, not a destination. It is the seeking of truth that matters, especially since any absolute truth is beyond our cognition. We can move toward it, sometimes fast, sometimes slow, but we never quite arrive. And that’s okay. What matters isn’t reaching a final answer. What matters is that we seek.

We live in a universe full of mystery. The best we can do is stay curious, stay humble, and keep searching.

We are seekers after truth not its owners.


Knowledge, Truth, and Reality: Attributes of Consciousness in an Anti-Realist Framework

April 22, 2025

This follows on from my earlier post about knowledge.

This essay argues that knowledge, truth, and reality are attributes of consciousness, requiring a purposeful, self-aware mind to transform raw data into meaning. Countering realist and Cartesian assumptions, this post adopts an anti-realist framework which emphasizes consciousness’s role, urging epistemic humility and responsible engagement with constructed realities.


Introduction

Consider our famous tree which falls in a forest. The trivial question is whether there is a sound when there is no one to hear? But let us ask instead what is experienced by an intelligent observer who just happens to be around. This question opens up the nature of knowledge, truth, and reality, revealing their dependence on a conscious mind. I argue that these are attributes of consciousness, created when a self-aware, purposeful mind defines and interprets phenomena. Existence—the brute fact of all things being—may stand alone, like air pressure vibrations in a forest, but reality, truth, and knowledge require an observer to define specific things, such as a tree’s fall. Realists claim the universe exists and is real intrinsically, conflating existence with reality, but this begs, “Known by who?”—exposing the need for a conscious knower. Knowledge arises only when consciousness contextualizes defined phenomena, truth appears as consciousness judges their certainty, and reality takes shape as meaning is constructed, all within the mind. The grey amorphous splodge of everything which is in the universe may encompass all existence, but it defines no things; only observers carve out realities. This anti-realist perspective rejects absolute truth and philosophical objectivity, emphasizing diverse perspectives—humans understanding the sun scientifically, crows sensing it instinctively—each defining distinct realities, limited by the unknowable. Through definitions, epistemic limits, and implications, this essay explores how consciousness shapes understanding. Knowledge abides only in a consciousness which has a need to define what is known. The tree-falling analogy anchors this, showing existence to be diffuse and undefined until a mind makes it real, urging us to see knowledge, truth, and reality as products of consciousness.

Definitions

What does it mean to know, to judge true, or to call something real? These terms hinge on a crucial distinction between existence – the universe’s raw, undefined splodge – and reality, knowledge, and truth, which can only be carved out of existence by a conscious mind.

  • Existence is the brute fact of all things being—particles, waves, space, vibrations, stars, trees, winds, crows—swirling amorphously as the universe’s grey background, unnamed, undefined and needing no observer.
  • Data are discrete slices of existence, like air pressure vibrations in a forest, raw and shapeless until a mind touches them.
  • Information emerges when senses and interpreting brains select and shape data into patterns, such as sound waves rippling through an ear.
  • Knowledge is born when a conscious mind defines these patterns, naming them with certainty: “A tree fell.”
  • Cognition—perception, memory, reasoning—builds the bridge from data to information.
  • Consciousness is cognition with self-awareness, the spark that defines things and weaves knowledge.
  • Purpose is the drive, whether deliberate study or survival’s instinct, pushing a mind to define and learn.
  • Truth is a judgment, a mind declaring a defined thing certain, like “a tree fell is true,” meaningless without someone to say it.
  • Objectivity is minds agreeing, as in science’s shared truths, not a reality beyond them—else, “Intrinsic to what?”
  • Reality is meaning carved from existence, a defined thing like a forest event, not a universal fact.

This anti-realist view clarifies how knowledge, truth, and reality can only spring from a mind which contemplates and tries to define the bits and pieces of existence’s diffuse mass. The brute fact of all that is, just is and does not need to name or identify its own bits and pieces or make judgements about them. Realists conflate existence with reality, but pressure vibrations in the air do not sing until a conscious observer judges them to be a sensation called sound.

The Limits of Knowing: Known, Knowable, and Unknowable

Picture the universe as a vast, amorphous, undefined sea of existence. What can we know from it? Knowledge splits into three realms: the known, the knowable, and the unknowable. The known holds what we’ve defined—gravity’s pull, a tree’s fall—crafted by observation. The knowable waits to be defined, like distant stars or hidden creatures, reachable with better tools or sharper minds. The unknowable is existence undefined—quantum flickers, the universe’s deep nature—forever beyond our grasp. This divide shows knowledge and truth need a mind to carve specific things from existence’s splodge. Realists proclaim a universe real in itself, but “Known by who?, Real to who?” Defining the sun reveals this: humans name it a star, blazing with fusion; crows sense a warm light, guiding flight. Each reality is partial, missing existence’s undefined depths, like quantum secrets. The unknowable allows no mind to be able to capture all, shattering realism’s dream of one true reality. Knowledge lives in what we define, shaped by consciousness, not floating in existence. A tree’s vibrations are just there until an observer calls them a sound or a fall, crafting a reality. This anti-realist lens, seeing reality as it is defined, not as a given, leads us to explore how consciousness transforms bits of existence into knowledge.

From Data to Knowledge: The Conscious Process

Consider again our tree, crashing in the forest. What does an intelligent observer experience? Vibrations ripple through the air—existence’s brute fact, undefined and silent. These are data, raw scraps of the universe’s meaningless, lonely splodge. The eye perceives nothing but an ear catches them, cognition spins them into information—sound waves with rhythm and pitch. Then consciousness, purposeful and self-aware, defines them: “A cracking sound”, “A tree fell.” This is knowledge, born when a mind carves a specific thing from existence. Realists insist the fall is real in itself, but that cannot be. “What is a tree?, What is air? Known by who?” Vibrations aren’t a tree’s fall until defined—else, “Intrinsic to what?” A human observer might name it a forest event, mapping its cause; a crow, hearing danger, defines it as a threat. Each reality springs from defining selected bits and pieces of existence, both enlightened and limited by senses and constrained by the unknowable, like the molecular dance triggered by the tree which fell. What the human selects of the data available and what the crow selects are different. Knowledge isn’t in the universe’s raw being but in a mind’s act of definition. Animals or AI might process information, but only a conscious mind, driven by purpose—curiosity or survival—defines knowledge as humans do. No book or computer ever contained knowledge. A crow’s instinct doesn’t name the fall; AI’s outputs don’t reflect knowledge. Only consciousness, shaping existence into defined things, creates meaning, setting the stage for judgments of truth value.

Knowledge and Truth: A Mind-Dependent Relationship

What makes a belief knowledge, and what makes it true? Observe that belief – no matter how enhanced (justified, true, etc.) – can never achieve a truth value of 1. That requires it no longer be a belief. Knowledge is a belief held with a subjective confidence, defined and justified, like “The sun rises” seen daily. Truth is the mind’s judgment that a defined thing aligns with reality—but reality itself is carved from existence by consciousness. To call “a tree fell” true, an observer hears vibrations (existence), defines them as sound, and judges the event’s certainty. Realists claim truth lives in the universe, saying “the sun is real” or “gravity is true.” But “sun” or “gravity” are defined things, needing a mind—“Intrinsic to what?” Consciousness can deal with partial truths and almost certainties. Claiming “existence is true” is a tautology; existence just is, undefined. Humans define the sun as a star, fusing atoms; crows, as a light, guiding paths. Both truths are real, yet partial, blind to existence’s undefined depths, like quantum waves. “Known by who?” Truth applies to things that a mind names, not existence’s splodge. Truth falters, too: geocentrism once reigned, toppled by heliocentrism’s evidence. This shows consciousness, purposeful and fluid, redefining truths as knowledge shifts. Anti-realism sees truth as subjective, sometimes shared through science’s agreed definitions, but never absolute. Existence’s undefined vastness limits all truths—no mind defines it all. Knowledge and truth, born from defining bits of existence, are consciousness’s craft, driven by purpose, as we’ll see next.

Purpose in the Generation of Knowledge

Why do we know? Purpose lights the spark. Whether chasing curiosity or surviving danger, purpose drives a mind to define existence’s grey splodge. Picture our tree’s fall: an observer, keen to understand, hears vibrations and defines them as “a tree fell,” forging knowledge and truth. Without purpose, existence stays undefined. Realists claim gravity’s pull is knowledge itself, but “Known by who?” Gravity is another  indistinguishable part of existence until a mind defines it as a force or as the curvature of spacetime. Saying “existence is real” is empty—existence doesn’t define things. Purpose shapes what we carve: humans define a forest to study its life; crows, a fall as danger to flee. Each knowledge, each reality, is a slice of existence, limited by the undefinable, like unseen molecules. A book holds data, but only a purposeful reader defines its words as knowledge. Crows sense light, but without human-like purpose, they don’t define it as a star. AI crunches numbers, lacking the self-aware drive to name things. Realist intrinsic reality crumbles—“Intrinsic to what?”—as existence needs a mind to become real. Purpose makes knowledge, truth, and reality conscious acts, defining the universe’s raw being, a theme echoed in how perspectives shape reality.

Perspectives on Reality: The Role of Perception

Is reality one, or many? It depends on the mind defining it. The sun burns in existence’s splodge, undefined. Humans, through science, give it a boundary, define it as a star, fusing hydrogen; crows, through instinct, see a light, guiding their flight. Each carves a reality—knowledge and truth—from existence, yet each misses the undefinable, like quantum flickers. Realists insist the sun is real in itself, but “Intrinsic to what?” The sun isn’t a “star” without a mind to first carve it out of existence and name it—“Known by who?” The sound of our tree’s fall is just air pressure vibrations until defined: by humans as a forest event, by crows as danger. These realities, though valid, are partial, shaped by perception’s lens and existence’s hidden depths. The universe holds the splodge of existence but defines no things; minds do that. Even science’s objectivity is minds agreeing on defined truths, not a truth beyond them. But a subjective untruth even if shared 8 billion times remains a subjective untruth. Realist claims of a real universe blur existence with reality, ignoring that things need defining. No perspective holds all—humans, crows, or others—because the undefinable bits of existence will always escape us. Some existence is unknowable. Reality is consciousness’s craft, a mosaic of defined things, not a universal slab. This anti-realist view, seeing reality as what we define, faces challenges we’ll tackle next.

Counterarguments: Where Does Knowledge Reside?

Could knowledge live outside a mind—in the universe, nature, books, or AI? Realists say yes, claiming gravity’s law is knowledge, real in itself. But gravity is existence’s hum, undefined until a mind calls it a force or spacetime—“Known by who?” Saying “existence is real” is a tautology, blurring brute fact with defined reality—“Intrinsic to what?” Descartes’ Cogito, ergo sum stumbles here, its loop (I exist, so I exist) assuming a self, like realism’s assumed reality, defining nothing. Trees grow, crows fly by light, but their “knowledge” is instinct, not defined belief. Crows sense the sun but don’t name it a star, lacking human purpose. Books store words, yet only a reader defines their meaning. AI processes data, programmed but not purposeful, outputting results, not knowledge. These claims mistake existence or information for knowledge, ignoring the mind’s role in defining things. Science’s truths, though shared, are minds defining existence, not existence defining itself. Our tree’s vibrations are existence’s pulse, undefined until an observer names them a sound or a fall. Realists conflate existence’s being with reality’s meaning, but only consciousness, purposefully carving things from the universe’s splodge, creates knowledge, truth, and reality, as we’ll reflect on next.

Implications and Reflections

What happens if knowledge, truth, and reality are consciousness’s creations? We must tread humbly. Truths shift—geocentrism gave way to heliocentrism—as minds redefine the bits and pieces of existence. Undefined existence, the unknowable, looms beyond, like quantum shadows, reminding us no truth is final. Realists’ intrinsic reality—“Intrinsic to what?”—ignores this, conflating existence’s splodge with defined things. Humans define ecosystems, crows dangers, each reality a fragment, urging care in the truths we craft. Descartes’ Cogito’s tautology, looping on existence, fades beside this view of reality as defined, not given. Anti-realism sparks curiosity, urging us to define the knowable while bowing to the undefinable. Science’s shared truths are precious, yet human, not universal. For non-specialists, this reveals knowledge as our act of naming existence—trees, stars, laws—not a cosmic gift. Philosophically, it dances with idealism and constructivism, spurning realism’s blend of existence and reality. Existence may hum unheard, but without a mind to define it, it is silent. This calls us to question, redefine, and own the realities we shape, as we’ll now conclude.

Conclusion

Our tree falls, vibrations pulsing in existence’s grey splodge. Is it real? Only if a mind defines it. Knowledge, truth, and reality are consciousness’s gifts, carved from the universe’s raw being. An observer names vibrations a forest event, crafting reality; crows sense danger, defining another. Realists call the universe real, blending existence with meaning—“Known by who?” Existence just is; things, however, need to be first imagined and then defined by a mind. Humans weave scientific truths, crows instinctual ones, each partial, constrained by undefinable existence. Purpose fuels this, setting conscious minds apart. Truths evolve—fallible, human—rejecting absolute reality. Saying “existence is real” or leaning on Descartes’ Cogito’s loop dodges the truth: only defined things are real or true. The universe holds existence, not things, until we name them. This anti-realist view demands the humility imposed by the unknowable—our truths are ours—and imposes responsibility, as defined realities shape our world. We can study and explore what we can define, and question what we cannot. Consciousness is our tool to extract meaning and comprehension from the grey cosmic background of existence and to assess the quality – truth, reality – of the knowledge we have created.


The Fallacy of Universalism / 2

April 16, 2025
This is the second in the essay series which began with

The Skeptical Case Against Natural Law / 1


The Fallacy of Universalism

The 20th century’s obsession with universalism – the notion that humanity can be bound by shared values, laws, or moral standards – was a profound misstep, rooted in shaky philosophical foundations and doomed by practical realities. From the Universal Declaration of Human Rights (UDHR) in 1948 to global institutions like the United Nations, World Trade Organization, and International Criminal Court (ICC), universalism promised a unified moral order to transcend cultural and national divides. Yet this pursuit was not just misguided; it was built on false premises that ignored the inherent diversity of humans and their societies. Far from fostering harmony, universalism sought to suppress the biological and social variety that ensures humanity’s resilience and vitality. Driven partly by European guilt after World War II and cloaked in virtue-signaling, it misunderstood human nature and curbed the freedoms it claimed to champion. This post argues that universalism lacks any coherent philosophical grounding – relying on fictions like Natural Law – and fails practically by imposing unworkable frameworks that stifle diversity’s strength. Societies thrive when free to forge their own values, provided they do no harm to others, rendering universalism both unnecessary and counterproductive.

Shaky Foundations

Universalism’s most glaring flaw is its lack of a sound philosophical basis. Proponents often invoke Natural Law – the idea that universal moral truths are inherent in human nature or discoverable through reason – as a cornerstone. This concept, tracing back to thinkers like Aquinas and Locke, assumes a shared essence that dictates right and wrong across all societies. Yet Natural Law is a fiction, a construct that crumbles under scrutiny. As argued in my earlier post, it presupposes a uniformity of human values that history and anthropology disprove. If moral truths were truly universal, why do societies differ so starkly on fundamental questions – life, justice, freedom? The Aztec practice of human sacrifice was as rational to them as modern human rights are to the West; both reflect context, not eternal truths. Natural Law’s claim to universality ignores that reason itself is shaped by culture, environment, and survival needs, yielding no singular moral code.

The contradiction is evident in universalism’s own failures. If values like “do not kill” were innate, as Natural Law suggests, atrocities like the Rwandan genocide or the Holocaust would not have mobilized thousands of perpetrators acting with conviction. That thousands of Islamic fundamentalists believe that killing infidels is the right and proper thing to do makes a mockery of ideas of universal morality. Universalist institutions like the ICC assert that crimes such as genocide “shock the conscience of humanity,” implying a shared moral compass. Yet the very occurrence of these acts – often justified as cultural or political imperatives – exposes the absence of such a compass. The most heinous, inhuman acts in the world – as judged by some – are committed by other humans who hold quite different values. Values are not universal; they are contingent, forged in the crucible of specific societies. To claim otherwise is to project one’s own biases as truth, a philosophical sleight-of-hand that Natural Law enables but cannot sustain.

Other philosophical defenses of universalism fare no better. Kant’s categorical imperative – act only according to maxims you would have as universal law – assumes a rational consensus that doesn’t exist. Societies prioritize different ends: Japan values collective harmony, while the US exalts individual liberty. Neither can universalize its maxim without negating the other. Human rights, another universalist pillar, rest on the same shaky ground. The UDHR’s assertion of inalienable rights – life, equality – sounds noble but lacks grounding in any objective reality. Rights are not discovered; they are invented, reflecting the priorities of their creators (post-war Western elites). When Saudi Arabia or China rejects aspects of the UDHR, they’re not defying reason but asserting their own rational frameworks. Universalism’s philosophical poverty lies in its refusal to admit this pluralism, insisting instead on a unity that suppresses the diversity of human thought.

Over the past three centuries, universalism has masked control as moral duty. Colonial powers invoked civilization to plunder India and Africa, erasing diverse traditions under a universalist banner. The ICC’s African focus continues this, imposing Western justice while sparing Western crimes, proving universalism’s selectivity. Such interventions violate the principle of ‘do no harm,’ curbing societies’ freedom to differ unless they tangibly harm others.

This suppression is not just academic – it’s a curb on freedom. Diversity in values allows societies to experiment, adapt, and thrive in unique ways. Bhutan’s Gross National Happiness metric defies Western materialism yet fosters stability. Indigenous Australian kinship laws prioritize community over individualism, sustaining cultures for millennia. Forcing these societies to align with a universal standard – whether Natural Law or human rights – erases their agency, imposing conformity under the guise of morality. Philosophically, universalism fails because it denies the reality of human variation, mistaking difference for defect.

Why Universalism

The 20th-century love affair with universalism was more emotional than philosophical, driven by European guilt after World War II. The Holocaust, colonial atrocities, and global wars left Europe’s moral credibility in tatters. Once-proud imperial powers faced a reckoning, with their Enlightenment ideals exposed as hollow, by gas chambers, induced famines and bombed cities. The UDHR, drafted under UN auspices, was less a global consensus than a European attempt to reclaim moral ground. Its language – steeped in Western liberalism – framed rights as universal truths, ignoring dissenting voices from post-colonial or non-Western states. Ratification was pushed as necessary evidence of a country being part of the new civilised world order. Countries like India or Saudi Arabia ratified it with caveats, revealing the myth of unity. This virtue-signaling extended to institutions like the UN and ICC, which promised a new world order while sidestepping Europe’s complicity in creating the old one.

Universalism’s roots lie in ancient dreams of unity – Stoic cosmopolitanism, Christian salvation – but these were aspirational, not coercive. The Enlightenment and colonial eras turned universalism into a tool of control, with Natural Law as a flimsy excuse. But these fictions fail to bridge the diversity of human values.

This guilt-driven push was not about understanding humanity but about control by retaking the moral high ground. By proclaiming universal values, Europe (and later the West) sought to redefine the global moral landscape in its image. The ICC’s focus on African states – over 80% of its cases – while sparing Western actions in Iraq or Afghanistan, echoes colonial “civilizing” missions. Universalism became a tool to judge and intervene, not to unite. Its philosophical weakness – lacking a basis beyond Western dogma – made it ripe for such misuse, cloaking power in moral rhetoric.

Universalism is unworkable

Beyond its philosophical flaws, universalism fails practically by imposing frameworks that ignore the diversity of human societies. The complexity of aligning multiple nations under one standard grows exponentially with each participant, as vetoes and competing interests stall progress. The UN Security Council exemplifies this: a single veto from the US, France, the UK, Russia or China can paralyze action, as seen in Syria’s decade-long crisis. The WTO’s Doha Round, launched in 2001, remains deadlocked after 24 years, with 164 members unable to reconcile their priorities. The ICC’s record is equally dismal – 10 convictions in over two decades, none involving major powers like the US or India, who opt out entirely. These failures stem from a simple truth: the more diverse the players, the harder it is to find, let alone enforce, a universal rule.

Contrast this with bilateral agreements, which are exponentially simpler. A nation negotiates with one partner at a time, tailoring terms to mutual benefit without navigating a global gauntlet. Since the 1990s, bilateral trade deals have surged – over 300 globally by 2025 – while multilateral talks languish. The USMCA replaced NAFTA precisely because three nations could align faster than 34 under earlier pan-American proposals. Even security pacts, like India-Japan defense agreements, thrive on bilateral trust, not universal ideals. The math is clear: each of “N” countries manages only “N−1” bilateral relationships, whereas a multilateral arena must reconcile all N(N−1)/2 pairwise interests at once, and the number of possible coalitions among the parties grows exponentially. Like Rome’s Pax Romana, modern universalism falters when imposed, breeding resistance, not unity. Bilateral cooperation, rooted in mutual respect, proves more viable.
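
A hypothetical worked example (the arithmetic is mine, using the 164 WTO members mentioned above): each member maintains 164 − 1 = 163 bilateral relationships, but a single multilateral forum must reconcile 164 × 163 / 2 = 13,366 pairwise interests simultaneously, and the number of possible coalitions of two or more members is 2^164 − 165, roughly 2 × 10^49. The asymmetry between negotiating one-to-one and negotiating all-at-once only grows with every new participant.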

Universalism’s practical flaw is its denial of sovereignty. Societies function best when free to set their own rules, as long as they do no harm to others. Iceland’s secular egalitarianism and Saudi Arabia’s religious conservatism coexist peacefully because neither imposes its values across borders. When harm occurs—say, overfishing causing dwindling fish stocks, bilateral and/or multilateral cooperation among the parties involved can address it far better than by demanding ideological conformity. Universalist institutions, by contrast, breed resentment by judging internal practices. The UN’s human rights sanctions on Iran or the ICC’s warrants against African leaders provoke defiance, not compliance, as societies reject external moralizing.

The Strength of Difference

Individuals being different is humanity’s greatest asset, biologically and socially. Genetically, variation ensures survival (of the species though not of the unfit individual), allowing species adaptation to environmental shifts – a too narrow genetic spread would go extinct. Socially, this diversity manifests in the myriad ways societies organize themselves. The Maasai’s nomadic communalism sustains them in arid lands, while Singapore’s meritocratic discipline drives its prosperity. These systems, often at odds with universalist ideals, prove that cohesion requires no global standard. The “do no harm” principle respects this, allowing societies to be “unusual” so long as they avoid cross-border damage. When Japan’s whaling sparks debate, the issue is ecological impact, not moral offense. This approach fosters peace through mutual restraint, not forced unity.

Universalism’s attempt to erase the “we/them” dichotomy is both futile and destructive. Group identity – cultural, national – fuels cohesion and innovation. The “brotherhood of man” sounds noble but ignores that brotherhood privileges some over others. To eliminate “we/them” is to strip societies of their freedom to differ, demanding a homogeneity that negates diversity’s strength. The backlash – rising nationalism, skepticism of global bodies – reflects a reclaiming of this freedom.

Conclusion: Beyond Universalism

The 20th-century chase for universalism was a flawed response to a troubled era, rooted in European guilt and philosophical fiction. Natural Law and its offspring – human rights, global justice – lack grounding in the reality of human diversity. Practically, universalism’s complex frameworks collapse under the weight of competing sovereignties, while bilateral solutions prove nimbler and more respectful of difference. Societies thrive when free to forge their own paths, bound only by the duty to do no harm. Humanity’s strength lies not in sameness but in variation – genetic, cultural, ideological. By embracing this, we can foster a world of cooperation without conformity, where diversity, not universalism, ensures our resilience and freedom.

In order of increasing difficulty in organising any field of activity, national is simpler than bilateral, which is, in turn, simpler than multilateral and international. It seems the world was bitten by the international bug during the 20th century, but it has now realised it went too far and is gingerly drawing back, because international bodies have largely proven ineffective, bureaucratic, or politically manipulated.


The Great Mysteries: Known, Knowable, and Unknowable Foundations of Philosophy

April 16, 2025

The Great Mysteries: Known, Knowable, and Unknowable Foundations of Philosophy

Humanity’s pursuit of understanding is shaped by enduring questions – the Great Mysteries of existence, time, space, causality, life, consciousness, matter, energy, fields, infinity, purpose, nothingness, and free will. These enigmas, debated from ancient myths to modern laboratories, persist because of the inescapable limits of our cognition and perception. Our brains, with their finite 86 billion neurons, grapple with a universe of unfathomable complexity. Our senses – sight, hearing, touch – perceive only a sliver of reality, blind to ultraviolet light, infrasound, or phenomena beyond our evolutionary design. We cannot know what senses we lack, what dimensions or forces remain invisible to our biology. The universe, spanning an observable 93 billion light-years and 13.8 billion years, appears boundless, hiding truths beyond our reach. Together, these constraints – finite brain, limited senses, unknown missing senses, and an apparently boundless universe – render the unknowable a fundamental fact, not a mere obstacle but a cornerstone of philosophical inquiry.

Knowing itself is subjective, an attribute of consciousness, not a separate mystery. To know – the sky is blue, 2+2=4 – requires a conscious mind to perceive, interpret, and understand. How we know we know is contentious, as reflection on knowledge (am I certain?) loops back to consciousness’s mystery, fraught with doubt and debate. This ties knowing to the unknowable: if consciousness limits what and how we know, some truths remain beyond us. Philosophy’s task is to acknowledge this, setting initial and boundary conditions – assumptions – for endeavors like science or ethics. The unknowable is the philosophy of philosophy, preventing us from chasing mirages or clutching at straws. The mysteries intertwine – existence needs time’s flow, space grounds physical being, causality falters at its first cause, consciousness shapes knowing – luring us with connections that reveal little. We classify knowledge as known (grasped), knowable (graspable), and unknowable (ungraspable), rooted in consciousness’s limits. Ignoring this, philosophers and physicists pursue futile absolutes, misled by the mysteries’ web. This essay explores these enigmas, their links, and the necessity of grounding philosophy in the unknowable.

I. The Tripartite Classification of Knowledge

Knowledge, an expression of consciousness, divides into known, knowable, and unknowable, a framework that reveals the Great Mysteries’ nature. The known includes verified truths – facts like gravity’s pull or DNA’s structure – established through observation and reason. These are humanity’s achievements, from Euclid’s axioms to quantum theory. The knowable encompasses questions within potential reach, given new tools or paradigms. The origin of life or dark energy’s nature may yield to inquiry, though they challenge us now. The unknowable marks where our finite nature – biological, sensory, existential – sets impassable limits.

The unknowable stems from our constraints. Our brains struggle with infinite regress or absolute absence, bound by their finite capacity. Our senses capture visible light, not gamma rays; audible sound, not cosmic vibrations. We lack senses for extra dimensions or unseen forces, ignorant of what we miss. The universe, vast and expanding, hides realms beyond our cosmic horizon or before the Big Bang’s earliest moments (~10^-43 seconds). This reality – finite cognition, limited perception, unknown sensory gaps, boundless cosmos – makes it inevitable that some truths are inaccessible to us. We are embedded in time, space, and existence, unable to view them externally. Philosophy’s task is to recognize these limits, setting assumptions that ground endeavors. Ignoring the unknowable risks mirages – false promises of answers where none exist – leaving us clutching at straws instead of building knowledge.
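
For completeness, the Planck scales cited here and later in this essay are not arbitrary round numbers but the standard combinations of the fundamental constants ħ, G, and c:

\[ t_P = \sqrt{\hbar G / c^5} \approx 5.4 \times 10^{-44}\ \mathrm{s}, \qquad \ell_P = \sqrt{\hbar G / c^3} \approx 1.6 \times 10^{-35}\ \mathrm{m} \]

Below these scales our present physical theories simply give out; the numbers mark the edge of the knowable, not a property of the unknowable itself.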

II. The Great Mysteries: A Catalog of the Unknowable

The Great Mysteries resist resolution, their unknowability shaping the assumptions we must make. Below, I outline each, situating them in the tripartite framework, then explore their interconnected web, which lures yet confounds us.

Existence: Why Is There Something Rather Than Nothing?

Existence’s origin, from Leibniz to Heidegger, remains a foundational enigma. The known includes observable reality – stars, particles, laws – but why anything exists is unclear. Reason tells us that existence must be because it is compelled to be so, but what those compulsions might be defies our comprehension. There must have been some prior condition which made it “easier” for there to be existence than not. The knowable might include quantum fluctuations sparking the Big Bang, yet these assume causality and time. The unknowable is the ultimate “why,” demanding a perspective outside existence, impossible for us. Metaphysicians chasing a final cause risk mirages, assuming an answer lies within reach, when philosophy must set existence as an unprovable starting point.

Time: What Is Its True Nature?

Time governs not only life but the existence of anything at all. Yet its essence eludes us. We observe some of its effects – clocks, seasons – and the knowable includes relativity’s spacetime or quantum time’s emergence. But is time linear, cyclic, or illusory? Its subjective “flow” defies capture. To know time, we would need to transcend it – something no temporal being can do. Ancient eternal gods and block-time models falter, pursuing clarity where philosophy must assume time’s presence, not its essence. The unidirectional arrow of time just is – a brute fact that permits no further penetration.

Space: What Is Its Fundamental Reality?

Space, reality’s stage, seems familiar but confounds. We know its measures – distances, volumes – and the knowable includes curved spacetime or extra dimensions. But what space is – substance, relation, emergent – remains unknowable. Why three dimensions, enabling physical existence (stars, bodies), not two or four? We cannot exit space to see its nature, and Planck-scale probes (~10^-35 meters) elude us. Cosmologies from Aristotle to multiverses assume space’s knowability, risking straw-clutching when philosophy must posit space as a given.

Causality: Does Every Effect Have a Cause?

Causality drives science, yet its scope is unproven. We know cause-effect patterns – stones fall, reactions occur – and the knowable might clarify quantum indeterminacy. But is causality universal or constructed? The first cause – what sparked existence – remains sidestepped, with science starting a little after the Big Bang and philosophy offering untestable gods or regresses. To know causality’s reach, we’d need to observe all events, which is impossible. Thinkers like Hume assume its solvability, ignoring that philosophy must treat causality as an assumption, not a truth.

Life: What Sparks Its Emergence?

Life’s mechanisms – DNA, evolution – are known, and abiogenesis may be knowable via synthetic biology. We search for where the spark of life may have first struck but we don’t know what the spark consists of. Why matter becomes “alive,” or life’s purpose, is unknowable. And as long as we don’t know, those who wish to can speculate about souls. Animists saw spirits, biologists study chemistry, yet both chase a threshold beyond perception. Assuming life’s knowability risks mirages; philosophy grounds biology by positing life as an empirical phenomenon, not explaining its essence.

Consciousness: Why Do We Experience?

Consciousness, where knowing resides, is our core mystery. We know neural correlates; the knowable includes mapping them. But why processes yield experience – the hard problem – is unknowable, as consciousness cannot access others’ qualia or exit itself. How we know we know – certainty, doubt – is contentious, from Plato’s beliefs to Gettier’s challenges, tying knowing’s subjectivity to consciousness’s limits. Seeking universal theories risks mirages; philosophy assumes consciousness as given.

Matter, Energy, Fields: What Are They Fundamentally?

Matter, energy, and fields are known via models – atoms, quanta, waves. Every model rests on initial and boundary conditions which cannot themselves be addressed. The knowable includes quantum gravity. But their essence – what they are – may be unknowable. What is the stuff of the fundamental particles? Are fields real or fictions? Atomists to string theorists chase answers, but Planck-scale realities defy us. Assuming a final ontology risks mirages; philosophy sets these as frameworks, not truths.

Infinity: Can We Grasp the Boundless?

Infinity, the uncountable, defies intuition. It is a placeholder for the incomprehensible. We know mathematical infinities (Cantor’s sets) and use them; the knowable might clarify physical infinity (space’s extent). But infinity’s reality or role is unknowable – our finite minds falter at boundlessness, and paradoxes (Zeno’s) persist. Mathematicians seeking proofs assume too much; philosophy posits infinity as a tool, not a fact.
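
As a small worked illustration of how mathematics uses infinity without claiming to grasp it, Zeno’s dichotomy can be written as a geometric series whose infinitely many terms nevertheless sum to a finite whole:

\[ \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \dots = \sum_{n=1}^{\infty} \frac{1}{2^n} = 1 \]

The arithmetic is tame and usable – that is the known and the knowable – but whether an actual infinity of steps can ever be completed in reality remains precisely the kind of question a finite mind cannot settle.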

Purpose: Does Existence Have Meaning?

Purpose shapes ethics and religion, yet is unproven. We know human meanings (values); the knowable might include evolutionary drives. But cosmic purpose – existence’s “for” – is unknowable, needing intent we cannot access. Existentialists and theologians project meaning, risking straws; philosophy assumes purpose as human, not universal. What compelled the Big Bang, or the existence of the universe? Was that some deeper Law of Nature? A Law of the Super-Nature?

Nothingness: What Is Absolute Absence?

Nothingness probes “nothing.” We know quantum vacuums fluctuate; the knowable might explore pre-Big Bang states. But true nothingness – absence of all – is unknowable, as we exist in “something.” To have something, the framework of existence must first be present; and if that something is then removed, do we reach nothingness, or are we merely left with the empty space of existence? With numbers we cannot derive zero except by subtracting one from one. But without something, how do we even conceptualise nothing? Can nothingness only be defined by first having something? Parmenides and physicists assume answers, but philosophy must posit somethingness as our starting point.
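
The arithmetical point can be put more formally. In the standard Peano axioms zero is not derived at all; it is simply posited, and every other number is then built from it by succession:

\[ 0 \in \mathbb{N}, \qquad n \in \mathbb{N} \;\Rightarrow\; S(n) \in \mathbb{N} \]

Even arithmetic, in other words, begins by assuming its “something”; it does not conjure the numbers out of a true nothingness.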

Free Will: Are We Truly Free?

Free will grounds morality, yet is unclear. We know brain processes; the knowable includes mapping agency. But freedom versus determinism is unknowable – we cannot isolate uncaused acts or escape causality. Augustine to Dennett chase clarity, but philosophy assumes will as a practical condition, not a truth.

Perplexing Connections: A Web of Mirages

The mysteries intertwine, with knowing – as consciousness’s attribute – weaving through their links, luring us toward insight yet leading nowhere. Existence and time are inseparable – being requires change, which in turn needs time to flow. But what is time, and what does it flow through? Physical existence demands three-dimensional space – real things (quarks, trees) occupy it, unlike abstractions – yet why three dimensions, not two or four, baffles us. Causality binds these, an empirical fact – events follow causes – but the first cause, existence’s spark, is dodged, leaving a void.

  • Existence and Time: Existence implies dynamism; a timeless “something” feels unreal. Heraclitus tied being to flux, physics links time to entropy. But why existence exists loops to when it began, and time’s flow loops to existence’s cause. Our finite brains grasp sequences, not sources; senses see motion, not time’s essence; the boundless universe hides time’s start, if any. Philosophers like Kant (time as intuition) chase answers, but the link reveals only our limits, demanding we assume both as givens.
  • Space and Existence: Physical things require 3D space – a stone needs place, a star volume. Two dimensions lack depth for matter, four defy perception (a 4D “shadow” needs unimaginable light). Why 3D? Our embeddedness in space blocks an external view, senses miss other dimensions, and the cosmos may conceal alternatives. Descartes (space as extension) assumes knowability, but philosophy must posit 3D space as a condition, not explain it.
  • Causality’s Role: Causality stitches existence, time, space—events unfold in spacetime, caused by priors. Yet, the first cause – what began it? – is sidestepped. Science can only go back to a little after the Big Bang, philosophy offers gods or regresses, neither testable. Our observations halt at Planck scales, logic breaks at uncaused causes. Russell (“universe just is”) assumes closure, but causality’s origin remains an assumption, not a truth. Referring to a brute fact is the sure sign of having reached the unknowable.
  • Consciousness and Knowing: Knowing is consciousness’s act – perceiving, understanding, reflecting. How we know we know – certainty’s test – is debated, as consciousness doubts itself (Gettier, skeptics). This links all mysteries: existence’s why, time’s flow, space’s form depend on conscious knowing, subjective and limited, making their truths elusive.

These connections form a circular web – knowing needs consciousness, existence needs time, time needs space, space needs causality, causality needs existence – each leaning on others without a base we can reach. They tantalize, suggesting unity, but lead to mirages, as our finite minds cannot break the loop, our senses see only 3D, temporal projections, and the universe hides broader contexts. Ignoring this, thinkers pursue the web’s threads, clutching at straws when philosophy’s role is to set boundaries, not chase illusions.

III. The Futility of Overreaching

The Great Mysteries, interwoven, persist as unknowable, yet many refuse to see this. Philosophers debate existence or space’s nature, assuming logic captures them, blind to unprovable foundations. Neuroscientists claim consciousness will yield to scans, ignoring qualia’s gap. Physicists seek a Theory of Everything, presuming space, causality, matter have final forms, despite unreachable scales. The mysteries’ web fuels this folly—links like existence-time or causality-space suggest a solvable puzzle. But chasing these leads to mirages, as circularity traps us—time explains existence, space grounds causality, none stand alone.

This stems from assuming all is knowable. Science’s successes—vaccines, satellites—imply every question yields. Yet, the unknowable is philosophy’s guardrail. Without it, endeavors falter, like metaphysicians seeking existence’s cause or physicists probing causality’s origin, grasping at straws. Ancient skeptics like Pyrrho saw uncertainty’s value, grounding thought in limits, while modern thinkers often reject it, misled by the web’s false promise.

IV. Grounding Philosophy in the Unknowable

Acknowledging the unknowable is philosophy’s practical task, setting assumptions for science, ethics, art. It prevents chasing mirages, ensuring endeavors rest on firm ground:

  • Science: Assume space’s 3D frame, time’s flow, causality’s patterns, pursuing testable models (spacetime’s curve, life’s origin), not essences (space’s being, first causes).
  • Philosophy: Posit consciousness, free will as conditions for ethics, not truths to prove, avoiding loops to existence or causality.
  • Culture: Embrace mysteries in art, myth, as ancients did, using their web – time’s flow, space’s stage –  to inspire, not solve.

For example, DNA’s structure (known) and abiogenesis (knowable) advance biology, while life’s purpose is assumed, not chased. Space’s measures aid cosmology, its 3D necessity a starting point, not an answer.

V. Conclusion

The Great Mysteries – existence, time, space, causality, life, consciousness, matter, energy, fields, infinity, purpose, nothingness, free will – endure because our finite brains, limited senses, unknown missing senses, and boundless universe make the unknowable a fact. Their web – existence flowing with time, space enabling reality, causality faltering at its origin – lures but leads to mirages, circular and unresolvable. Ignoring this, philosophers and physicists chase straws, misled by false clarity. The unknowable is philosophy’s foundation, setting assumptions that ground endeavors. By embracing it, we avoid futile quests, building on the known and knowable while marveling at the mysteries’ depth, our place within their vast, unanswerable weave.


Related:

Knowledge is not finite and some of it is unknowable

https://www.forbes.com/sites/startswithabang/2016/01/17/physicists-must-accept-that-some-things-are-unknowable/#6d2c5834ae1a

https://ktwop.com/2018/08/21/when-the-waves-of-determinism-break-against-the-rocks-of-the-unknowable/

https://ktwop.com/2017/10/17/the-liar-paradox-can-be-resolved-by-the-unknowable/

Physics cannot deal with nothingness


The Skeptical Case Against Natural Law / 1

March 19, 2025

For many years I have struggled to find the words to express my instinctive feelings against attempts to apply “universal” principles across all humans – attempts which suppress human individuality. I have often tried – usually without much success – to explain my dislike for concepts such as universal morality, Natural Law, universal rights, unearned rights as entitlements, and entitlements independent of behaviour. I am coming to the conclusion that my objections to, and dislike of, these concepts are essentially philosophical. Explanations of my objections need, I think, to be couched in philosophical terms.

I try to address these (again) in this series of essays.


Natural Law is often presented as a foundational principle governing human morality, law, and rights, claiming to be a universal standard of justice inherent in human nature. However, a closer examination reveals that Natural Law is not an empirical reality but a constructed ideological tool. It emerges only when different societies with distinct laws interact, and its purpose has historically been to justify the imposition of one society’s norms over another. The absence of empirical evidence for Natural Law, combined with its theological underpinnings and political motivations, renders it an unconvincing framework for understanding human morality and governance. Instead, morality is best understood as an emergent property of individual human values, varying across cultures, historical periods, and personal experiences. Here I try to explore the philosophical, historical, and empirical reasons why Natural Law fails as a legitimate concept and why morality must be recognized as subjective rather than universal.


The Absence of Empirical Evidence for Natural Law

If Natural Law were a genuine feature of human existence, we would expect to observe universal moral principles across all societies and cultures. However, anthropological and historical research reveals no such universality. While there are commonalities in human behavior – such as cooperation and conflict resolution – these vary significantly in their expression. For example, concepts of justice, property, and individual rights differ widely between societies. The idea that certain rights are inherent to all human beings is not supported by empirical observation but rather by ideological assertions.

Human history is filled with examples of societies that have organized themselves around vastly different moral and legal systems. From the caste-based hierarchy of ancient India to the communal property arrangements of indigenous tribes, moral codes are deeply context-dependent. Even within the same society, moral norms evolve over time, reflecting changes in economic conditions, technological advancements, and cultural shifts. This variability directly contradicts the claim that a singular, natural moral order governs human behavior.

The lack of empirical confirmation for Natural Law relegates it to the realm of metaphysical speculation. If Natural Law cannot be observed or tested, then it is indistinguishable from theological doctrine. It becomes a belief system rather than a demonstrable reality, making it no different from religious faith. This reliance on unprovable assertions undermines its credibility as a foundation for legal or moral theory.

Natural Law as a Tool of Domination

Natural Law does not emerge in isolated societies but only when different societies with conflicting rules interact. Historically, it has been invoked to justify the imposition of one society’s rules over another, often under the guise of a higher moral authority. Colonialism, religious expansion, and political domination have frequently relied on claims of Natural Law to legitimize conquest and control.

For instance, European colonial powers used the rhetoric of Natural Law to justify the subjugation of indigenous populations. They framed their legal and moral systems as “civilized” and based on universal principles, while dismissing native customs as inferior or unnatural. This ideological framework provided moral cover for coercion, exploitation, and cultural erasure. Of course, religious institutions across the world have been quick to confer the halo of Natural Law on their own dogma, and have often used Natural Law arguments to enforce moral conformity, punishing deviations from dogmatic norms under the pretense of upholding their universal truths.

Natural Law’s historical role as an instrument of domination raises serious ethical concerns. If its primary function has been to serve the interests of those in power, then its legitimacy as a moral guide is highly suspect. Rather than being an impartial standard of justice, it appears to be a rhetorical device used to consolidate control over others.

The Fallacy of Universal Morality

The assumption that a universal morality exists contradicts the reality of human individuality. Every human being is genetically unique, behaves in distinct ways, and forms personal values based on their own experiences. Given this diversity, it is absurd to claim that a single moral code applies equally to all people. What is “good” for one person may be harmful or undesirable for another. What is “good” for me here and now is certain to be “bad” for someone among the other 8 billion people alive.

The idea of universal morality is, at best, an abstraction with no real-world grounding. At worst, it is an imbecilic construct used to justify coercion. The imposition of a supposedly universal moral order disregards the fact that morality is fundamentally a product of individual cognition. Each person’s moral framework emerges from their subjective values, which they use to navigate life’s complexities. The attempt to enforce a single moral standard on diverse populations is not only impractical but also a form of ideological tyranny.

Furthermore, moral codes are often shaped by historical circumstances rather than any intrinsic natural order. Concepts of justice, equality, and rights have changed dramatically over time, reflecting societal needs rather than adherence to some eternal truth. Slavery was once considered morally acceptable in many civilizations, and its eventual abolition was not the result of a discovery of Natural Law but of shifting economic and political forces. The same can be said for religious freedoms or freedom of expression and numerous other moral issues. This historical fluidity further undermines the idea that moral principles are fixed or inherent.

The Political Function of Universal Morality

If morality is not universal but instead emerges from subjective values, why does the myth of Natural Law persist? The answer lies in its political utility. The concept of a universal moral order provides a convenient justification for those in power to enforce their will on others. By claiming that certain moral rules are “self-evident” or “natural,” political and religious leaders can sidestep debate and impose their norms without question.

Universal morality is, in effect, a political construct. It serves as a tool for suppressing dissent and legitimizing authority. Governments, religious institutions, and international bodies all invoke the language of universal morality to assert control over populations. For example, international human rights laws claim to be based on fundamental moral principles, yet they often reflect the political interests of dominant nations. The selective enforcement of these laws—where powerful countries violate them with impunity while weaker nations face harsh penalties—reveals their true function as mechanisms of control rather than genuine moral imperatives.

By recognizing morality as inherently subjective, we expose the coercive nature of universal moral claims. A society that acknowledges the diversity of moral perspectives is better equipped to foster genuine dialogue and coexistence. Instead of imposing artificial moral absolutes, ethical and legal systems should be constructed with an understanding of human individuality and the necessity of negotiated social agreements.

Conclusion

Natural Law fails as a legitimate concept because it lacks empirical evidence, serves as a tool of domination, and falsely assumes a universal morality that does not exist. The historical and political record demonstrates that claims to Natural Law have been used primarily to justify coercion and control, rather than to uncover any genuine moral truth. Morality, rather than being an objective reality, emerges from individual values and experiences. Recognizing this subjectivity allows for a more honest and flexible approach to ethical and legal systems, one that respects human diversity rather than imposing ideological uniformity.

By rejecting Natural Law, we free ourselves from the illusion of universal morality and open the door to a more nuanced understanding of ethics—one that acknowledges the complexities of human existence rather than imposing rigid, arbitrary norms. The path to justice and social harmony lies not in fabricated moral absolutes but in the recognition of individual agency and the negotiated agreements that allow diverse societies to coexist.

Natural Law is, in fact, nothing more than a political invention for use as a tool for oppression.


Science ultimately needs magic to build upon

January 3, 2025

The purpose of the scientific method is to generate knowledge. “Science” describes both the application of the method and the knowledge gained. The knowledge generated is always subjective, and the process builds upon fundamental assumptions which make up the boundary conditions for the scientific method. These assumptions can neither be explained nor proved.


I find it useful to take knowledge as coming in 3 parts.

  1. known: This encompasses everything that we currently understand and can explain through observation, experimentation, and established theories. This is the realm of established scientific knowledge, historical facts, and everyday experiences.
  2. unknown but knowable: This is the domain of scientific inquiry. It includes phenomena that we don’t currently understand but that we believe can be investigated and explained through scientific methods. This is where scientific research operates, pushing the boundaries of our knowledge through observation, experimentation, and the development of new theories.
  3. unknown and unknowable: This is the realm that I associate with metaphysics, religion and theology. It encompasses questions about ultimate origins, the meaning of existence, the nature of consciousness, and other metaphysical questions that may not be amenable to scientific investigation.

Philosophy then plays the crucial role of exploring the boundaries between these domains, challenging the assumptions, and developing new ways of thinking about knowledge and reality.

I like this categorization of knowledge because

  • it provides a clear framework for distinguishing between different types of questions and approaches to understanding.
  • it acknowledges the limits of scientific inquiry and recognizes that there may be questions that science cannot answer, and
  • it allows for the coexistence of science, philosophy, religion, and other ways of knowing, each addressing different types of questions.

To claim any knowledge about the unknown or the unknowable leads inevitably to self-contradiction. This is why the often-used form “I don’t know what, but I know it isn’t that” is always self-contradictory. It implies a constraint on the unknown, which is a contradiction in terms. If something is truly unknown, we surely cannot even say what it is not.

Given that the human brain is finite and that we cannot observe any bounds to our universe – in space or in time – it follows that there must be areas beyond the comprehension of human cognition. We invent labels to represent the “unknowable” (boundless, endless, infinite, timeless, supernatural, magic, countless, …). These labels are attempts to conceptualize what is inherently beyond our conceptualization. They serve as placeholders for our lack of understanding. But it is the human condition that, having confirmed there are things we cannot know, we proceed anyway to try to define what we cannot. We are pattern-seeking beings who strive to make sense of the world around us. Even when faced with the limits of our understanding, we try to create mental models, however inadequate they may be.

Human cognitive capability is limited not only by the brain’s physical size but also by the senses available to us. We know about some of the senses we lack (e.g., the ability to detect magnetic fields like some birds or to perceive ultraviolet light directly like some insects), but we cannot know what we don’t know. We cannot even conceive of what other senses we might be missing. These are the “unknown unknowns,” and they represent a fundamental limit to our understanding of reality. Even when we use instruments to detect parameters we cannot sense directly, their outputs must be interpreted through the senses we do have. We convert X-rays into images in the visible spectrum, or we represent radio waves as audible sounds. This conversion necessarily involves interpretation and introduces subjectivity. We also know that the signals generated by an animal’s eye probably could not be understood by a human brain; the brain’s software is tuned for the senses that brain has access to. The inherent limitations of human perception make the subjective nature of our experience of reality unavoidable. The objectivity of all human observations is thus a mirage. Empiricism is necessarily subjective.

Scientific inquiry remains the most powerful tool humans have developed for understanding the world around us. With sophisticated instruments to extend our limited senses and by using conceptual tools such as mathematics and logic and reason we gain insights into aspects of reality that would otherwise be inaccessible to us. Never mind that logic and reason are not understood in themselves. But our experience of reality is always filtered through the lens of our limited and species-specific senses. We cannot therefore eliminate the inherent subjectivity of our observations and the limitations of our understanding. We cannot know what we cannot know.

I do not need to invoke gods when I say that “magic” exists, when I define “magic” as those things beyond human comprehension. This definition avoids superpower connotations and focuses on the limits of our current knowledge. In this sense, “magic” is a placeholder for the unknown. I observe that the process of science requires fundamental assumptions which are the boundary conditions within which science functions. These assumptions include:

  • Existence of an External Reality: Science assumes that there is an objective reality independent of our minds.
  • Existence of Matter, Energy, Space, and Time: These are the fundamental constituents of the physical universe as we understand it.
  • Causality: Science assumes that events have causes and that these causes can be investigated.
  • Uniformity of Natural Laws: Science assumes that the laws of nature are the same everywhere in the universe and throughout time.
  • The possibility of Observation and Measurement: Science depends on the assumption that we can observe and measure aspects of reality.
  • Life and Consciousness: The biological and medical sciences observe and accept these but cannot explain them.

Science operates within a framework given by these fundamental assumptions which cannot be explained. These incomprehensibilities are the “magic” that science builds upon. Science can address them obliquely but cannot question them directly without creating contradictions. If we were to question the existence of an external reality, for example, the entire scientific enterprise would become meaningless. Science can investigate their consequences and refine our understanding of what they are not, but cannot directly prove or disprove them. These assumptions are – at least currently – beyond human comprehension and explanation. Science builds upon this “magic” but cannot explain the “magic”.

Magic is often ridiculed because it is perceived as invoking beings with supernatural powers, which in turn is taken to mean the intentional violation of some of the laws of nature. The core issue lies in the definitions of “magic” and “supernatural.” I take supernatural to be “that which is beyond the laws of nature as we know them.” But we tend to dismiss the supernatural rather too glibly. If something is beyond comprehension, it means we cannot bring that event or happening within the laws of nature as we know them. And that must then allow the possibility that it is due to the “supernatural”. If we do not know what compels existence or causality, then we cannot exclude a supernatural cause (outside the laws of nature as we know them) either. In fact the Big Bang theory and even quantum probabilities each need such “outside the laws of nature” elements. A black hole is supernatural. Singularities in black holes and the Big Bang represent points where our current understanding of physics breaks down. The laws of general relativity, which describe gravity, become undefined at singularities. In this sense, they are “beyond” our current laws of nature. A singularity where the laws of nature do not apply is “supernatural”. Dark energy and dark matter are essentially fudge factors and lie outside the laws of nature as we know them. We infer their existence from their gravitational effects on visible matter and the expansion of the universe, but we have not directly detected them. Collapsing quantum wave functions, which operate outside space and time, are just as fantastical as Superman. All these represent holes in our understanding of the universe’s composition and dynamics. That understanding may or may not come in the future. And thus, in the now, they are supernatural.

Supernatural today may not be supernatural tomorrow. It is the old story of one person’s technology being another’s magic. Magic is always beyond the laws of nature as we know them. But what is magic today may remain magic tomorrow. We cannot set qualifications on what we do not know. What we do not know may or may not violate the known laws of nature. While we have a very successful theory of gravity (general relativity) that accurately predicts the motion of planets, we don’t fully understand the fundamental nature of gravity. We don’t know how it is mediated. In this sense, there is still an element of “magic” or mystery surrounding gravity. We can describe how it works, but not ultimately why. The bottom line is that we still do not know why the earth orbits the sun. We cannot guarantee that everything currently unexplained will eventually be explained by science. There might be phenomena that remain permanently beyond our comprehension, or there might be aspects of reality that are fundamentally inaccessible to scientific investigation. By definition, we cannot fully understand or categorize what we do not know. Trying to impose strict boundaries on the unknown is inherently problematic. We cannot assume that everything we currently don’t understand will necessarily conform to the laws of nature as we currently understand them. New discoveries might require us to revise or even abandon some of our current laws.

The pursuit of scientific knowledge is a journey into the unknown, and we will encounter phenomena that challenge our existing understanding. But we cannot question the foundational assumptions of science without invalidating the inquiry.

Science depends upon – and builds upon – magic.