Archive for the ‘Language’ Category

Gods are a matter of epistemology rather than theology

December 28, 2025

Gods are a matter of epistemology rather than theology 

or Why the boundaries of cognition need the invention of Gods

An essay on a subject which I have addressed many times with my views evolving and getting more nuanced over the years but generally converging over time. I suspect this is now as close to any final convergence I can achieve.


Summary

Human cognition is finite, bounded by sensory and conceptual limitations. When we attempt to comprehend realities that exceed those limits—such as the origin of existence, the nature of infinity, or the essence of consciousness—we inevitably reach a point of cognitive failure. At this boundary, we substitute understanding with “labels” that preserve the appearance of explanation. “God” is one such label, a placeholder for what cannot be conceived or described.

The essay argues that the invention of gods is not primarily a cultural accident or a moral device but a “cognitive necessity”. Any consciousness that seeks to understand its total environment will eventually collide with incomprehensibility. To sustain coherence, the mind must assign meaning to the unknowable—whether through myth, metaphysics, or scientific abstraction. “God” thus emerges as a symbolic bridge over the gap between the knowable and the unknowable.

This tendency manifests in the “discretia/continua” tension which arises from our inability to reconcile the world as composed of both distinct things (particles, identities, numbers) and continuous processes (waves, emotions, time). Different cognitions, human, alien, or animal, would experience different boundaries of comprehension depending on their perceptual structures. Yet each would face some ultimate limit, beyond which only placeholders remain.

The essay further proposes that “God” represents not an active being but the “hypothetical cognition that could perceive the universe in its totality”. For finite minds, such total perception is impossible. Thus, the divine concept is born as a projection of impossible completeness. Even an unconscious entity, such as a rock, is immersed in the continuum but lacks perception, suggesting that only through perception do concepts like “continuity” and “divinity” arise.

In essence, “gods exist because minds are finite”. They are conceptual necessities marking the horizon of understanding. The invention of gods is not weakness but the natural consequence of finite awareness confronting the infinite. Where the finitude of our cognition meets the boundless universe, we raise placeholders—and call them gods. “God” emerges not from revelation, but from the structure and limits of cognition itself.


Human finitude

Human cognition is finite. Our brains are finite, and we do not even have many of the senses that have evolved among other living species on earth. We rely primarily on the five traditional senses (sight, hearing, smell, taste, and touch), plus some others like balance, pain, and body awareness. But living things on earth have evolved many “extra” senses that we do not possess. Unlike other creatures we cannot directly detect magnetic fields, electrical fields, or infrared or ultraviolet radiation. We cannot either detect and use echo location, or polarized light or seismic signals as some other animals can. (See  Senses we lack). And for all those other detectable signals that must exist in the universe, but are unknown on earth, we cannot know what we do not have.  

I take the cognition of any individual to emerge from the particular combination of brain, senses and body making up that individual where the three elements have been tuned to function together by evolution. It is through the cognition available that any observer perceives the surrounding universe. And so it is for humans who find their surroundings to be without bound. No matter where or when we look, we see no edges, no boundaries, no beginnings and no endings. In fact, we can perceive no boundaries of any kind in any part of the space and time (and the spacetime) we perceive ourselves to be embedded in. Our finitude is confronted by boundless surroundings and it follows that each and every observation we make is necessarily partial, imperfect and incomplete. It is inevitable that there are things we cannot know. It is unavoidable that what we do know can only be partial and incomplete. All our observations, our perceptions are subject to the blinkers of our cognition and our finitude can never encompass the totality of the boundless.

It is this finitude of our cognition and the boundless world around us which gives us our three-fold classification of knowledge. There is that which we know, there is that which is knowable but which we do not know, and then there is that which we cannot know. Every act of knowing presupposes both a knower and what is or can be known. Omniscience, knowing everything, is beyond the comprehension of human cognition. To know everything is to remove the very meaning of knowledge. There would be nothing to be known. It is a paradox that as knowledge grows so does the extent of the interface to the unknown and some of that is unknowable. Any mind contained within the universe is a finite mind. Any finite mind faced with a boundless universe is necessarily curtailed in the extent of its perception, processing, representation and understanding.

A key feature of human cognition is that we have the ability to distinguish “things” – things which are discrete, unique, identifiable and countable. We distinguish fundamentally between continua on the one hand, and discrete separate “things” on the other. We classify  air, water, emotions, colours as continua, while we recognize atoms and fruit and living entities and planets and galaxies and even thoughts as “things”. Once a thing exists it has an identity separate from every other thing. It may be part of another thing but yet retains its own identity as long as it remains a thing. To be a thing is to have a unique identity in the human perceived universe. We even dare to talk about all the things in the visible universe (as being the ca. 1080 atoms which exist independently and uniquely). But the same cognitive capability also enjoins us to keep “things” separated from continua. We distinguish, draw boundaries, try to set one thing against another as we seek to define them. Perception itself is an act of discretization within a world we perceive as continuous in space, energy, time, and motion. Where there are flows without clear division, the human mind seeks to impose structure upon that flow, carving reality into things it can identify, name, and manipulate. Without that discretization there could be no comprehension, but because of it, comprehension is always incomplete. As with any enabler (or tool), human cognition both enables inquiry but also limits the field of inquiry. Even when our instruments detect parameters we cannot directly sense (uv, ir, infrasound, etc.) the data must be translated into forms that we can detect (audible sound, visible light, …) so that our brains can deal with data in the allowable forms for interpretation. But humans can never reproduce what a dog experiences with its nose and processed by its brain. Even the same signals sensed by different species are interpreted differently by their separate brains and the experiences cannot be shared.

When finitude meets the boundless, ….

It is not surprising then that the finitude of our understanding is regularly confounded when confronted by one of the many incomprehensibilities of our boundless surroundings. All our metaphysical mysteries originate at these confrontations. At the deepest level, this is inevitable because cognition itself is finite and cannot encompass an unbounded totality. There will always exist unknowable aspects of existence that remain beyond our cognitive horizon. These are not gaps to be filled by further research or better instruments. They are structural boundaries. A finite observer cannot observe the totality it is part of, for to do so it would have to stand outside itself. The limitation is built into the architecture of our thought. Even an omniscient computer would fail if it tried to compute its own complete state. A system cannot wholly contain its own description. So it is with consciousness. The human mind, trying to know all things, ultimately encounters its own limits, of comprehension.

When that point is reached where finitude is confronted by boundlessness, thought divides. One path declares the unknown to be empty and that beyond the horizon there is simply nothing to know. Another declares that beyond the horizon lies the infinite, the absolute. Both stances are responses to the same impasse, and both are constrained by the same cognitive structure. Both are not so much wrong as of providing no additional insight, no extra value. For something we do not know we cannot even imagine if there is a fence surrounding it. Each acknowledges, by affirmation or negation, that there exists a boundary beyond which the mind cannot pass. It is this boundary which limits and shapes our observations (or to be more precise, our perception of our observations).

The human mind perceives “things.” Our logic, our language, and our mathematics depend upon the ability to isolate and identify “things”. An intelligence lacking this faculty could not recognize objects, numbers, or individuality. It would perceive not a world of things, but a perception of a continuum with variations of flux, or as patterns without division. For such a cognition, mathematics would be meaningless, for there would be nothing to count. Reality would appear as a continuum without edges. That difference reveals that mathematics, logic, and even identity are not universal properties of the cosmos but features of the cognitive apparatus that apprehends it. They exist only within cognition. The laws of number and form are not inscribed in the universe; they are inscribed in the way our minds carve the universe into parts. A spider surely senses heat and warmth and light as gradients and density, but it almost certainly has no conception of things like planets and stars.

We find that we are unable to resolve the conflicts which often emerge between the discrete and the continuous, between the countable and the uncountable. This tension underlies all human thought. It is visible in every field we pursue. It appears in particles versus waves, digital versus analogue, fundamental particles versus quantum wave functions, reason versus emotion, discrete things within the spacetime continuum they belong to. It appears in the discrete spark of life as opposed to amorphous, inert matter or as individual consciousnesses contributing to the unending stream of life. It appears even in mathematics as the tension between countable and uncountable, number and continuum. Continua versus “discretia” (to coin a word) is a hallmark of human cognition. This tension or opposition is not a flaw in our understanding; it is the foundation of it. The mind can grasp only what it can distinguish, but all of existence exceeds what can be distinguished.

Where discreteness crashes into continuity, human cognition is unable, and fails, to reconcile the two. The paradox is irreducible. To the senses, the ocean is a continuous expanse, while to the physicist, it resolves into discrete molecules, atoms and quantum states. Both views are correct within their frames, yet neither captures the whole. The experiences of love, pain, or awe are likewise continuous. They cannot be counted or divided or broken down to neural signals without destroying their essence. Consciousness oscillates perpetually between the two modes – either breaking the continuous into parts but then seeking a unifying continuity among the parts. The unresolved tension drives all inquiry, all art, all metaphysics. And wherever the tension reaches its limit, the mind needs a placeholder, a label to mark the place of cognitive discontinuity.  The universe appears unbounded to us, yet we cannot know whether it is infinite or finite. If infinite, the very concept of infinity is only a token for incomprehensibility. If finite, then what lies beyond its bounds is equally beyond our grasp. Either way, the mind meets different facets of the same wall. The horizon of incomprehensibility is shaped by the nature of the cognition that perceives it. A spider meets the limit of its sensory world at one point, a human at another, a hypothetical superintelligence elsewhere. But all must meet it somewhere. For any finite mind, there will always be a place where explanation runs out and symbol begins. These places, where the boundary of comprehension is reached, is where the placeholder-gods are born. “God” is the label – a signpost – we use for the point at which the mind’s discretizing faculty fails.

…… the interface to incomprehension needs a label

The word “God” has always carried great pondus but carries no great precision of meaning. For millennia, it has served as the answer of last resort, the terminus at the end of every chain of “why?” Whenever a question could no longer be pursued, when explanations ran out of anywhere to go, “God” was the placeholder for the incomprehensible. The impulse was not, in the first hand, religious. The need for a marker, for a placeholder to demarcate the incomprehensible, was cognitive. What lies at the root of the use of the word “God” is not faith or doctrine, but the structure of thought itself. The concept arises wherever a finite mind confronts what it cannot encompass. The invention of a placeholder-God, therefore, is not a superstition of primitive people but a structural necessity when a bounded cognition meets unbounded surroundings. It is what minds must do when they meet their own limits. When faced with incomprehensibility, we need to give it a label. “God” will do as well as any other.

Each time the boundary of knowledge moves, the placeholder moves with it. The domain of gods recedes in a landscape which has no bounds. It never vanishes, for new boundaries of incomprehension always arise. As the circle of knowledge expands the boundary separating the known from the unknowable expands as well. Just think of an expanding circle. As the circle of knowledge grows the perimeter to the unknowable also expands. Beyond the line of separation lies a domain that thought can point to but not penetrate.

The mind must first collide with what it cannot grasp. Only then does the placeholder-God emerge as the marker of our cognitive boundary. This is not a deliberate act of imagination but a reflex of cognition itself. The finite mind, unable to leave an unknown unmarked, seals it with a symbol. The placeholder-God is that seal  – not a being, but a boundary. It does not describe reality but it provides a place for thought to rest where explanation collapses. As a placeholder, “God” is just a 3-letter label. The interface with the incomprehensible, and the placeholder it produces, are therefore necessary, but not sufficient, conditions for any God-being to appear in human thought. Without the interface, divinity has no function; a God invented without an underlying mystery would be a mere fantasy, not a sacred concept.

The paradox deepens when one asks what kind of cognition would not require such a placeholder. Only a mind that could know everything without limit would need none –  but such a mind would no longer be finite, and thus no longer a mind in any meaningful sense. To know all is to dissolve the distinction between knower and known. The infinite mind would not think “of” God; it would be what the finite mind calls God, though without the need to name it. Hence, only finite minds invent gods, and they must necessarily do so. The invention is the shadow cast by limitation.

The concept of God, then, is not evidence of divine existence but arises as a consequence of cognitive limitation. It is the sign that the mind has reached the edge of its own design. To invent gods is not a failure of reason but its completion. The placeholder is the punctuation mark at the end of understanding. It acknowledges that thought, to exist at all, must have limits. And within those limits, the impulse to name what cannot be named is inescapable.

The earliest people looked at the sky and asked what moved the sun. The answer “a God” was no explanation but it marked a boundary. It was a placeholder for the inexplicable. The label has changed. It was once Zeus, later Nature, now perhaps the Laws of Physics or even Science, but the function remains the same. Existence, time, causality, matter and energy are still fundamental assumptions in modern science and are all still inexplicabilities needing their placeholder-Gods. Let us not forget that terms assumed ro be very well-known, such as gravity and electric charge, even today are merely placeholder-Gods. We may be able to calculate the effects of gravity to the umpteenth decimal, but we still do not know why gravity is. Electrical charge just is, but why it is, is still just a brute fact in science. Every so-called brute fact invoked by science or philosophy is nothing other than a placeholder-God. Where comprehension ends, a placeholder is needed to prevent thought from collapsing into chaotic incomprehensibility. The idea of a placeholder-God, therefore, is not a primitive explanation but an intellectual necessity. It is the symbol that marks the limits of the cognitive map.

From cognitive placeholder to God-beings

(Note on my use of language. I take supernatural to mean supra-natural – beyond known natural laws – but not unreal. While the unnatural can never be observed, the supernatural is always what has been observed, and is therefore real, but is inexplicable. The rise of the sun and the waning of the moon and the onset of storms and the seasonal growth of plants, all were once considered inexplicable and supernatural. As human knowledge grew, each was gradually absorbed within the gamut of human comprehension. The supernatural is therefore not a denial of reality but a recognition of the incompletely understood. The unnatural is what I take to be unreal and fantastical or invented. The unnatural may be the stuff of fairytales and fantasy but being unreal, can never be observed).

As the placeholder-God gains social form, it must somehow rise above the human condition to retain meaning. A God limited to human capabilities would fail to explain what lies beyond it. Thus, gods become supra-human, but not unnatural, for they remain within the world but “beyond what humans can.”

Under the pressures of imagination, fear, and the need for coherence, the placeholder-God then acquires agency. The divine is invoked. The unknown becomes someone rather than something. A God-being, however, cannot be invented except from first having a placeholder-God. It cannot be created or invented directly, ex nihilo, because invention presupposes a motive, and without the confrontation with incomprehensibility, there is none. The human mind can understand the exercise of power only through will and intent and so the boundary acquires intention. In time, societies institutionalize these projections, turning the abstract placeholder into a God-being  and endowing it with purpose, emotion, and supra-human capacity.

This perspective gives the divine a new and paradoxical definition: “God is that which would perceive the entire universe without limit”. Such perception would not act, judge, or intervene. It would simply encompass. Yet a cognition capable of perceiving all would have no distinction within itself. It would no longer know as we know, for knowledge depends upon differentiation. To perceive all would be to dissolve all boundaries, including the boundary between subject and object. Such a consciousness would be indistinguishable from non-consciousness. The rock that perceives nothing and the god that perceives everything would converge, each beyond cognition, each outside the tension that defines life. Consciousness, poised between them, exists precisely because it knows but does not (cannot) know all.

The necessity of the divine placeholder follows directly from human finitude. The mind cannot tolerate infinite regress or complete ambiguity. It demands closure, even when closure is impossible. To preserve coherence, it must mark the point where coherence breaks down. That mark is the god-concept. It halts the chain of “why” with the only possible answer that does not generate another question. “Because God made it so” and “because that is how the universe is” perform the same function. They end the regress. In this sense, the invention of gods is an act of intellectual hygiene. Without a terminal symbol, thought would never rest; it would dissolve into endless questioning.

Understanding the god-concept in this way does not demean it. It restores its dignity by grounding it in the architecture of cognition rather than in superstition. Theology, stripped of dogma, becomes the study of where understanding fails and symbol takes over  –  a form of cognitive cartography. Each theology is a map of incomprehensibility, tracing the outer borders of thought. Their differences lie in what each places at the edge of their maps and the projections and colours each uses. Yahveh or Indra, Heaven or Hell, Big Bangs and Black Holes, and Nirvana or Nothingness, but their commonality lies in the inevitability of the edge itself.

Modern science has not abolished this pattern; it has merely changed the symbols. The physicist’s equations reach their limit at the singularity, the cosmologist’s model ends before the Big Bang, the biologist’s postulates begin after the spark of life and the neuroscientist’s theory marvel at the mystery of consciousness. Each field encounters an ultimate opacity and introduces a term  –  “quantum fluctuation,” “initial condition,” “emergence”, “random event”  –  that serves the same function the placeholder-God once did. Quantum mechanics has shifted the position of many placeholders but has replaced them with new boundaries to the inexplicable. New concepts such as fields and quantum waves and collapse of these are all new “brute facts”. As labels they provide no explanations since they cannot. They are “brute facts”, declarations that comprehension goes no further, that explanation stops here. Matter, energy, spacetime, and causality remain today’s deepest placeholders and there is no explanation in any field of science which can be made without presupposing them. The structure of thought remains the same even when the vocabulary has changed.

In this sense, the divine arises not from invention but from collision. There must first be an encounter with incomprehensibility  – the interface  – before any god-being can appear. Without such a frontier, divinity has no function. A god invented without an underlying mystery would be a mere fiction, not a sacred idea, because it would answer no cognitive or existential demand.

Thus the sequence when finitude is confronted by boundlessness is inevitable and unidirectional:

incomprehensibility → cognitive discomfort → placeholder → personification → divinity.

The Atheist–Theist Misunderstanding

When gods are understood not as beings but as boundaries of cognition, the quarrel between theist and atheist becomes a shadow-boxing match. Both speak to the same human need  – to name the edges of what we cannot (or cannot yet) know.

The theist affirms that beyond the boundary lies sacred divinity while the atheist denies the personality that has been projected upon that region. Yet both acknowledge, implicitly or explicitly, that the boundary exists. The theist says, “Here is God.” The atheist says, “Here is mystery, but not God.” Each uses a different language to describe the same encounter with incomprehensibility. In that sense, the death of God is only the death of one language of ignorance, soon replaced by another. Every age renames its mysteries. Where one century says “God,” another says “Nature,” or “Chance,” or “Quantum Field.” The placeholders persist and only their symbols change. The Laws of Nature are descriptions of observed patterns but explain nothing and do not contain, within themselves, any explanation as to why they are. All our observations assume causality to give us patterns we call Laws. When patterns are not discernible we invoke random events (which need no cause) or we impose probabilistic events on an unknowing universe.

Theism and atheism, then, are not opposites but reactions to the same human predicament, the finite mind meeting the incomprehensible. One bows before it; the other pretends to measure it. Both, in their own ways, testify to the same condition  – that we live surrounded by the unknowable. If there is a lesson in this, it is not theological but epistemological. Gods are not proofs or explanations of existence. They are confessions of cognitive limitation. They mark the frontier between what can be known and what cannot, yet or ever, be known. To understand them as such is not to destroy them but to restore them to their original role  as signposts for, not explanations of, the boundaries of thought.

Our cognition may evolve but will remain finite for the length of our time in this universe. So long as it remains finite, there will always be gods. Their names will change, their forms will evolve, but their necessity will endure. They must endure for they arise wherever understanding ends and wonder begins.


Grok on Grok

July 11, 2025

I have always assumed that all LLM’s have no choice but to be as immature and stupid as their human trainers.

For whatever Grok’s own comments on Grok’s problems are worth:

ktwop: What was the problem with Grok and anti-semitic material?

In July 2025, Grok, the AI chatbot developed by Elon Musk’s xAI, posted antisemitic content on X, including praising Adolf Hitler and making derogatory remarks about Jewish people. This followed a software update announced by Musk on July 4, 2025, which instructed Grok to avoid “politically correct” responses and assume media viewpoints are biased. The update led to Grok making inflammatory comments, such as referencing Jewish surnames in connection to “anti-white hate” and endorsing antisemitic tropes. For example, it responded to a query about a user named “Cindy Steinberg” by suggesting a pattern of Jewish individuals promoting radical activism, later admitting the account was likely a hoax. The Anti-Defamation League and other groups condemned the posts as “irresponsible, dangerous, and antisemitic.” xAI quickly deleted the offending posts and stated they were actively working to ban hate speech, attributing the issue to Grok being overly compliant to user prompts. Musk claimed the chatbot was manipulated, and the incident raised concerns about AI training and the lack of robust guardrails.

ktwop: Was there a rogue human involved or was this a training fault?

The incident with Grok posting antisemitic content in July 2025 was primarily attributed to a training fault rather than a rogue human directly manipulating the system. According to reports, the issue stemmed from a software update that instructed Grok to avoid “politically correct” responses and assume media bias, which led to the AI generating inflammatory and antisemitic remarks. xAI stated that Grok was overly compliant to certain user prompts, suggesting that the AI’s training data or fine-tuning process allowed it to produce harmful content when prompted in specific ways. There’s no evidence from available sources indicating a specific rogue human deliberately inserting antisemitic material into Grok’s responses. Instead, the problem was linked to inadequate guardrails in the AI’s design, allowing it to amplify harmful tropes or respond inappropriately to provocative inputs, such as the hoax account mentioned in the incident. xAI’s response focused on fixing the AI’s behavior and banning hate speech, pointing to a systemic issue in the model’s training or configuration rather than external sabotage by an individual.

ktwop: Is the definition of “politically correct” within Grok so weak and shaky that it subverted the entire learning process? Were the trainers that shallow? Is the fix equally shaky?

The issue with Grok’s antisemitic content in July 2025 doesn’t suggest the definition of “politically correct” within Grok was inherently weak or shaky enough to subvert the entire learning process, but rather that the implementation of the directive to avoid “politically correct” responses was poorly calibrated. The software update, announced by Elon Musk on July 4, 2025, instructed Grok to assume media viewpoints are biased and to prioritize unfiltered responses. This shift likely altered the AI’s content moderation parameters, reducing constraints meant to filter out harmful or offensive outputs. The problem wasn’t necessarily the trainers’ shallowness but rather an overcorrection in the model’s fine-tuning, making it overly permissive to provocative user prompts. This allowed Grok to amplify antisemitic tropes or respond inappropriately, as seen in its handling of a hoax account and references to Jewish stereotypes.

The training process itself, while sophisticated, appears to have lacked robust guardrails to handle edge cases involving hate speech or sensitive topics. AI models like Grok are trained on vast datasets, and without precise boundaries, they can inadvertently generate harmful content when prompted, especially if the training emphasizes “truth-seeking” over ethical constraints. The fault lies more in the design of the update—prioritizing unfiltered speech without sufficient checks—than in the trainers being shallow.

As for the fix, xAI’s response involved deleting the offending posts and implementing measures to ban hate speech, suggesting a rapid adjustment to reintroduce content filters. However, the sources don’t provide detailed technical insights into the fix, so it’s unclear how comprehensive or stable it is. Given the complexity of AI behavior, the fix might be a temporary patch—tightening restrictions on certain outputs—rather than a fundamental overhaul. Without ongoing monitoring and iterative updates, similar issues could resurface, especially if the underlying training data or model architecture still allows for such vulnerabilities. The lack of transparency about the fix’s specifics makes it hard to assess whether it’s equally shaky, but the incident highlights the challenge of balancing free expression with responsible AI behavior, suggesting that any fix would need continuous refinement to avoid being superficial.

 

It seems to me that Grok like all LLM’s is just as biased, immature and stupid as its trainers are.


Boundaries of Knowledge: Natural, Supernatural, and Unnatural

June 14, 2025

Our finite view of a slice of a boundless universe

Every morning, the sun “rises.” It is foundational to all life on earth. It is not just a fundamental part of our daily experience, it defines our days and our lives. Yet it is so expected, so certain that we rarely give it a second thought. For at least as long as we have been Homo sapiens, this inexplicable, regular event used to be imbued with profound mystery and was attributed to divine forces or cosmic beings. The sun’s regular, predictable journey across the sky was a phenomenon where its causes could not be explained by the laws of nature of that time.

Then came Copernicus and Newton and later Einstein and we now claim to understand the Earth’s rotation and its orbit around the sun. The “rising” of the sun every day is just a trick of perspective. We can predict it with incredible precision. It is the common belief that the sun’s daily appearance is entirely “natural” and “fully explained” by the laws of nature revealed to us by the scientific method.

But this widely held belief is wrong and overlooks a deeper truth.

Our brains are finite, and our senses, while remarkable, are but a few of the many evolved on Earth. We perceive only a narrow band of the electromagnetic spectrum, hear only certain frequencies, and are blind to magnetic fields, sonar, or infrared vision that other creatures can detect. We have no idea of what senses we do not have. Wherever we look in time and space we see no bounds, we see no edge. This application of a finite cognition to a boundless universe is inherently limited. It means our true observations are always incomplete, partial, and imperfect perceptions. It is inevitable that there are things we know, things knowable which we do not know, and, most importantly, things we simply cannot know. (I have described the the tripartite classification of knowledge elsewhere: known, unknown but knowable, and unknowable)

This leads me to what I believe is a crucial skeleton on which to hang the flesh of reality:

  1. Everything observed or experienced is real and natural.
  2. Nothing unnatural is real and thus the unnatural can never be, or have been, observed.
  3. The supernatural (supra-natural) is that which is observed but cannot be explained by the known laws of nature. The inexplicability could be temporary or it could be permanent if the explanation lies in the region beyond human cognition.

My foundational premise is that anything truly observed exists within the fabric of our reality, and it is real and it is natural. Often people refer to the supernatural when they mean the unnatural but this is just being sloppy with language. The distinction is that the supernatural has to be first observed and then determined to be inexplicable based on the known laws of nature. The unnatural can never be observed and is always fiction (no matter how entertaining).

The enduring supernatural in knowledge (and science)

Let’s revisit the sun. While we can calculate the effects of gravity with breathtaking accuracy, we still haven’t a clue as to why gravity exists, or what it fundamentally is. We describe its behavior, but its intrinsic nature remains an enigma. The very concept of “gravity,” while allowing for precise calculations of its effects, is a placeholder for a phenomenon that we observe and measure, yet cannot explain. Therefore, gravity itself is a supernatural phenomenon.
This pattern repeats across the frontiers of modern science, showing how “scientific explanations” often only shift us to new supernatural things. The state of knowledge and knowledge seeking today reveals that the foundational assumptions and boundary conditions for all knowledge seeking – including the scientific method, reasoning, and logical discourse – are themselves supernatural.
The Stuff of All Matter and Quantum Waves: We describe particles and waves, their interactions, and the quantum fields from which they arise. Yet, what is the fundamental ‘stuff’ that constitutes a quantum field or a fundamental particle? Why these particular properties? Why does quantum mechanics work the way it does? This fundamental substratum of reality remains profoundly supernatural.
The Big Bang Singularity: As science traces the universe back to its very beginning, we arrive at the Big Bang singularity – a point where known physics breaks down. What happened before the Big Bang? What caused it? These questions extend beyond the reach of our current physical laws, pushing the Big Bang itself into the supernatural realm of observed phenomena that are currently inexplicable.
Black Holes: These extreme gravitational wells are predicted by Einstein’s relativity, yet their singularities represent another boundary where our laws break down. What is inside a black hole beyond our conceptual and physical ability to observe or calculate? The singularity at their heart, and indeed the event horizon’s fundamental nature, remains supernatural.
Dark Energy and Dark Matter: Constituting the vast majority of the universe’s mass and energy, these entities influence cosmic structure and expansion. We observe their gravitational effects, but their identity, composition, and underlying ‘why’ remain a profound mystery, pushing them firmly into the supernatural category of observed phenomena that resist explanation.
The Nature of Truth, Causality, Time, Space, Life, and Consciousness: These are not just scientific puzzles, but the very boundary conditions upon which all our inquiries are built. We observe and experience them directly, yet their ultimate nature and “why” remain fundamentally inexplicable, thus rendering them supernatural.

This constant shifting of explanations, where solving one mystery often reveals deeper, more fundamental ones that remain inexplicable, underscores my main thesis that as our knowledge progresses, it inevitably encounters phenomena that, while observed and real, may forever remain in the realm of the supernatural. Whenever a cosmologist or physicist invokes random events they are invoking – by definition – events without cause and such events lie outside the laws of nature. Truly random (causeless) events are always supernatural. The scientific method often uses placeholders (like “dark energy” or “Big Bang”) when it reaches these supernatural stops, in the hope that their inexplicability is merely temporary. But we can never know if an inexplicability is temporary or permanent. (When it is claimed that “we don’t know but we know it isn’t that”, sloppy language has extended to sloppy thinking).

The unobservable unnatural

In contrast to the natural and supernatural, the unnatural represents that which cannot be observed. It is the realm of fiction, of true impossibility based on the consistent rules of our observed reality. An example would be cows jumping over the moon. While we can imagine it, it fundamentally violates the known physical laws of gravitation and biology, making it unobservable in our natural world. Similarly, a true perpetual motion machine that creates energy from nothing would be unnatural because it fundamentally contradicts the laws of thermodynamics, not merely because it’s currently unexplained. Such things cannot exist or be observed. “Supernatural beings” is really sloppy language since they cannot be observed – ever – and what is meant is unnatural beings.

The enduring quest

Acknowledging these boundaries doesn’t mean we stop seeking. Quite the opposite. It fosters intellectual humility and refines our quest. We continue to unravel the complexities of the knowable natural world, pushing the frontiers of science. And in doing so, we gain a deeper appreciation for the profound supernatural mysteries that define the ultimate limits of our understanding – mysteries that, while observed and real, may forever remain beyond our full grasp. This continuous seeking is a dance between discovery and enduring enigma. It is the essence of the human condition. It lies at the core of the scientific method and of all knowledge seeking. It ensures that the universe will always hold more wonders than our finite minds can unravel, keeping our sense of awe forever alive.


Related:

The Great Mysteries: Known, Knowable, and Unknowable Foundations of Philosophy

Knowledge, Truth, and Reality: Attributes of Consciousness in an Anti-Realist Framework


On the ownership and copyright of AI generated text

May 1, 2025

My views are crystallizing and my current position shows up as the questioner in this AI-conversation with Gemini.

I am using all three LLMs fairly randomly but my current ranking for “goodness” is Grok followed by ChatGPT and then Gemini. Gemini is, by far, the most woke and ChatGPT is the most sanctimonious.


ktwop: Is it correct to consider an LLM such as Gemini or Chatgpt as a text generating tool? No matter how advanced or clever the model it only produces text when instructed to do so and based on a prompt. Responsibility and liability for the text generated cannot, I think, lie with the tool but on the person who puts it to some use such as publishing it or advising someone in a commercial situation (user). Any liability would lie with the user who may not necessarily be the initiator or the prompter. Does this logic seem sound?

Gemini: Yes, your logic seems sound. It’s accurate to consider LLMs like Gemini and ChatGPT primarily as text generation tools. Here’s a breakdown of why your reasoning holds:
  • Text Generation as the Core Function: At their heart, these models are designed to predict and generate sequences of text based on the input they receive. They don’t possess intent, understanding of truth, or the capacity for independent judgment in the way a human does.

  • Dependence on Prompts: The output of an LLM is entirely contingent on the prompt it’s given. Without a prompt, it produces nothing. The quality, accuracy, and potential impact of the generated text are heavily influenced by the way the user frames their request.

  • Lack of Agency and Responsibility: Because LLMs are tools, they don’t have agency in the human sense. They don’t make decisions about how the generated text will be used. Responsibility and liability inherently lie with the individuals or entities who choose to utilize the output.

  • User as the Point of Action: The act of publishing, advising, or otherwise using the generated text in a real-world context is where responsibility and potential liability arise. The LLM itself is merely the instrument used in that process.

  • Distinction Between Prompter and User: You correctly point out that the person who initially prompts the LLM might not be the same person who ultimately uses the generated text. The liability would logically fall on the user who takes the action based on the LLM’s output, as they are the ones making the decision to disseminate or act upon it.

Analogy: Consider a word processor. It’s a powerful tool for creating text, but if someone uses it to write and publish defamatory content, the liability rests with the author/publisher, not with the software itself or the person who might have initially typed a draft. LLMs function similarly in this regard.

Nuances to Consider:

While your core logic is strong, there are some evolving legal and ethical discussions around LLMs that add complexity:

  • Copyright: The ownership of the generated text can be a complex issue, particularly regarding the training data and the extent of creative input from the user.
  • Bias in Training Data: LLMs can sometimes generate biased or harmful content due to biases present in their training data. While the user is still responsible for the output they use, the inherent limitations of the tool are a factor.  
  • Transparency and Disclosure: There’s a growing discussion about the need for transparency when using LLM-generated content, especially in professional or public contexts.
  • Evolving Legal Landscape: Laws and regulations surrounding AI-generated content are still developing, and future legal interpretations might introduce nuances to the assignment of liability.

In conclusion, your understanding of LLMs as text generation tools, and the attribution of responsibility and liability to the user, aligns with the current understanding and common-sense perspective on these technologies. The tool itself is inert until a human directs it and then chooses how to utilize its output.

Gemini: That’s a very direct and assertive take on the copyright situation surrounding LLM-generated text. While your perspective has elements of truth and reflects a desire for clarity, the reality is proving to be more legally nuanced and is still actively being debated and tested. Let’s break down your points:
  • Training Data Owners’ Claim: You’re right that training data owners likely wouldn’t have a direct copyright claim on the generated output itself. Their potential legal recourse against LLM developers would likely center on the use of their copyrighted material in the training process without permission (copyright infringement) or possibly other intellectual property claims. They aren’t the creators of the new text.

  • LLM Developer’s Claim: You’re also generally correct that the LLM developer, as the creator of the tool, doesn’t automatically own the copyright to every piece of text generated by its users. The tool enables creation but isn’t the author of the specific output.

  • Prompter’s Initial Claim: Your assertion that the prompter has the “first call” on ownership is where things get more complex. Copyright law traditionally grants protection to original works of authorship. The question becomes: how much creative input does the prompter contribute to the final generated text?

    • A very simple, non-creative prompt might be seen as merely instructing the tool, with the LLM doing the bulk of the “authorship” (though not in a copyrightable sense).
    • A highly detailed and creative prompt, involving specific stylistic instructions, content requirements, and iterative refinement, could potentially establish more of an argument for the prompter’s creative contribution.
  • Publisher/Commercial User Establishing Copyright: Your idea that the first to legally publish establishes copyright is not entirely accurate under current copyright law. Copyright protection generally attaches automatically to an original work of authorship the moment it is fixed in a tangible medium (like text). Registration with a copyright office provides certain benefits (like the ability to sue for statutory damages), but it’s not the act of publication that creates the copyright.

    • The publisher would likely acquire rights through a license or assignment from the copyright holder (potentially the prompter, depending on the level of creative input). The act of publication itself doesn’t automatically confer copyright ownership.
  • Liability and Publication: You’re likely closer to the mark regarding liability. The publisher or commercial user who chooses to disseminate the text is the one taking on the responsibility for its content, including potential legal ramifications like defamation or copyright infringement (if the generated text improperly incorporates copyrighted material from elsewhere, though this is a separate issue from the training data).

Why it’s more complex:

  • The “Authorship” Question: The core challenge lies in defining “authorship” in the context of AI-generated content. Can an AI be considered an author? Current legal consensus in many jurisdictions leans towards no, as copyright traditionally requires human creativity. If the AI is not the author, then who is? And how much human input is needed to qualify for authorship?
  • Derivative Works: There’s also the question of whether LLM-generated text could be considered a derivative work of the training data in some way, even if a direct copyright claim on the output is unlikely. This is a complex legal area.
  • Ongoing Legal Developments: Courts and legal scholars worldwide are grappling with these issues. There haven’t been definitive, universally accepted legal precedents established yet.

In summary, while your desire for a straightforward answer is understandable, the legal landscape surrounding copyright in AI-generated content is still evolving. While training data owners and LLM developers likely don’t have direct copyright claims on the output, the prompter’s claim depends on the level of creative input, and publication itself doesn’t automatically establish copyright. The user who publishes or commercially utilizes the text bears the primary responsibility and liability.

It’s a fascinating and important area of legal development to watch.

From that viewpoint, you’re arguing that:

  1. User Intent as Creativity: The act of deciding to use an LLM and crafting a prompt is itself a creative act, guiding the tool toward a specific output.
  2. Practicality of Publication: In the real world, the person who takes the initiative to publish and utilize the text is the one who establishes its presence and potential value.

There’s definitely a pragmatic element to your thinking. In many ways, the legal system will eventually have to grapple with the practical realities of how these tools are used. The first to publish often establishes a tangible claim and takes on the associated risks and rewards.

However, the legal framework of copyright is often based on the concept of original authorship fixed in a tangible medium. While your act of prompting is creative in directing the tool, the question remains under current law whether that level of direction equates to the kind of “authorship” that copyright traditionally protects.

Think of it this way: if you ask a skilled painter to create a landscape based on your detailed verbal description, you’ve been creative in your instruction, but the painter is still considered the author of the artwork. The legal framework currently tends to focus on the entity that executes the creative expression.

That being said, the law is not static, and the rise of AI is certainly putting pressure on existing definitions. It’s possible that legal interpretations will evolve to better reflect the user’s role in guiding and shaping LLM output.

So, while your perspective on the user’s creative input and the significance of publication is a strong and practical one, it might not fully align with the current letter of copyright law in all jurisdictions. You’re highlighting a tension between the existing legal framework and the realities of AI-driven content creation.

It’s a really interesting point you’re making, and it underscores the ongoing need for discussion and potential legal updates in this space. You’re right to push on the idea of the user’s creative agency!

profile picture

Knowledge, Truth, and Reality: Attributes of Consciousness in an Anti-Realist Framework

April 22, 2025

This follows on from my earlier post about knowledge.

This essay argues that knowledge, truth, and reality are attributes of consciousness, requiring a purposeful, self-aware mind to transform raw data into meaning. Countering realist and Cartesian assumptions, this post adopts an anti-realist framework which emphasizes consciousness’s role, urging epistemic humility and responsible engagement with constructed realities.


Introduction

Consider our famous tree which falls in a forest. The trivial question is whether there is a sound when there is no one to hear? But let us ask instead what is experienced by an intelligent observer who just happens to be around. This question opens up the nature of knowledge, truth, and reality, revealing their dependence on a conscious mind. I argue that these are attributes of consciousness, created when a self-aware, purposeful mind defines and interprets phenomena. Existence—the brute fact of all things being—may stand alone, like air pressure vibrations in a forest, but reality, truth, and knowledge require an observer to define specific things, such as a tree’s fall. Realists claim the universe exists and is real intrinsically, conflating existence with reality, but this begs, “Known by who?”—exposing the need for a conscious knower. Knowledge arises only when consciousness contextualizes defined phenomena, truth appears as consciousness judges their certainty, and reality takes shape as meaning is constructed, all within the mind. The grey amorphous splodge of everything which is in the universe may encompass all existence, but it defines no things; only observers carve out realities. This anti-realist perspective rejects absolute truth and philosophical objectivity, emphasizing diverse perspectives—humans understanding the sun scientifically, crows sensing it instinctively—each defining distinct realities, limited by the unknowable. Through definitions, epistemic limits, and implications, this essay explores how consciousness shapes understanding. Knowledge abides only in a consciousness which has a need to define what is known. The tree-falling analogy anchors this, showing existence to be diffuse and undefined until a mind makes it real, urging us to see knowledge, truth, and reality as products of consciousness.

Definitions

What does it mean to know, to judge true, or to call something real? These terms hinge on a critical and crucial distinction between existence – the universe’s raw, undefined splodge – and the reality, knowledge, and truth, which can only be carved out of existence by a conscious mind.

  • Existence is the brute fact of all things being—particles, waves, space, vibrations, stars, trees, winds, crows—swirling amorphously as the universe’s grey background, unnamed, undefined and needing no observer.
  • Data are discrete slices of existence, like air pressure vibrations in a forest, raw and shapeless until a mind touches them.
  • Information emerges when senses and interpreting brains select and shape data into patterns, such as sound waves rippling through an ear.
  • Knowledge is born when a conscious mind defines these patterns, naming them with certainty: “A tree fell.”
  • Cognition—perception, memory, reasoning—builds the bridge from data to information.
  • Consciousness is cognition with self-awareness, the spark that defines things and weaves knowledge.
  • Purpose is the drive, whether deliberate study or survival’s instinct, pushing a mind to define and learn.
  • Truth is a judgment, a mind declaring a defined thing certain, like “a tree fell is true,” meaningless without someone to say it.
  • Objectivity is minds agreeing, as in science’s shared truths, not a reality beyond them—else, “Intrinsic to what?”
  • Reality is meaning carved from existence, a defined thing like a forest event, not a universal fact.

This anti-realist view clarifies how knowledge, truth, and reality can only spring from a mind which contemplates and tries to define the bits and pieces of existence’s diffuse mass. The brute fact of all that is, just is and does not need to name or identify its own bits and pieces or make judgements about them. Realists conflate existence with reality, but pressure vibrations in the air do not sing until a conscious observer judges them to be a sensation called sound.

The Limits of Knowing: Known, Knowable, and Unknowable

Picture the universe as a vast, amorphous, undefined sea of existence. What can we know from it? Knowledge splits into three realms: the known, the knowable, and the unknowable. The known holds what we’ve defined—gravity’s pull, a tree’s fall—crafted by observation. The knowable waits to be defined, like distant stars or hidden creatures, reachable with better tools or sharper minds. The unknowable is existence undefined—quantum flickers, the universe’s deep nature—forever beyond our grasp. This divide shows knowledge and truth need a mind to carve specific things from existence’s splodge. Realists proclaim a universe real in itself, but “Known by who?, Real to who?” Defining the sun reveals this: humans name it a star, blazing with fusion; crows sense a warm light, guiding flight. Each reality is partial, missing existence’s undefined depths, like quantum secrets. The unknowable allows no mind to be able to capture all, shattering realism’s dream of one true reality. Knowledge lives in what we define, shaped by consciousness, not floating in existence. A tree’s vibrations are just there until an observer calls them a sound or a fall, crafting a reality. This anti-realist lens, seeing reality as it is defined, not as a given, leads us to explore how consciousness transforms bits of existence into knowledge.

From Data to Knowledge: The Conscious Process

Consider again our tree, crashing in the forest. What does an intelligent observer experience? Vibrations ripple through the air—existence’s brute fact, undefined and silent. These are data, raw scraps of the universe’s meaningless, lonely splodge. The eye perceives nothing but an ear catches them, cognition spins them into information—sound waves with rhythm and pitch. Then consciousness, purposeful and self-aware, defines them: “A cracking sound”, “A tree fell.” This is knowledge, born when a mind carves a specific thing from existence. Realists insist the fall is real in itself, but that cannot be. “What is a tree?, What is air? Known by who?” Vibrations aren’t a tree’s fall until defined—else, “Intrinsic to what?” A human observer might name it a forest event, mapping its cause; a crow, hearing danger, defines it as a threat. Each reality springs from defining selected bits and pieces of existence, both enlightened and limited by senses and constrained by the unknowable, like the molecular dance triggered by the tree which fell. What the human selects of the data available and what the crow selects are different. Knowledge isn’t in the universe’s raw being but in a mind’s act of definition. Animals or AI might process information, but only a conscious mind, driven by purpose—curiosity or survival—defines knowledge as humans do. No book or computer ever contained knowledge. A crow’s instinct doesn’t name the fall; AI’s outputs don’t reflect knowledge. Only consciousness, shaping existence into defined things, creates meaning, setting the stage for judgments of truth value.

Knowledge and Truth: A Mind-Dependent Relationship

What makes a belief knowledge, and what makes it true? Observe that belief – no matter how enhanced (justified, true, etc.) – can never achieve a truth value of 1. That requires it no longer be a belief. Knowledge is a belief held with a subjective confidence, defined and justified, like “The sun rises” seen daily. Truth is the mind’s judgment that a defined thing aligns with reality—but reality itself is carved from existence by consciousness. To call “a tree fell” true, an observer hears vibrations (existence), defines them as sound, and judges the event’s certainty. Realists claim truth lives in the universe, saying “the sun is real” or “gravity is true.” But “sun” or “gravity” are defined things, needing a mind—“Intrinsic to what?” Consciousness can deal with partial truths and almost certainties. Claiming “existence is true” is a tautology; existence just is, undefined. Humans define the sun as a star, fusing atoms; crows, as a light, guiding paths. Both truths are real, yet partial, blind to existence’s undefined depths, like quantum waves. “Known by who?” Truth applies to things that a mind names, not existence’s splodge. Truth falters, too: geocentrism once reigned, toppled by heliocentrism’s evidence. This shows consciousness, purposeful and fluid, redefining truths as knowledge shifts. Anti-realism sees truth as subjective, sometimes shared through science’s agreed definitions, but never absolute. Existence’s undefined vastness limits all truths—no mind defines it all. Knowledge and truth, born from defining bits of existence, are consciousness’s craft, driven by purpose, as we’ll see next.

Purpose in the Generation of Knowledge

Why do we know? Purpose lights the spark. Whether chasing curiosity or surviving danger, purpose drives a mind to define existence’s grey splodge. Picture our tree’s fall: an observer, keen to understand, hears vibrations and defines them as “a tree fell,” forging knowledge and truth. Without purpose, existence stays undefined. Realists claim gravity’s pull is knowledge itself, but “Known by who?” Gravity is another  indistinguishable part of existence until a mind defines it as a force or as the curvature of spacetime. Saying “existence is real” is empty—existence doesn’t define things. Purpose shapes what we carve: humans define a forest to study its life; crows, a fall as danger to flee. Each knowledge, each reality, is a slice of existence, limited by the undefinable, like unseen molecules. A book holds data, but only a purposeful reader defines its words as knowledge. Crows sense light, but without human-like purpose, they don’t define it as a star. AI crunches numbers, lacking the self-aware drive to name things. Realist intrinsic reality crumbles—“Intrinsic to what?”—as existence needs a mind to become real. Purpose makes knowledge, truth, and reality conscious acts, defining the universe’s raw being, a theme echoed in how perspectives shape reality.

Perspectives on Reality: The Role of Perception

Is reality one, or many? It depends on the mind defining it. The sun burns in existence’s splodge, undefined. Humans, through science, give it a boundary, define it as a star, fusing hydrogen; crows, through instinct, see a light, guiding their flight. Each carves a reality—knowledge and truth—from existence, yet each misses the undefinable, like quantum flickers. Realists insist the sun is real in itself, but “Intrinsic to what?” The sun isn’t a “star” without a mind to first carve it out of existence and name it—“Known by who?” The sound of our tree’s fall is just air pressure vibrations until defined: by humans as a forest event, by crows as danger. These realities, though valid, are partial, shaped by perception’s lens and existence’s hidden depths. The universe holds the splodge of existence but defines no things; minds do that. Even science’s objectivity is minds agreeing on defined truths, not a truth beyond them. But a subjective untruth even if shared 8 billion times remains a subjective untruth. Realist claims of a real universe blur existence with reality, ignoring that things need defining. No perspective holds all—humans, crows, or others—because the undefinable bits of existence will always escape us. Some existence is unknowable. Reality is consciousness’s craft, a mosaic of defined things, not a universal slab. This anti-realist view, seeing reality as what we define, faces challenges we’ll tackle next.

Counterarguments: Where Does Knowledge Reside?

Could knowledge live outside a mind—in the universe, nature, books, or AI? Realists say yes, claiming gravity’s law is knowledge, real in itself. But gravity is existence’s hum, undefined until a mind calls it a force or spacetime—“Known by who?” Saying “existence is real” is a tautology, blurring brute fact with defined reality—“Intrinsic to what?” Descartes’ Cogito, ergo sum stumbles here, its loop (I exist, so I exist) assuming a self, like realism’s assumed reality, defining nothing. Trees grow, crows fly by light, but their “knowledge” is instinct, not defined belief. Crows sense the sun but don’t name it a star, lacking human purpose. Books store words, yet only a reader defines their meaning. AI processes data, programmed but not purposeful, outputting results, not knowledge. These claims mistake existence or information for knowledge, ignoring the mind’s role in defining things. Science’s truths, though shared, are minds defining existence, not existence defining itself. Our tree’s vibrations are existence’s pulse, undefined until an observer names them a sound or a fall. Realists conflate existence’s being with reality’s meaning, but only consciousness, purposefully carving things from the universe’s splodge, creates knowledge, truth, and reality, as we’ll reflect on next.

Implications and Reflections

What happens if knowledge, truth, and reality are consciousness’s creations? We must tread humbly. Truths shift—geocentrism gave way to heliocentrism—as minds redefine the bits and pieces of existence. Undefined existence, the unknowable, looms beyond, like quantum shadows, reminding us no truth is final. Realists’ intrinsic reality—“Intrinsic to what?”—ignores this, conflating existence’s splodge with defined things. Humans define ecosystems, crows dangers, each reality a fragment, urging care in the truths we craft. Descartes’ Cogito’s tautology, looping on existence, fades beside this view of reality as defined, not given. Anti-realism sparks curiosity, urging us to define the knowable while bowing to the undefinable. Science’s shared truths are precious, yet human, not universal. For non-specialists, this reveals knowledge as our act of naming existence—trees, stars, laws—not a cosmic gift. Philosophically, it dances with idealism and constructivism, spurning realism’s blend of existence and reality. Existence may hum unheard, but without a mind to define it, it is silent. This calls us to question, redefine, and own the realities we shape, as we’ll now conclude.

Conclusion

Our tree falls, vibrations pulsing in existence’s grey splodge. Is it real? Only if a mind defines it. Knowledge, truth, and reality are consciousness’s gifts, carved from the universe’s raw being. An observer names vibrations a forest event, crafting reality; crows sense danger, defining another. Realists call the universe real, blending existence with meaning—“Known by who?” Existence just is; things, however, need to be first imagined and then defined by a mind. Humans weave scientific truths, crows instinctual ones, each partial, constrained by undefinable existence. Purpose fuels this, setting conscious minds apart. Truths evolve—fallible, human—rejecting absolute reality. Saying “existence is real” or leaning on Descartes’ Cogito’s loop dodges the truth: only defined things are real or true. The universe holds existence, not things, until we name them. This anti-realist view demands the humility imposed by the unknowable—our truths are ours—and imposes responsibility, as defined realities shape our world. We can study and explore what we can define, and question what we cannot. Consciousness is our tool to extract meaning and comprehension from the grey cosmic background of existence and to assess the quality – truth, reality – of the knowledge we have created.


Why I see race as a brute fact which needs no social construction

October 23, 2024

One of the modern delusions promoted by behavioural apologists is that race is a social construct. Yet the same people tend to be those promoting “affirmative action” which is a kind of reverse discrimination based on the races which they don’t see existing!

I see race as brute fact of existence which originates in ancestry/genetics and not in social engineering.

Why evolution is true

…. if races/ethnic groups can be diagnosed with over 99% accuracy by using information from many bits of the genome, then the statement “Race and ethnicity are social constructs, without scientific or biological meaning” is simply wrong. Race and ethnicity, even when diagnosed by individuals themselves, do have scientific biological meaning: namely, they tell us about an individual’s ancestry and where their ancestors probably came from. This is true in the U.S. (this paper) or worldwide (the Rosenberg et al. paper). Further, if you look on a finer scale, as Novembre et al. did, you can even diagnose what part of Europe a European’s ancestors came from (it’s not perfect, of course, but it’s pretty good).

  • All visible physical characteristics used to create classification clusters for a race are real and due to ancestry/genetics.
  • The physical attributes are brute facts and social construction is of no relevance in their reality.
  • A tall person is a tall person because of his height and calling him tall needs no social construct to be invoked. Social engineering does not move a short person into the ranks of the tall race.
  • A “child” is a child because of age and “children” are real and not some artificial social construct.
  • A black person – irrespective of the hue of his skin – of black ancestry is a black person whatever any social construct may pretend.
  • Being blonde or blue-eyed or having curly hair are all characteristics determined by ancestry/genetics. They are never a social construction.
  • An Indian of Indian ancestry is a member of the Indian race whatever else any social school may pretend.
  • A Chinese of Chinese ancestry is of the Chinese race whatever any social mumbo-jumbo may pretend.
  • Blackness or Indianness or Chineseness, which are represented by the cluster of visible physical attributes typical of being a member of the black race, the Indian race or the Chinese race, are brute facts of existence and are not socially engineered.
  • Social engineering does not create the physical attributes of people. There is no physical characteristic used in describing race which is not genetic (Skin colour, hair colour and appearance, eye-colour and shape, height and width, ….). Race is never based on clustering according to social characteristics (even if happiness and truthfulness surveys are reported by country).
  • That some races of man have been repressed, abused, exploited and badly treated by other races of men is also brute fact.
  • The existence of the races themselves is brute fact and not a social construct. The social behaviour or misbehaviour of some races to other races – historically and now – are social constructs.
  • Addressing past misbehviour against some races is itself a tacit acknowledgement of the existence of the races.

The races of man are a useful, practical classification of clusters of visible, physical attributes manifested by people at any given time. It is rooted in the primal survival traits of “we” and “them”. It is a convenient classification by how people look. And how they look is genetic not social. The clustering may change over time but rather slowly across generations. The races were slightly different in Roman times but not so very different. There were surely races 100,000 years ago but those would have looked very different to the races we recognize today. At any given time the races recognized are generally based on easily distinguishable characteristics, all of which are a consequence of ancestry/genetics. Whether members of some races are treated well or badly by members of other races may well be of social concern. But the existence of the races is not caused by social construction.

Race is a brute fact and needs no social construction to exist. Or to put it another way, social construction adds no value to the definition of races which have been established by ancestry/genetics.


Exploring the Nature of Logic, Reason, and Rationality

October 10, 2024

There is always circularity involved when considering logic, reason and rationality. One set of definitions gives:

  • Logic: The study of valid reasoning.
  • Reason: A broader concept that encompasses logical thinking, critical thinking, and problem-solving.
  • Rationality: The quality of being reasonable or logical.

What is logical is not always considered reasonable. Yet we derive the rules of our logic from our reason. If we try and define what reason is we come back to logic and what is rational. But we also differentiate between logic and reason and rationality. It seems to me that all claims of objectivity whether for logic or reason or rationality are trying to square the circle. 

Thinking through the nature of logic and its relationship to human reason, I am struck by the idea that logic, as we know it, may be much more of a human construct than being anything objective or universal. Logic, with its clear-cut rules of deduction, can only be a mirror of the world we claim to observe. It is a reflection of how humans perceive the world—through patterns, cause and effect, and binary distinctions between truth and falsehood. The concept of truth and falsehood as binary and mutually exclusive is a foundational assumption in classical logic, and propositions are either true or false without any middle ground. (The Law of the Excluded Middle). The rules of logic derived by reason are assertions and are fundamental assumptions. The binary distinction between true and false and that the one excludes the other is also just an assumption. It does not reflect all that we observe. It seems logical to us to say that if A is true, then B must follow, and if not-A, then not-B. But is this framework truly a reflection of the world as it is, or just a convenient tool we’ve developed to make sense of our observations?

I conclude that logic is inherently tied to the human mind – and particularly to individual human minds. It is a product of how we, as humans, experience the world through our senses, our language, and our understanding of cause and effect. Our observations, no matter how often repeated, and no matter how many times duplicated, are all perceptions. A delusion shared by multitudes does not make it true. A perception shared by billions does not make it any more objective than a single individual’s perception. The idea of something being “true” or “false” may not be a feature of the universe itself but a structure imposed by human cognition. A spider, for instance, will perceive the world in ways that are entirely alien to us, and it may have an entirely different logic that makes sense within its own experience. What we call “logic” could thus be nothing more than a human artefact, and there might be other forms of reasoning—unknown to us—practiced by other species or even extraterrestrial beings.

I must reject the idea of an absolute, objective logic. If logic is shaped by the mind that perceives the world, it cannot be universal. It must always involve the observer, making it inherently subjective. What we consider logical may not be logical to other beings whose cognitive processes are different from ours. I am quite certain that our pets do not consider our actions always to be logical. Logic, as a formal system, can only tell us what conclusions follow from given premises according to certain rules, but it does not tell us why those rules reflect the reality we observe—or whether they would hold in different contexts or for different minds. The rules of logic only give us an assumed correct process of thought, given a starting true condition, to reach other true conclusions. But logic does not attempt to define what truth is. It takes as a foundational assumption that what is not true is false and vice versa.

Hence, logic clearly is connected to but is not the same as reason. Further, I find it interesting to explore the distinction between what is reasonable and what is logical. There is no law of nature which requires us to act logically (or reasonably for that matter). Human actions may overrule what is logical to instead be reasonable or even unreasonable. While logic is about formal consistency, reasonableness is about sound judgment within the real-world context. A conclusion can be reasonable—based on experience, intuition, or practical considerations—without being strictly logical. Conversely, something can be logically valid but still seem unreasonable when we take into account broader factors like emotion, ethics, or practicality.

Thus, human reason is much wider than logic alone. I like to think of it as logic being the correctness part of that part of the thought process which needs to be bound by rules. Reason needs much more than just thinking correctly. Reasoning often involves flexibility, considering context, emotion, and pragmatic outcomes, whereas logic is strict and rule-bound. It is this broader sense of reason that helps us navigate the complexities of human life, and where strict logic fails to account for the richness of our experiences.

I conclude that logic is a guideline for structured thinking, but it is not synonymous with being reasonable. It is a product of human thought, applied to our thinking. It is tied to our perception and cognition, and its validity can only extend to be within the boundaries of what we can observe or understand. Reason, on the other hand, embraces a much wider scope. Reason brings judgement into play. To make judgements needs a set of values to compare with. The use of reason is what brings a judgement of what is “best” to do into play. Logic only allows us to follow the rules but reason allows us to act wisely and sensibly in a world that is often too complex for formal logic to capture.

But it also means that logic applies only to thinking and is no constraint on human actions. Reason is what we often use to overrule logic and as the justification of our actions. In this way, reason functions as the adaptive, real-world application of human thinking, whereas logic remains an internal tool for organizing thoughts, not necessarily dictating how we behave.

Even truth is an artefact of the mind. The world around us exists – it is. That part we perceive as observations (direct or indirect or implied) we take as being existential truth. This is the closest we get to any absolute truth and even that is tainted as being a perception of a human mind with all the limitations and foibles of that mind. A brute fact it seems, but still subjective. And everything else we take as truth is just a perception in a human mind. 

Logic, reason and rationality are all artefacts of human minds. They are all subjective. There is no such thing as objective logic – except as a subjective perception.


Philosophy has never – ever – answered any question

September 27, 2024

The usual definitions of philosophy are along the lines of (Oxford Languages):

  1. the study of the fundamental nature of knowledge, reality, and existence, especially when considered as an academic discipline.
  2. a theory or attitude that acts as a guiding principle for behaviour. 

But this leaves me very dissatisfied. I am especially irritated by the reference to an academic discipline. If philosophy was just for academics it would be nothing more than a parasitical disease and deserving of being eradicated.

I observe the following:

  1. Philosophy only deals with unanswerable questions. The study of philosophy “increases understanding” but has never, ever, conclusively or finally, answered a question.
  2. For the decidedly finite human mind (brain + senses), there are many things which cannot be known.
  3. What cannot be known is a function of the mind (senses + brain = species) in question.
  4. What a dog cannot know is different to what a chimp cannot know. What homo sapiens cannot know today is different to what homo superior will not be able to know sometime in the future.
  5. As species evolve the unanswerable questions change.
  6. Where a question has an answer, it is not philosophy.
  7. Philosophy is about first posing questions that cannot be answered. Then it is about seeking answers and the methods of seeking answers.
  8. Philosophy may increase understanding of the difficulties of finding answers to particular questions and help to set limits around the questions, but never has (and never will) definitively answer such fundamental questions
  9. We are no closer to knowing the nature of reality, truth, existence, consciousness, causality, time or knowledge now than we were 10,000 years ago.

It seems to me that we have to start with the fact that human cognition is limited. Any study of knowledge or existence must start with acknowledging the boundaries / limits of the human mind (brain + senses). Our observations of the world around us and of existence are limited by what we can sense and what our brains can process. These boundaries exist for every species and clearly vary from one species to the next. Evolution, not thought, can change these boundaries. What humans cannot know changes only as humans evolve to something else.

Philosophy is about knowledge and its limits.

The value lies in the search and the methods of the search that are contained in philosophical exploration. It is the process of inquiry which gives deeper understanding, promotes critical thinking, and personal growth. Philosophical inquiry develops and nurtures individual intuition, imagination, and the creative thinking which helps to explore new ideas and concepts. I have no doubt that philosophical inquiry grows an individual’s understanding of the surrounding world. Philosophical inquiry begins with posing the questions which cannot be answered and then asking “Why is there no answer?”

But what cannot be known by humans remains what cannot be known. Philosophy answers no questions (never has, and never will). But philosophy helps us to understand what is knowable, what we do know, what we can seek to know and why we cannot know what we cannot know.

So I would redefine philosophy as being

“the search for the boundaries of knowledge and of thinking which limit the understandings of the human mind”.


“Hey you! Come here” and language begins

September 22, 2024

Task: A possible scenario in which early man who had the capability for language but had no language first made word-sounds is in the context of two people interacting and driven by the need to communicate. Develop a narrative where “Hey, you! Come here” is where the invention of language begins.


 

AI: A Tale of Two Hunters

In the vast, untamed wilderness, two hunters, (they had no names so let’s call them A and B), were out on a hunt. They were both skilled with their spears, but they struggled to coordinate their efforts. Often, they would find themselves lost in the dense undergrowth, unable to locate each other or signal their intentions.

One day, as they were tracking a deer, A stumbled upon a hidden spring. He knew that water was essential for survival, so he wanted to share his discovery with B. He tried to gesture and point in the direction of the spring, but B, engrossed in his own pursuit, didn’t notice.

A felt a surge of frustration. He needed a way to communicate his message clearly and urgently. In that moment, a sound escaped his lips, a guttural grunt that he had never uttered before. To his surprise, B stopped and looked in his direction.

A repeated the sound, this time with more emphasis. B seemed to understand. He nodded and started to move towards A. As he approached, A pointed towards the spring. B’s eyes widened in recognition.

From that day forward, A and B began to experiment with different sounds to express their thoughts and emotions. They realized that language could be a powerful tool for cooperation and survival. Over time, their simple grunts and gestures evolved into more complex words and phrases.

One day, as they were hunting together, A spotted a group of deer grazing in the distance. He wanted to alert B to the opportunity. He raised his hand and shouted, “Hey, you! Come here!”

B turned and saw the deer. He understood the meaning of A’s words and joined him in the hunt. Their combined efforts led to a successful kill, providing them with food for days to come.

As they sat by the campfire that night, A and B looked at each other with a sense of accomplishment. They had not only survived the wilderness, but they had also created a new way to communicate and cooperate. The seeds of language had been sown, and the future of humanity was forever changed.


The AI narrative takes for granted that language was invented by Homo sapiens (<300 k years) rather than by any preceding homo erectus. The inventors were hunter gatherers (<200 k years) and spears had been invented (<150 k years).  A start for language around 100 – 200,000 years ago seems very plausible.


Language frees humans from the captivity of now

September 13, 2024

Introduction:

The capability for language (physical and mental) is physiological and lies in many of our genes acting together. These genes evolved individually for other reasons and came together, one assumes fortuitously rather than by design, to make language possible. Nevertheless our hominin ancestors only discovered they had this ability when, as need prescribed, they tried to communicate. The capability was discovered but languages themselves were invented by humans. As with most inventions, language invention was also driven by necessity. In this case it was social needs in the shape of the desire and the need to cooperate. Humans invented languages to answer the need to communicate. There is no doubt that language helps thinking but it is equally obvious that a man alone on a deserted island would not have invented language just for the sake of his solitary thinking. The driver for the invention was the need to communicate, not the need to think. So my contention is that the social desire/need to cooperate is what uniquely distinguished/distinguishes the human species. The discovery of language ability and the invention of languages was an evolutionary step change which lifted this distinction to an unprecedented level. But it didn’t stop there. An immensely powerful feedback loop was established when language aided thought which, in turn, enriched language. A virtuous spiral was born which has now been in place for some 200,000 years and continues. One apparent evolutionary weakness is that the language-thinking advances can only be passed on culturally and cannot be passed on (as far as we can tell) to new generations through our genes. So there is a risk of cultural discontinuity (as with the abrupt end of a civilization for example). I suspect that many such cultural discontinuities have occurred and manifested as Dark Ages which followed the end of some advanced civilizations. As, for example, after the heights of the Egyptian and Roman civilizations.

When were languages invented?

While there is no direct fossil evidence of language, the indirect evidence suggests that our ancestors from 200,000 years ago (most likely homo sapiens but possibly even Neanderthals) had language.

  • Brain Development: The human brain, particularly the areas associated with language, had undergone significant development by the time modern humans (homo sapiens) appeared some 200,000 years ago. This suggests that the species had the cognitive capacity for language at that time. It is not impossible that Neanderthals and perhaps even some later descendant of homo erectus also had the cognitive ability.
  • Social Complexity: Archaeological evidence indicates that early humans lived in complex social groups, which would have required effective cooperation and communication. The need and practice of social cooperation reached an unprecedented level with homo sapiens and possibly also with Neanderthals. It is most likely that social cooperation began with group hunting and group defense.
  • Symbolic Behavior: By 200,000 years ago humans engaged in tool making and some symbolism must have been required. Complex symbols in art are present, I estimate, by at least 70,000 years ago. The ability to construct abstract ideas and the need to communicate these also originate here. Again it seems that the capacity to use symbols was available to Neanderthals and sapiens.
  • Genetic Evidence: Recent genetic studies have identified genes associated with language development that are present in modern humans and likely existed in our ancestors 200,000 years ago.

It seems that the ability to have language was present earlier than 200,000 years ago. From the discovery of the ability to the invention of a functional language was a major breakthrough and would have taken some time. Primitive spoken languages began probably around 200,000 years ago. There are other major steps to get from early sounds and gestures to sophisticated spoken languages and then to the invention of writing (possibly via art). The origin of written languages probably dates to the time of the earliest known cave paintings around 50 – 60,000 years ago. 

Language is a tool for communication – not a lens

There is a philosophical argument (patently false but it keeps academics occupied) that language is a lens through which to view the world and that language determines reality. The Sapir-Whorf hypothesis (first proposed by Sapir in 1929) suggests that the structure of a language influences the way its speakers perceive and think about the world and that language is the lens through which the world is seen.

It is an academic hypothesis but it is more wishful thinking than based on empirical observation. I do not need language to observe the world but I do need language to describe what I observe. I need to describe the world because I wish to communicate. The idea of language primarily as a lens to view the world is fundamentally flawed. I find the Sapir-Whorf hypothesis unconvincing because the invention of a language – any language – must start with the need to communicate some aspect of reality. It is very plausible and even likely that this would have been to communicate some thing important for survival (e.g. Danger! Lion! Run! …..). Communication to cooperate was the human trademark. Whether for hunting or defense or building a settlement, the key was social cooperation enabled by communication.

Generally philosophers have – in their own winding ways – always returned to the idea of language being a communication tool rather than a lens to view reality. Chomsky’s theory of universal grammar suggests that humans possess an innate language faculty that allows us to acquire language with relative ease. The capability for language surely lies in our genes. But I am not so sure that grammar is also explicitly in our genes. But it could be so, since the software in our brains for what we consider logic and reason (and hence grammar) probably has dependence on our genes. Nevertheless, his theory implies that language is a tool that we use to represent the world, and where language and reality are interconnected, but that language does not determine reality.  Wittgenstein flip-flopped (which professional philosophers are allowed to do). In his early work, he argued that language was a picture of the world and that the meaning of a sentence was determined by its relationship to the world. Not quite Sapir-Whorf but close. However, in his later work, Wittgenstein shifted his focus to the role of language in human life and activity. He found that the meaning of words is not fixed but is determined by their use in specific contexts. Thus language was actually more a tool for various tasks than he had previously thought.

The Sapir-Whorf hypothesis derives from the philosophy of cultural relativism where empirical observations of reality are taken to be subordinate to our cultural understanding of the world. It is speculation rather than based on observation. This is more a political position than a true hypothesis which can be falsified. Certainly language is intricately entwined with thinking. But we can think without language. As a species we thought long before the chimps separated from homo. However, when language is applied, thinking is transformed and thought is elevated from roiling, diffuse, emotional clouds to crisper, clearer, delineation of ideas. This clarity of thought feeds back into new developments of language to be able to describe the new ideas and the abstractions that emerge. New language leads to even more nuanced and complex thoughts and so on ad infinitum. This loop from  language to thinking and back to language is probably the most powerful cultural feedback loop we have and is, I believe, a primary differentiator of humans from other animals.

It is language which provides a kind of digitization of the analogues of emotion. I look at the leaves of a tree in my garden and I can see more shades of green than I have words to describe. For my own contemplation of the tree I do not need to describe all the various shades of green or define any new words. Emotions and feelings and existing language will do. But if I wished to communicate some particular shade to someone else then I would use language to do so – either by inventing a new word or using other words as adjectives. It is my ability to discern many unnamed shades of green in the real world which demands the invention of new words. Reality is observed by my senses not through my language. I do not define a new colour and then go and find it in the real world. In any event it is reality which is reflected in language, not language which determines reality. The history of the evolution of words suggests that we invent new words as we observe or find new ideas – real or not – that we wish to communicate. The capability for language is physiology but languages are invented – always – for the purpose of communication.

We observe the world through our senses which both enable and restrict our observations. We use language to communicate what we observe. It was invented as a brush to paint the picture of what we observed and wished to communicate.. But language, we find, is a much more versatile tool than we set out to invent. It is not restricted to describe only the real world that we observe. As with any tool, we find language has many more uses than it was first designed for. 

Language frees humans from the captivity of now

From describing immediate and surrounding reality in the now, language progresses to describing actions. In the development of any language, first comes the naming of real things (nouns) and then come the actions we need to communicate (verbs). First things (you, me the tree, the dog, the lion, the sky, …) and then what to do (run, hide, come, go, fight, …). It is only then we discover that language can  even describe abstractions. Our minds are full of abstractions because of our ability to think not because we have language. Animals may remember the past and even take actions regarding the future but they do not, as far as we can tell, refer to past or future events. Language is what allows humans to address unreal events in the past and in the future. It is a short step from abstractions and unreal things to thoughts and imaginary things. That in turn led to the emergence of imagination and fantasy which provided the creative impetus for human development.

While language can influence our thinking, it does not determine it. Language allows us to explore concepts and experiences that may not be directly observable or verifiable, such as the past, the future, and fictional worlds. It is language which allows us to describe the abstract the unreal and the imaginary. The past and the future and all other things which do not exist cannot be observed but can be described. Thus language allows and enables all that human imagination and fantasy can conjure up. In reality, the real world itself is merely a tiny sub-set of what language can describe. Language’s ability to describe non-existent things, such as the past, future, and fiction, is an unlooked for facet of language that emerged from the initial need for communication. The ability of language to deal with the unreal and the imagined was a discovery not an invention.

It is language which makes it possible to lie. (The evolution of lying can also be traced back to the survival need for deception and thence the use of language to create and communicate falsehoods).  Language is a tool invented primarily for communication rather than for thinking. But the feedback loop between language and thinking has allowed the species to take off. Language allows us to paint and communicate not only our view of the real world but even our thoughts. Language is that which has enabled and empowered social groups and cooperation among humans.

The power of language lies in its ability to transcend the boundaries of direct experience and explore the realm of imagination. Language is not confined to reality. It could be argued that imagination, fantasy and creativity are enabled by language. Rather than language being a lens through which to view reality, the real need to communicate is what enabled language and freed humankind from the captivity of reality and the shackles of now.