Archive for the ‘Mathematics’ Category

The inherent logic of the universe – but not language – was established by the Big Bang

June 16, 2017

You could call this the First Law of Everything.

Logic is embedded in the universe.

At the Big Bang we have no idea what the prevailing laws were. Physicists merely call it a singularity where the known laws of physics did not apply. It was just another Creation Event. But thereafter – after the Big Bang – everything we observe in the universe is logical. We take logic to be inherent in the Universe around us. We discover facets of this embedded logic empirically and intuitively (and intuition is merely the synthesis of empiricism). We do not invent logic – we discover it. If logic was ever created it was created at the time of the Big Bang.

Language, on the other hand, is invented by man to describe and communicate the world around us. We build rules of logic into the framework of our languages such that the use of language is consistent with the embedded logic of the universe. But language is not always equal to the task of describing the universe around us. “I have not the words to describe ….”. And then we imbue old words with new meanings or invent new words, or new grammar. But we never make changes which are not consistent with the logic of the universe.

Reasoning with language is then constrained to lie within the logical framework so constructed and is therefore also always consistent with our empirical observations of the universe around us. Certain given assumptions – as expressed by language – always lead to the same logical inferences – also as expressed by that language. Such inferring, or reasoning, works and – within our observable universe – is a powerful way of extrapolating from the known to the not-yet-known. The logical framework itself ensures that the inferences drawn remain consistent with the logic of the universe.

In the sentence “If A is bigger than B, and if B is bigger than C, then A is bigger than C”, it is the logic framework of the language which constrains if, then and bigger to have meanings which are consistent with what we can observe. The logic framework is not the grammar of the language. Grammar would allow me to say: “If A is bigger than B, and if B is bigger than C, then A is smaller/louder/faster/heavier than C”, but the embedded logic framework of the language is what makes it ridiculous. The validity of the reasoning or of inferring requires that the logic framework of the language not be infringed. “If A is bigger than B, and if B is bigger than C, then A is smaller than C” is grammatically correct but logically invalid (incorrect). However, the statement “If A is bigger than B, and if B is bigger than C, then A is heavier than C” is grammatically correct, logically invalid but not necessarily incorrect.
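The transitivity at work here is easy to make concrete. A minimal sketch in Python (the relation names and the helper function are my own illustration, not part of the original argument): it can derive “A is bigger than C” from the two premises, but it can derive nothing at all about “heavier than”, since the premises say nothing about weight.

    def transitive_closure(pairs):
        """Return all (x, z) pairs implied by transitivity of the relation."""
        closure = set(pairs)
        changed = True
        while changed:
            changed = False
            for (x, y) in list(closure):
                for (y2, z) in list(closure):
                    if y == y2 and (x, z) not in closure:
                        closure.add((x, z))
                        changed = True
        return closure

    bigger_than = {("A", "B"), ("B", "C")}
    print(("A", "C") in transitive_closure(bigger_than))   # True: valid inference
    heavier_than = set()  # the premises say nothing about weight
    print(("A", "C") in transitive_closure(heavier_than))  # False: nothing follows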

Mathematics (including Symbolic Logic) also contains many languages which can describe facets of the universe that other languages cannot. But they all contain a logic framework consistent with the embedded logic of the universe. That 1 + 1 = 2 is a discovery – not an invention. That 2H2 + O2 = 2H2O is also a discovery, not an invention. The rules for mathematical operations in the different branches of mathematics must always remain consistent with the embedded logic of the universe – even if the language invented has still to find actual application. Imaginary numbers and the square root of -1 arose from the solving of cubic equations, centuries before electrical engineers found them indispensable. Set theory, likewise, was only used in physics and computing long after it was “invented”.
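Even the invented notation of the complex numbers shows the pattern: the symbol is ours, but once defined, its arithmetic is forced. A two-line check in Python (my own illustration):

    import cmath

    i = 1j                 # the invented symbol
    print(i * i)           # (-1+0j): the logic it is forced to obey
    print(cmath.sqrt(-1))  # 1j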

Languages (including mathematics) are invented but each must have a logical framework which itself is consistent with the inherent logic of the universe.


 

Number theory was probably more dependent upon live goats than on raindrops

June 14, 2017

It used to be called arithmetic but it sounds so much more modern and scientific when it is called number theory. It is the branch of mathematics which deals with the integers and the relationships between them. Its origins (whether one wants to call it a discovery or an invention) lie with the invention of counting itself. It is the base from which all the various branches of mathematics derive. The origin of counting can be said to be with the naming of the integers, and is intimately tied to the development of language and of writing, and perhaps goes back some 50,000 years (though the oldest known tally sticks date from some 30,000 years ago).

How and why did the naming of the integers come about?  Why were they found necessary (necessity being the cause of the invention)? Integers are whole numbers, indivisible, complete in themselves. Integers don’t recognise a continuum between themselves. There are no partials allowed here. They are separate and discrete and number theory could as well be called quantum counting.

Quite possibly the need came from counting their livestock or their prey. If arithmetic took off in the Fertile Crescent it may well have been the need for trading their live goats among themselves (integral goats for integral numbers of wives or beads or whatever else they traded) which generated the need for counting integers. Counting would have come about to fit their empirical observations. Live goats rather than carcasses, I think, because a carcass can be cut into bits and is not quite so dependent upon integers. Quanta of live goat, however, would not permit fractions. It might have been that they needed integers to count living people (number of children, number of wives …..) where fractions of a person were not politically correct.

The rules of arithmetic – the logic – could only be discovered after the integers had been named and counting could go forth. The commutative, associative and distributive properties of integers inevitably followed. And the rest is history.
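Those three properties can be spot-checked in a few lines of Python – a check over a small sample, not a proof, and the sample range is my arbitrary choice:

    from itertools import product

    ints = range(-3, 4)
    # commutative: a + b = b + a and a * b = b * a
    assert all(a + b == b + a and a * b == b * a
               for a, b in product(ints, repeat=2))
    # associative: (a + b) + c = a + (b + c) and (a * b) * c = a * (b * c)
    assert all((a + b) + c == a + (b + c) and (a * b) * c == a * (b * c)
               for a, b, c in product(ints, repeat=3))
    # distributive: a * (b + c) = a * b + a * c
    assert all(a * (b + c) == a * b + a * c
               for a, b, c in product(ints, repeat=3))
    print("All three laws hold on the sample.")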

But I wonder how mathematics would have developed if the need had been to count raindrops.

After all:

2 goats + 2 goats = 4 goats, and it then follows that

2 short people + 2 short people = 4 short people.

But if instead counting had been inspired by counting raindrops, they would have observed that

2 little raindrops + 2 little raindrops = 1 big raindrop.

They might then have concluded that

2 short people + 2 short people = one tall person

and history would then have been very different.


 

Counting was an invention

March 19, 2017

A new book is just out and it seems to be one I have to get. I am waiting to get hold of an electronic version.

Number concepts are a human invention―a tool, much like the wheel, developed and refined over millennia. Numbers allow us to grasp quantities precisely, but they are not innate. Recent research confirms that most specific quantities are not perceived in the absence of a number system. In fact, without the use of numbers, we cannot precisely grasp quantities greater than three; our minds can only estimate beyond this surprisingly minuscule limit.

Numbers fascinate me and especially how they came to be.

The earliest evidence we have of humans having counting ability is ancient tally sticks made of bone and dating up to 50,000 years ago. An ability to tally at least up to 55 is evident. One of the tally sticks may have been a form of lunar calendar. By this time they apparently had a well-developed concept of time. And concepts of time lead immediately and inevitably to the identification of recurring time periods. By 50,000 years ago our ancestors counted days and months and probably years. Counting numbers of people would have been child’s play. They had clearly developed some sophistication not only in “numbering” by this time but had also progressed from sounds and gestures into speech. They were well into the beginnings of language.

Marks on a tally stick tell us a great deal. The practice must have been developed in response to a need. Vocalisations – words – must have existed to describe the tally marks. These marks were inherently symbolic of something else. They are evidence of the ability to symbolise and to think in abstract terms. Perhaps they represented numbers of days or a count of cattle or of items of food or of number of people in the tribe. But their very existence suggests that the concept of ownership of property – by the individual or by the tribe – was already in place. Quite probably a system of trading with other tribes and protocols for such trade were also in place. At 50,000 years ago our ancestors were clearly on the threshold of using symbols not just on tally sticks or in cave paintings but in a general way and that would have been the start of developing a written language. …….

My time-line then becomes:

  • 8 million YBP           Human Chimpanzee divergence
  • 6 million YBP           Rudimentary counting among Archaic humans (1, 2, 3 many)
  • 2 million YBP           Stone tools
  • 600,000 YBP          Archaic Human – Neanderthal divergence
  • 400,000 YBP          Physiological and genetic capability for speech?
  • 150,000 YBP           Speech and counting develop together
  • 50,000   YBP           Verbal language, counting, trading, calendars in place (tally sticks)
  • 30,000   YBP           Beginnings of written language?

Clearly our counting is dominated by base 10 and by our penchant for 12-based systems. The joints on the fingers of one hand allow us to count to 12, and that, together with the five fingers of the other hand, clearly led to our many 60-based counting systems.

I like 60. Equilaterals. Hexagons. Easy to divide by almost anything. Simple integers for halves, quarters, thirds, fifths, sixths, tenths, 12ths, 15ths and 30ths. 3600. 60Hz. Proportions pleasing to the eye. Recurring patterns. Harmonic. Harmony.
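The claim about simple fractions is quickly verified – a two-line sketch (mine):

    # every divisor of 60 gives a whole-number fraction of it
    divisors = [n for n in range(1, 61) if 60 % n == 0]
    print({n: 60 // n for n in divisors})
    # {1: 60, 2: 30, 3: 20, 4: 15, 5: 12, 6: 10, 10: 6, 12: 5, 15: 4, 20: 3, 30: 2, 60: 1}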

The origins of the use of base 60 are lost in the ancient past. By the time the Sumerians used it more than 4,000 years ago it was already well established and continued through the Babylonians. But the origin lies much earlier. ……

Why then would the base 60 even come into being?


The answer, I think, still lies in one hand. Hunter-gatherers when required to count would prefer to use only one hand and they must – quite early on and quite often – have had the need for counting to numbers greater than five. And of course using the thumb as pointer one gets to 12 by reckoning up the 3 bones on each of the other 4 fingers. 

My great-grandmother used to count this way when checking the numbers of vegetables (onions, bananas, aubergines) bought by her maid at market. Counting up to 12 usually sufficed for this. When I was a little older, I remember my grandmother using both hands to check off bags of rice brought in from the fields – and of course with two hands she could get to 144. The counting of 12s most likely developed in parallel with counting in base 10 (5, 10, 50, 100). The advantageous properties of 12 as a number were fortuitous rather than by intention. But certainly the advantages helped in the persistence of using 12 as a base. And so we still have a dozen (12) and a gross (12×12) and even a great gross (12×12×12) being used today. Possibly different groups of ancient man used one or other of the systems predominantly. But as groups met and mixed and warred or traded with each other the systems coalesced.

If we had 4 bones on each finger we would be using 5 x 16 = 80 rather than 60.
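The arithmetic behind that quip, as a small sketch (the function and its parameter names are mine, describing the finger-counting scheme above):

    def hand_base(bones_per_finger, fingers_pointed_at=4, other_hand_fingers=5):
        """One hand counts bones with the thumb; the other counts completed hands."""
        per_hand = bones_per_finger * fingers_pointed_at   # 3 * 4 = 12
        return per_hand * other_hand_fingers               # 12 * 5 = 60

    print(hand_base(3))  # 60 - the base we inherited
    print(hand_base(4))  # 80 - the base that 4 bones per finger would have given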


 

The sum of all wisdom by the mathematics of philosophy

November 4, 2016

The sum of all wisdom is the summation across the population, of the integrals over time of the second values derivative of knowledge.

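The formula image from the post has not survived here. A possible rendering in LaTeX – assuming that “the second values derivative of knowledge” means the second derivative of knowledge K with respect to values v, summed over a population of N individuals:

  \[ W = \sum_{i=1}^{N} \int \frac{\partial^{2} K_{i}}{\partial v^{2}} \, dt \]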

Not the philosophy of mathematics but the mathematics of philosophy!


 

What is “statistically significant” is not necessarily significant

October 12, 2016

“Statistical significance” is “a mathematical machine for turning baloney into breakthroughs, and flukes into funding” – Robert Matthews.


Tests for statistical significance generate the p-value: the probability of making observations at least as extreme as those actually made if the null hypothesis were true (the null hypothesis being that the observations are not a real effect and fall within the bounds of randomness). So a low p-value indicates only that the observations would be improbable under pure chance, and it is then taken as “statistically significant” that they do, in fact, describe a real effect. Quite arbitrarily it has become the custom to use 0.05 (5%) as the threshold p-value to distinguish between “statistically significant” or not. Why 5% has become the “holy number” which separates acceptance for publication from rejection, or success from failure, is a little irrational. What “statistically significant” actually means is that “the observations may or may not be a real effect but there is a low probability that they are entirely due to chance”.

Even when some observations are considered just “statistically significant” there is a 1-in-20 chance that pure chance alone would have produced them. Moreover it is conveniently forgotten that statistical significance is called for only when we don’t know. In a coin toss there is certainty (100% probability) that the outcome will be a heads or a tails or a “lands on its edge”. To then assign a probability to one of the only 3 outcomes possible can be helpful – but it is a probability constrained within the 100% certainty of the 3 outcomes. If a million people take part in a lottery, then the 1:1,000,000 probability of a particular individual winning has significance because there is 100% certainty that one of them will win. But when conducting clinical tests for a new drug, there is often no certainty anywhere to provide a framework and a boundary within which to apply a probability.
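What a p-value is can be shown with a small simulation (my own sketch, with made-up numbers): the probability, under pure chance, of a result at least as extreme as the one observed – say, 60 or more heads in 100 tosses of a fair coin.

    import random

    observed_heads, tosses, trials = 60, 100, 100_000
    # simulate the null hypothesis (a fair coin) many times over
    extreme = sum(
        sum(random.random() < 0.5 for _ in range(tosses)) >= observed_heads
        for _ in range(trials)
    )
    print(f"simulated one-sided p-value: {extreme / trials:.3f}")  # about 0.028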

A new article in Aeon by David Colquhoun, Professor of pharmacology at University College London and a Fellow of the Royal Society, addresses The Problem with p-values.

In 2005, the epidemiologist John Ioannidis at Stanford caused a storm when he wrote the paper ‘Why Most Published Research Findings Are False’, focusing on results in certain areas of biomedicine. He’s been vindicated by subsequent investigations. For example, a recent article found that repeating 100 different results in experimental psychology confirmed the original conclusions in only 38 per cent of cases. It’s probably at least as bad for brain-imaging studies and cognitive neuroscience. How can this happen?

The problem of how to distinguish a genuine observation from random chance is a very old one. It’s been debated for centuries by philosophers and, more fruitfully, by statisticians. It turns on the distinction between induction and deduction. Science is an exercise in inductive reasoning: we are making observations and trying to infer general rules from them. Induction can never be certain. In contrast, deductive reasoning is easier: you deduce what you would expect to observe if some general rule were true and then compare it with what you actually see. The problem is that, for a scientist, deductive arguments don’t directly answer the question that you want to ask.

What matters to a scientific observer is how often you’ll be wrong if you claim that an effect is real, rather than being merely random. That’s a question of induction, so it’s hard. In the early 20th century, it became the custom to avoid induction, by changing the question into one that used only deductive reasoning. In the 1920s, the statistician Ronald Fisher did this by advocating tests of statistical significance. These are wholly deductive and so sidestep the philosophical problems of induction.

Tests of statistical significance proceed by calculating the probability of making our observations (or the more extreme ones) if there were no real effect. This isn’t an assertion that there is no real effect, but rather a calculation of what would be expected if there were no real effect. The postulate that there is no real effect is called the null hypothesis, and the probability is called the p-value. Clearly the smaller the p-value, the less plausible the null hypothesis, so the more likely it is that there is, in fact, a real effect. All you have to do is to decide how small the p-value must be before you declare that you’ve made a discovery. But that turns out to be very difficult.

The problem is that the p-value gives the right answer to the wrong question. What we really want to know is not the probability of the observations given a hypothesis about the existence of a real effect, but rather the probability that there is a real effect – that the hypothesis is true – given the observations. And that is a problem of induction.

Confusion between these two quite different probabilities lies at the heart of why p-values are so often misinterpreted. It’s called the error of the transposed conditional. Even quite respectable sources will tell you that the p-value is the probability that your observations occurred by chance. And that is plain wrong. …….

……. The problem of induction was solved, in principle, by the Reverend Thomas Bayes in the middle of the 18th century. He showed how to convert the probability of the observations given a hypothesis (the deductive problem) to what we actually want, the probability that the hypothesis is true given some observations (the inductive problem). But how to use his famous theorem in practice has been the subject of heated debate ever since. …….

……. For a start, it’s high time that we abandoned the well-worn term ‘statistically significant’. The cut-off of P < 0.05 that’s almost universal in biomedical sciences is entirely arbitrary – and, as we’ve seen, it’s quite inadequate as evidence for a real effect. Although it’s common to blame Fisher for the magic value of 0.05, in fact Fisher said, in 1926, that P = 0.05 was a ‘low standard of significance’ and that a scientific fact should be regarded as experimentally established only if repeating the experiment ‘rarely fails to give this level of significance’.

The ‘rarely fails’ bit, emphasised by Fisher 90 years ago, has been forgotten. A single experiment that gives P = 0.045 will get a ‘discovery’ published in the most glamorous journals. So it’s not fair to blame Fisher, but nonetheless there’s an uncomfortable amount of truth in what the physicist Robert Matthews at Aston University in Birmingham had to say in 1998: ‘The plain fact is that 70 years ago Ronald Fisher gave scientists a mathematical machine for turning baloney into breakthroughs, and flukes into funding. It is time to pull the plug.’ ………
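Colquhoun’s arithmetic is easy to reproduce. With some assumed but plausible numbers (the 10 per cent prior and 80 per cent power below are illustrative, close to the figures he uses elsewhere), far more than 1 in 20 of the findings that clear p < 0.05 are false:

    # assumed numbers, for illustration only
    tested, prior_real, power, alpha = 10_000, 0.10, 0.80, 0.05

    real = tested * prior_real
    true_positives = real * power                # real effects reaching p < alpha
    false_positives = (tested - real) * alpha    # flukes reaching p < alpha

    fdr = false_positives / (true_positives + false_positives)
    print(f"share of 'significant' findings that are false: {fdr:.0%}")  # 36%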

Related: Demystifying the p-value


 

The origins of base 60

September 14, 2015

I like 60. Equilaterals. Hexagons. Easy to divide by almost anything. Simple integers for halves, quarters, thirds, fifths, sixths, tenths, 12ths, 15ths and 30ths. 3600. 60Hz. Proportions pleasing to the eye. Recurring patterns. Harmonic. Harmony.

The origins of the use of base 60 are lost in the ancient past. By the time the Sumerians used it more than 4,000 years ago it was already well established and continued through the Babylonians. But the origin lies much earlier.

hand of 5

I speculate that counting – in any form more complex than “one, two, many….” – probably goes back around 50,000 years. I have little doubt that the fingers of one hand were the first counting aids that were ever used, and that the base 10 given by two hands came to dominate. Why then would the base 60 even come into being?

The answer, I think, still lies in one hand. Hunter-gatherers when required to count would prefer to use only one hand and they must – quite early on and quite often – have had the need for counting to numbers greater than five. And of course using the thumb as pointer one gets to 12 by reckoning up the 3 bones on each of the other 4 fingers.

a hand of 12 – image sweetscience

My great-grandmother used to count this way when checking the numbers of vegetables (onions, bananas, aubergines) bought by her maid at market. Counting up to 12 usually sufficed for this. When I was a little older, I remember my grandmother using both hands to check off bags of rice brought in from the fields – and of course with two hands she could get to 144. The counting of 12s most likely developed in parallel with counting in base 10 (5, 10, 50, 100). The advantageous properties of 12 as a number were fortuitous rather than by intention. But certainly the advantages helped in the persistence of using 12 as a base. And so we still have a dozen (12) and a gross (12×12) and even a great gross (12×12×12) being used today. Possibly different groups of ancient man used one or other of the systems predominantly. But as groups met and mixed and warred or traded with each other the systems coalesced.

hands for 60

And then 60 becomes inevitable. Your hand of 5, with my hand of 12, gives the 60 which also persists into the present. (There is one theory that 60 developed as 3 x 20, but I think finger counting and the 5 x 12 it leads to is far more compelling). But it is also fairly obvious that the use of 12 must be prevalent first before the 60 can appear. Though the use of 60 seconds and 60 minutes is all-pervasive, it is worth noting that they can only come after each day and each night is divided into 12 hours.

While the use of base 10 and 12 probably came first with the need for counting generally and then for trade purposes (animals, skins, weapons, tools…..), the 12 and the 60 came together to dominate the measuring and reckoning of time. Twelve months to a year with 30 days to a month. Twelve hours to a day or a night and 60 parts to the hour and 60 parts to those minutes. There must have been a connection – in time as well as in the concepts of cycles – between the “invention” of the calendar and the geometrical properties of the circle. The number 12 has great significance in Hinduism, in Judaism, in Christianity and in Islam. The 12 Adityas, the 12 tribes of Israel, the 12 days of Christmas, the 12 Imams are just examples. My theory is that simple sun and moon-based religions gave way to more complex religions only after symbols and writing appeared and gave rise to symbolism.

Trying to construct a time-line is just speculation. But one nice thing about speculation is that the constraints of known facts are very loose and permit any story which fits. So I put the advent of numbers and counting at around 50,000 years ago, first with base 10 and later with base 12. The combination of base 10 with base 12, I put at around 20,000 years ago when agricultural settlements were just beginning. The use of 60 must then coincide with the first structured, astronomical observations after the advent of writing and after the establishment of permanent settlements. It is permanent settlements, I think, which allowed regular observations of cycles, which allowed specialisations and the development of symbols and religion and the privileged priesthood. That probably puts us at about 8–10,000 years ago, as agriculture was also taking off, probably somewhere in the Fertile Crescent.

Wikipedia: The Egyptians since 2000 BC subdivided daytime and nighttime into twelve hours each, hence the seasonal variation of the length of their hours.

The Hellenistic astronomers Hipparchus (c. 150 BC) and Ptolemy (c. AD 150) subdivided the day into sixty parts (the sexagesimal system). They also used a mean hour (1/24 day); simple fractions of an hour (1/4, 2/3, etc.); and time-degrees (1/360 day, equivalent to four modern minutes).

The Babylonians after 300 BC also subdivided the day using the sexagesimal system, and divided each subsequent subdivision by sixty: that is, by 1/60, by 1/60 of that, by 1/60 of that, etc., to at least six places after the sexagesimal point – a precision equivalent to better than 2 microseconds. The Babylonians did not use the hour, but did use a double-hour lasting 120 modern minutes, a time-degree lasting four modern minutes, and a barleycorn lasting 3 1/3 modern seconds (the helek of the modern Hebrew calendar), but did not sexagesimally subdivide these smaller units of time. No sexagesimal unit of the day was ever used as an independent unit of time.
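That last precision claim is easy to check: six sexagesimal places of a day is (1/60)^6 of 86,400 seconds.

    print(86_400 / 60**6 * 1e6, "microseconds")  # about 1.85 - better than 2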

Today the use of 60 still predominates for time, for navigation and geometry. But generally only for units already defined in antiquity. A base of 10 is used for units found to be necessary in more recent times. Subdivision of a second of time or a second of arc always uses the decimal system rather than the duodecimal or the sexagesimal system.

If we had six fingers on each hand the decimal system would never have seen the light of day. A millisecond would then be 1/1728th of a second. It is a good thing we don’t have 7 fingers on each hand, or – even worse – one hand with 6 fingers and one with 7. Arithmetic with a tridecimal system of base 13 does not entice me. But if I was saddled with 13 digits on my hands I would probably think differently.

 

Physics came first and then came chemistry and later biology

August 19, 2015

I generally take it that there are only 3 basic sciences: physics, chemistry and biology. I take logic to be the philosophical framework and the background for the observation of the universe. Mathematics is then not a science but a language by which the observations of the universe can be addressed. All other sciences are combinations or derivatives of the three basic sciences. Geology, astronomy, cosmology, psychology, sociology, archaeology, and all the rest derive from the basic three.

I was listening to a report today about some Japanese researchers who generated protein building blocks by recreating the impacts of comets containing water, amino acids and silicate. Some of the amino acids linked together to form peptides (chained molecules). Recurring lengths of peptide chains form proteins and that leads to life. What interested me though was the element of time.

Clearly “chemistry” had to exist before “biology” came into existence. Chemistry therefore not only comes first and ranks “higher” in the hierarchy of the existence of things but is also a necessary, but insufficient, requirement for “biology” to exist. Chemistry plus some “spark” led to biology. In that case the basic sciences are reduced to two since biology derives from chemistry. I cannot conceive of biology preceding chemistry. The elements and atoms and molecules of chemistry had to exist before the “spark” of something brought biology into existence.

chemical reactions (chemistry) + “spark of life” (physics?) = biology

By the same token, does physics precede chemistry? I think it must. Without the universe existing (physics) and all the elements existing within it (which is also physics) and without all the forces acting upon the elements (still physics), there would be no chemistry to exist. Or perhaps the Big Bang was physics and the creation of the elements itself was chemistry? But considering that nuclear reactions (fusion or fission) and the creation of new elements are usually considered physics, it would seem that the existence of physics preceded the existence of chemistry. The mere existence of elements would be insufficient to set in motion reactions between the elements. Some other forces are necessary for that (though some of these forces are even necessary for the existence of the elements). Perhaps physics gives the fundamental particles (whatever they are) and then chemistry begins with the formation of elements? Whether chemistry starts with elements or with the fundamental particles, physics not only must rank higher as a science, it must have come first. Particles must first exist before they can react with each other.

Particles (physics) + forces (physics) = chemistry.

In any event, and by whatever route I follow, physics preceded chemistry, and physics must exist first for chemistry to come into being. That makes chemistry a derivative of physics as biology is a derivative of chemistry.

We are left with just one fundamental science – physics.


Fifteenth, convex, tiling pentagon found

August 16, 2015

You cannot tile a floor using only regular, identical, convex pentagons.

PlusMaths: 

In 1918 the German mathematician Karl Reinhardt discovered five types of convex pentagon that can tile the plane. (The pentagons that belong to a particular type all share a common feature — see this paper for a description of the types.) Then there was a slow trickle of discoveries through the century, with Rolf Stein eventually bringing the number of types up to fourteen in 1985. (You can read more about the discoveries in Alex Bellos’ Guardian article.) And now, thirty years later, Casey Mann, Jennifer McLoud and David Von Derau of the University of Washington Bothell have announced that they have found another convex pentagon that can tile the plane:

New tiling pentagon

All the fifteen known, convex, tiling pentagons are shown below with the new one at the bottom right.

fifteen known tiling pentagons

“Infinite” is of a lower order than “boundless”

May 23, 2015

In common usage, “boundless” is often used as a synonym for “infinite”. But of course the two words represent two quite different properties and, I think, are unnecessarily conflated: to the detriment of both language and understanding. I generally assume “infinite” to apply to quantifiable or countable (i.e. capable of being counted) things, whereas I take “boundless” to apply to both qualitative concepts and “countable” things.

“Infinite” is thus of a lower order than “boundless” since it can be applied only to the subset of “countable” things in the set of all things.

“The infinite” is patently impossible since the definite article can only be applied to the finite. Of course, “the Infinite” is often used to describe “the divine” which only serves to illustrate the paradox inherent in divinity.

So I sometimes find the use of “infinite” as an adjective a little grating. A specific number is not “countable”, it is itself the “count”. So I find the use of “infinite numbers” or “infinite sets” somewhat misleading. Each and every number or set of numbers is – and has to be – finite. It is only the number of terms in the set which may be infinite. Each set once specified is fixed and distinct from any other set. It may contain an infinite number of terms but the set is finite. You could also say that such a set “extends to infinity” or that the set is “boundless”. The number of such sets can also be said to be infinite or boundless.

The distinction between boundless (or endless) and infinite is of no great significance except when the two properties need to be distinguished. For example, the Koch snowflake is an example of a set of lines of increasing length being drawn within a bounded space. It is only the length of the line – being quantifiable – which tends to the infinite with an infinite number of iterations. Note that every iteration only produces a line of finite length but the number of terms in the set is infinite.

Koch’s snowflake – 4 iterations

The Koch curve tends to an infinite length because the total length of the curve increases by one third with each iteration. Each iteration creates four times as many line segments as in the previous iteration, with the length of each one being one-third the length of the segments in the previous stage. Hence the length of the curve after n iterations will be (4/3)^n times the original triangle perimeter, which is unbounded as n tends to infinity.
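The bookkeeping is easily tabulated – a short sketch (mine) confirming the (4/3)^n growth:

    # each iteration turns every segment into 4 segments of 1/3 the length
    segments, seg_len = 3, 1.0  # a triangle with unit sides
    for n in range(6):
        print(f"n={n}: {segments:5d} segments, total length {segments * seg_len:.3f}")
        segments *= 4
        seg_len /= 3
    # total length after n iterations = 3 * (4/3)**n, unbounded as n grows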

I am told that the universe is expanding and may be infinite but bounded. Or it may be infinite and boundless. Or it may be finite and bounded. Whether the universe is infinite is a different question to whether it is bounded. In fact the term “infinite” can only be applied to some quantifiable property of the Universe (its mass, its diameter, its density, the number of stars or galaxies it contains …), whereas its boundedness can be applied to any qualitative or quantitative property. In one sense the universe where we assume that the fundamental laws of nature apply everywhere must be bounded – if nothing else – at least by the laws of nature that we discern.

Currently the thinking regarding the shape of the universe is:

NASA:

  • If space has negative curvature, there is insufficient mass to cause the expansion of the universe to stop. In such a case, the universe has no bounds, and will expand forever. This is called an open universe.
  • If space has no curvature (i.e, it is flat), there is exactly enough mass to cause the expansion to stop, but only after an infinite amount of time. Thus, the universe has no bounds and will also expand forever, but with the rate of expansion gradually approaching zero after an infinite amount of time. This is termed a flat universe or a Euclidian universe (because the usual geometry of non-curved surfaces that we learn in high school is called Euclidian geometry).
  • If space has positive curvature, there is more than enough mass to stop the present expansion of the universe. The universe in this case is not infinite, but it has no end (just as the area on the surface of a sphere is not infinite but there is no point on the sphere that could be called the “end”). The expansion will eventually stop and turn into a contraction. Thus, at some point in the future the galaxies will stop receding from each other and begin approaching each other as the universe collapses on itself. This is called a closed universe.

A universe with some infinite property in a bounded space only begs the question as to what lies in the space beyond the bounds. It also occurs to me that an endlessly expanding universe has to first assume that empty space – which should contain nothing – must actually contain the property of distance. That too is a bound, for if space did not even contain the property of distance, any expansion would be undefined. (And what does distance mean between two points in truly empty space?).

Imagination can be boundless – rather than infinite – and can even extend beyond the bounds of what we can perceive. In reality even our imaginations are often bounded by the limitations of our modes of expression of language and music and painting. Our emotions can be said to be boundless though they too are bounded by physiological limits.

A bounded universe of boundless infinities it would seem, rather than one of infinite infinities, and certainly not one of infinite boundlessnesses.

Mathematical images by Yeganeh

January 11, 2015

Hamid Naderi Yeganeh, “A Bird in Flight” (November 2014)

This image is like a bird in flight. It shows 2000 line segments. For each i = 1, 2, 3, …, 2000 the endpoints of the i-th line segment are:
(3(sin(2πi/2000))^3, −cos(8πi/2000))
and
((3/2)(sin(2πi/2000))^3, (−1/2)cos(6πi/2000)).
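Those endpoint formulas can be fed straight to matplotlib – a hedged reconstruction (my code, not Yeganeh’s) that draws the 2000 segments:

    import numpy as np
    import matplotlib.pyplot as plt

    i = np.arange(1, 2001)
    t = 2 * np.pi * i / 2000
    x1, y1 = 3 * np.sin(t) ** 3, -np.cos(8 * np.pi * i / 2000)
    x2, y2 = 1.5 * np.sin(t) ** 3, -0.5 * np.cos(6 * np.pi * i / 2000)

    plt.figure(figsize=(6, 6))
    plt.plot([x1, x2], [y1, y2], "k-", linewidth=0.2)  # one line per segment
    plt.axis("equal")
    plt.axis("off")
    plt.show()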

See his gallery of images here.

Hamid Naderi Yeganeh is a bachelor’s student of mathematics at the University of Qom. He won a gold medal at the 38th Iranian Mathematical Society’s Competition (2014).

A Generalization of Wallis Product by Mahdi Ahmadinia and Hamid Naderi Yeganeh

PlusMaths writes:

…but it’s actually a collection of points in the plane given by a mathematical formula. To be precise, it’s a subset of the complex plane consisting of points of the form

  \[ \lambda A(t)+(1-\lambda )B(t), \]    

where

  \[ A(t)= 3(\sin (t))^{3}- \frac{3i}{4}\cos (4t) \]    

and

  \[ B(t)= \frac{3}{2}(\sin (t))^{5} - \frac{i}{2}\cos (3t) \]    

for $0\leq t \leq 2\pi $ and $0\leq \lambda \leq 1.$

The image was created by Hamid Naderi Yeganeh.

