Archive for the ‘Mathematics’ Category

What is “statistically significant” is not necessarily significant

October 12, 2016

“Statistical significance” is “a mathematical machine for turning baloney into breakthroughs, and flukes into funding” – Robert Matthews.

Tests for statistical significance generate a p-value, which is the probability of obtaining observations at least as extreme as those actually made if the null hypothesis were true (the null hypothesis being that the observations are not a real effect and fall within the bounds of randomness). So a low p-value only indicates that the observations would be improbable if the null hypothesis held, and it is then taken as “statistically significant” evidence that the observations do, in fact, describe a real effect. Quite arbitrarily it has become the custom to use 0.05 (5%) as the threshold p-value to distinguish between “statistically significant” and not. Why 5% has become the “holy number” which separates acceptance for publication from rejection, or success from failure, is a little irrational. Actually what “statistically significant” means is that “the observations may or may not be a real effect, but there is a low probability that observations this extreme would arise entirely by chance”.

Even when some observations are considered just “statistically significant”, there is a 1-in-20 chance that results this extreme could have arisen from chance alone. Moreover it is conveniently forgotten that statistical significance is called for only when we don’t know. In a coin toss there is certainty (100% probability) that the outcome will be heads, tails or, very rarely, a coin landing on its edge. Thereafter, assigning a probability to one of the only 3 possible outcomes can be helpful – but it is a probability constrained within the 100% certainty of the 3 outcomes. If a million people take part in a lottery, then the 1:1,000,000 probability of a particular individual winning has significance because there is 100% certainty that one of them will win. But when conducting clinical tests for a new drug, there is often no such certainty anywhere to provide a framework and a boundary within which to apply a probability.
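To see roughly what that 1-in-20 means in practice, here is a minimal simulation sketch in Python (the two-sample t-test, the group sizes and the number of repetitions are my own illustrative assumptions):

  # Minimal sketch: how often pure noise crosses the p < 0.05 threshold.
  # Both groups are drawn from the same distribution, so the null hypothesis is true.
  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(0)
  n_trials, n_per_group, false_positives = 10_000, 20, 0

  for _ in range(n_trials):
      a = rng.normal(size=n_per_group)   # "treatment" group: no real effect
      b = rng.normal(size=n_per_group)   # "control" group: same distribution
      _, p = stats.ttest_ind(a, b)
      if p < 0.05:
          false_positives += 1

  # Prints roughly 0.05: about 1 in 20 of these null experiments is declared "significant".
  print(false_positives / n_trials)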

A new article in Aeon by David Colquhoun, Professor of pharmacology at University College London and a Fellow of the Royal Society, addresses The Problem with p-values.

In 2005, the epidemiologist John Ioannidis at Stanford caused a storm when he wrote the paper ‘Why Most Published Research Findings Are False’, focusing on results in certain areas of biomedicine. He’s been vindicated by subsequent investigations. For example, a recent article found that repeating 100 different results in experimental psychology confirmed the original conclusions in only 38 per cent of cases. It’s probably at least as bad for brain-imaging studies and cognitive neuroscience. How can this happen?

The problem of how to distinguish a genuine observation from random chance is a very old one. It’s been debated for centuries by philosophers and, more fruitfully, by statisticians. It turns on the distinction between induction and deduction. Science is an exercise in inductive reasoning: we are making observations and trying to infer general rules from them. Induction can never be certain. In contrast, deductive reasoning is easier: you deduce what you would expect to observe if some general rule were true and then compare it with what you actually see. The problem is that, for a scientist, deductive arguments don’t directly answer the question that you want to ask.

What matters to a scientific observer is how often you’ll be wrong if you claim that an effect is real, rather than being merely random. That’s a question of induction, so it’s hard. In the early 20th century, it became the custom to avoid induction, by changing the question into one that used only deductive reasoning. In the 1920s, the statistician Ronald Fisher did this by advocating tests of statistical significance. These are wholly deductive and so sidestep the philosophical problems of induction.

Tests of statistical significance proceed by calculating the probability of making our observations (or the more extreme ones) if there were no real effect. This isn’t an assertion that there is no real effect, but rather a calculation of what would be expected if there were no real effect. The postulate that there is no real effect is called the null hypothesis, and the probability is called the p-value. Clearly the smaller the p-value, the less plausible the null hypothesis, so the more likely it is that there is, in fact, a real effect. All you have to do is to decide how small the p-value must be before you declare that you’ve made a discovery. But that turns out to be very difficult.

The problem is that the p-value gives the right answer to the wrong question. What we really want to know is not the probability of the observations given a hypothesis about the existence of a real effect, but rather the probability that there is a real effect – that the hypothesis is true – given the observations. And that is a problem of induction.

Confusion between these two quite different probabilities lies at the heart of why p-values are so often misinterpreted. It’s called the error of the transposed conditional. Even quite respectable sources will tell you that the p-value is the probability that your observations occurred by chance. And that is plain wrong. …….

……. The problem of induction was solved, in principle, by the Reverend Thomas Bayes in the middle of the 18th century. He showed how to convert the probability of the observations given a hypothesis (the deductive problem) to what we actually want, the probability that the hypothesis is true given some observations (the inductive problem). But how to use his famous theorem in practice has been the subject of heated debate ever since. …….

……. For a start, it’s high time that we abandoned the well-worn term ‘statistically significant’. The cut-off of P < 0.05 that’s almost universal in biomedical sciences is entirely arbitrary – and, as we’ve seen, it’s quite inadequate as evidence for a real effect. Although it’s common to blame Fisher for the magic value of 0.05, in fact Fisher said, in 1926, that P = 0.05 was a ‘low standard of significance’ and that a scientific fact should be regarded as experimentally established only if repeating the experiment ‘rarely fails to give this level of significance’.

The ‘rarely fails’ bit, emphasised by Fisher 90 years ago, has been forgotten. A single experiment that gives P = 0.045 will get a ‘discovery’ published in the most glamorous journals. So it’s not fair to blame Fisher, but nonetheless there’s an uncomfortable amount of truth in what the physicist Robert Matthews at Aston University in Birmingham had to say in 1998: ‘The plain fact is that 70 years ago Ronald Fisher gave scientists a mathematical machine for turning baloney into breakthroughs, and flukes into funding. It is time to pull the plug.’ ………
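The Bayesian point above can be made concrete with a back-of-the-envelope sketch; the prior (10% of tested hypotheses being real effects) and the power (80%) are illustrative assumptions of mine, not figures from the article:

  # Sketch of the transposed conditional: P(real effect | "significant") is not 1 - p.
  # Assumed, illustrative numbers: 10% of tested hypotheses are real effects,
  # the tests have 80% power, and the significance threshold is 0.05.
  prior_real = 0.10   # fraction of tested hypotheses that are actually true
  power      = 0.80   # P(significant | real effect)
  alpha      = 0.05   # P(significant | no effect)

  p_significant = prior_real * power + (1 - prior_real) * alpha
  p_real_given_significant = prior_real * power / p_significant

  print(p_real_given_significant)      # about 0.64
  print(1 - p_real_given_significant)  # about 0.36: a false discovery rate far above 5%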

Related: Demystifying the p-value


The origins of base 60

September 14, 2015

I like 60. Equilaterals. Hexagons. Easy to divide by almost anything. Simple integers for halves, quarters, thirds, fifths, sixths, tenths, 12ths, 15ths and 30ths. 3600. 60Hz. Proportions pleasing to the eye. Recurring patterns. Harmonic. Harmony.

The origins of the use of base 60 are lost in the ancient past. By the time the Sumerians were using it, more than 4,000 years ago, it was already well established, and it continued through the Babylonians. But the origin lies much earlier.

a hand of 5

I speculate that counting – in any form more complex than “one, two, many….” – probably goes back around 50,000 years. I have little doubt that the fingers of one hand were the first counting aids that were ever used, and that the base 10 given by two hands came to dominate. Why then would the base 60 even come into being?

The answer, I think, still lies in one hand. Hunter-gatherers, when required to count, would prefer to use only one hand, and they must – quite early on and quite often – have needed to count to numbers greater than five. And of course, using the thumb as a pointer, one gets to 12 by reckoning up the 3 bones on each of the other 4 fingers.

a hand of 12 – image sweetscience

My great-grandmother used to count this way when checking the numbers of vegetables (onions, bananas, aubergines) bought by her maid at market. Counting up to 12 usually sufficed for this. When I was a little older, I remember my grandmother using both hands to check off bags of rice brought in from the fields – and of course with two hands she could get to 144. The counting of 12s most likely developed in parallel with counting in base 10 (5, 10, 50, 100). The advantageous properties of 12 as a number were fortuitous rather than intentional. But certainly the advantages helped in the persistence of 12 as a base. And so we still have a dozen (12), a gross (12×12) and even a great gross (12×12×12) in use today. Possibly different groups of ancient man used one or the other of the systems predominantly. But as groups met and mixed and warred or traded with each other, the systems coalesced.

hands for 60

And then 60 becomes inevitable. Your hand of 5, with my hand of 12, gives the 60 which also persists into the present. (There is one theory that 60 developed as 3 x 20, but I think finger counting and the 5 x 12 it leads to is far more compelling.) But it is also fairly obvious that the use of 12 must have been prevalent before the 60 could appear. Though the use of 60 seconds and 60 minutes is all-pervasive, it is worth noting that they could only come after each day and each night had been divided into 12 hours.

While the use of base 10 and 12 probably came first with the need for counting generally and then for trade purposes (animals, skins, weapons, tools…..), the 12 and the 60 came together to dominate the measuring and reckoning of time. Twelve months to a year with 30 days to a month. Twelve hours to a day or a night and 60 parts to the hour and 60 parts to those minutes. There must have been a connection – in time as well as in the concepts of cycles – between the “invention” of the calendar and the geometrical properties of the circle. The number 12 has great significance in Hinduism, in Judaism, in Christianity and in Islam. The 12 Adityas, the 12 tribes of Israel, the 12 days of Christmas, the 12 Imams are just examples. My theory is that simple sun and moon-based religions gave way to more complex religions only after symbols and writing appeared and gave rise to symbolism.

Trying to construct a time-line is just speculation. But one nice thing about speculation is that the constraints of known facts are very loose and permit any story which fits. So I put the advent of numbers and counting at around 50,000 years ago, first with base 10 and later with base 12. The combination of base 10 with base 12, I put at around 20,000 years ago, when agricultural settlements were just beginning. The use of 60 must then coincide with the first structured astronomical observations, after the advent of writing and after the establishment of permanent settlements. It is permanent settlements, I think, which allowed regular observations of cycles, which allowed specialisations and the development of symbols and religion and the privileged priesthood. That probably puts us at about 8,000-10,000 years ago, as agriculture was also taking off, probably somewhere in the Fertile Crescent.

Wikipedia: The Egyptians since 2000 BC subdivided daytime and nighttime into twelve hours each, hence the seasonal variation of the length of their hours.

The Hellenistic astronomers Hipparchus (c. 150 BC) and Ptolemy (c. AD 150) subdivided the day into sixty parts (the sexagesimal system). They also used a mean hour (1/24 day); simple fractions of an hour (1/4, 2/3, etc.); and time-degrees (1/360 day, equivalent to four modern minutes).

The Babylonians after 300 BC also subdivided the day using the sexagesimal system, and divided each subsequent subdivision by sixty: that is, by 1/60, by 1/60 of that, by 1/60 of that, etc., to at least six places after the sexagesimal point – a precision equivalent to better than 2 microseconds. The Babylonians did not use the hour, but did use a double-hour lasting 120 modern minutes, a time-degree lasting four modern minutes, and a barleycorn lasting 3 1/3 modern seconds (the helek of the modern Hebrew calendar), but did not sexagesimally subdivide these smaller units of time. No sexagesimal unit of the day was ever used as an independent unit of time.
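The “better than 2 microseconds” figure is easy to check; a minimal sketch, assuming a day of 86,400 modern seconds:

  # Six sexagesimal places: the smallest increment is one part in 60**6 of a day.
  day_in_seconds = 24 * 60 * 60            # 86,400 s
  smallest_unit = day_in_seconds / 60**6   # about 1.85e-6 s, i.e. under 2 microseconds
  print(smallest_unit)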

Today the use of 60 still predominates for time, for navigation and for geometry. But generally only for units already defined in antiquity. A base of 10 is used for units found to be necessary in more recent times. Subdivision of a second of time or a second of arc always uses the decimal system rather than the duodecimal or the sexagesimal system.

If we had six fingers on each hand the decimal system would never have seen the light of day. A millisecond would then be 1/1728th of a second. It is a good thing we don’t have 7 fingers on each hand, or – even worse – one hand with 6 fingers and one with 7. Arithmetic with a tridecimal system of base 13 does not entice me. But if I were saddled with 13 digits on my hands I would probably think differently.
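What reckoning in these bases amounts to can be sketched in a few lines; the helper function below is mine, purely for illustration:

  # Convert an ordinary count into its digits in an arbitrary base, most significant first.
  def to_base(n, base):
      digits = []
      while n:
          n, remainder = divmod(n, base)
          digits.append(remainder)
      return digits[::-1] or [0]

  print(to_base(3600, 60))   # [1, 0, 0]     an hour of seconds is 1,0,0 in base 60
  print(to_base(1728, 12))   # [1, 0, 0, 0]  a great gross is 1,0,0,0 in base 12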


Physics came first and then came chemistry and later biology

August 19, 2015

I generally take it that there are only 3 basic sciences, physics, chemistry and biology. I take logic to be the philosophical framework and the background for the observation of the universe. Mathematics is then not a science but a language by which the observations of the universe can be addressed. All other sciences are combinations or derivatives of the three basic sciences. Geology, astronomy, cosmology, psychology, sociology, archaeology, and all the rest derive from the basic three.

I was listening to a report today about some Japanese researchers  who generated protein building blocks by recreating impacts by comets containing water, amino acids and silicate. Some of the amino acids linked together to form peptides (chained molecules). Recurring lengths of peptide chains form proteins and that leads to life. What interested me though was the element of time.

Clearly “chemistry” had to exist before “biology” came into existence. Chemistry therefore not only comes first and “higher” in the hierarchy of the existence of things but is also a necessary, but insufficient, requirement for “biology” to exist. Chemistry plus some “spark” led to biology. In that case the basic sciences are reduced to two, since biology derives from chemistry. I cannot conceive of biology preceding chemistry. The elements and atoms and molecules of chemistry had to exist before the “spark” of something brought biology into existence.

chemical reactions (chemistry) + “spark of life”(physics?) = biology

By the same token, does physics precede chemistry? I think it must. Without the universe existing (physics) and all the elements existing within it (which is also physics) and without all the forces acting upon the elements (still physics), there would be no chemistry to exist. Or perhaps the Big Bang was physics and the creation of the elements itself was chemistry? But considering that nuclear reactions (fusion or fission) and the creation of new elements are usually considered physics, it would seem that the existence of physics preceded the existence of chemistry. The mere existence of elements would be insufficient to set in motion reactions between the elements. Some other forces are necessary for that (though some of these forces are even necessary for the existence of the elements). Perhaps physics gives the fundamental particles (whatever they are) and then chemistry begins with the formation of elements? Whether chemistry starts with elements or with the fundamental particles, physics not only must rank higher as a science, it must have come first. Particles must first exist before they can react with each other.

Particles (physics) + forces (physics) = chemistry.

In any event, and by whatever route I follow, physics preceded chemistry, and physics must exist first for chemistry to come into being. That makes chemistry a derivative of physics as biology is a derivative of chemistry.

We are left with just one fundamental science – physics.

by elfbrazil wikipedia

Fifteenth, convex, tiling pentagon found

August 16, 2015

You cannot tile a floor only with regular, identical, convex pentagons.
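The reason is a short angle count: the interior angle of a regular pentagon does not divide evenly into 360°, so identical regular pentagons cannot fit together around a point without gaps or overlaps.

  \[ \text{interior angle} = \frac{(5-2)\times 180^{\circ}}{5} = 108^{\circ}, \qquad \frac{360^{\circ}}{108^{\circ}} = \frac{10}{3} \]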


In 1918 the German mathematician Karl Reinhardt discovered five types of convex pentagon that can tile the plane. (The pentagons that belong to a particular type all share a common feature — see this paper for a description of the types.) Then there was a slow trickle of discoveries through the century, with Rolf Stein eventually bringing the number of types up to fourteen in 1985. (You can read more about the discoveries in Alex Bellos’ Guardian article.) And now, thirty years later, Casey Mann, Jennifer McLoud and David Von Derau of the University of Washington Bothell have announced that they have found another convex pentagon that can tile the plane:

New tiling pentagon

All the fifteen known, convex, tiling pentagons are shown below with the new one at the bottom right.

fifteen known tiling pentagons

“Infinite” is of a lower order than “boundless”

May 23, 2015

In common usage, “boundless” is often used as a synonym for “infinite”. But of course the two words represent two quite different properties and, I think, are unnecessarily conflated: to the detriment of both language and understanding. I generally assume “infinite” to apply to quantifiable or countable (i.e. capable of being counted) things, whereas I take “boundless” to apply to both qualitative concepts and “countable” things.

“Infinite” is thus of a lower order than “boundless” since it can be applied only to the subset of “countable” things in the set of all things.

“The infinite” is patently impossible since the definite article can only apply to the finite. Of course, “the Infinite” is often used to describe “the divine”, which only serves to illustrate the paradox inherent in divinity.

So I sometimes find the use of “infinite” as an adjective a little grating. A specific number is not “countable”, it is itself the “count”. So I find the use of “infinite numbers” or “infinite sets” somewhat misleading. Each and every number or set of numbers is – and has to be – finite. It is only the number of terms in the set which may be infinite. Each set once specified is fixed and distinct from any other set. It may contain an infinite number of terms but the set is finite. You could also say that such a set “extends to infinity” or that the set is “boundless”. The number of such sets can also be said to be infinite or boundless.

The distinction between boundless (or endless) and infinite is of no great significance except when the two properties need to be distinguished. For example, the Koch snowflake is an example of a set of lines of increasing length being drawn within a bounded space. It is only the length of the line – being quantifiable – which tends to the infinite with an infinite number of iterations. Note that every iteration only produces a line of finite length but the number of terms in the set is infinite.

Koch’s snowflake – 4 iterations

The Koch curve tends to an infinite length because the total length of the curve increases by one-third with each iteration. Each iteration creates four times as many line segments as in the previous iteration, with the length of each one being one-third the length of the segments in the previous stage. Hence the length of the curve after n iterations will be (4/3)^n times the original triangle perimeter, which is unbounded as n tends to infinity.
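A quick numerical sketch of that growth (the unit side length, and hence the starting perimeter of 3, is an arbitrary assumption):

  # Length of the Koch curve after n iterations, starting from a triangle of
  # perimeter 3 (unit sides). Each iteration multiplies the total length by 4/3.
  perimeter = 3.0
  for n in range(6):
      print(n, round(perimeter, 4))
      perimeter *= 4 / 3
  # The segment count grows as 3 * 4**n while each segment shrinks to (1/3)**n,
  # so the total length 3 * (4/3)**n grows without bound.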

I am told that the universe is expanding and may be infinite but bounded. Or it may be infinite and boundless. Or it may be finite and bounded. Whether the universe is infinite is a different question to whether it is bounded. In fact the term “infinite” can only be applied to some quantifiable property of the Universe (its mass, its diameter, its density, the number of stars or galaxies it contains …), whereas its boundedness can be applied to any qualitative or quantitative property. In one sense the universe where we assume that the fundamental laws of nature apply everywhere must be bounded – if nothing else – at least by the laws of nature that we discern.

Currently the thinking regarding the shape of the universe is:


  • If space has negative curvature, there is insufficient mass to cause the expansion of the universe to stop. In such a case, the universe has no bounds, and will expand forever. This is called an open universe.
  • If space has no curvature (i.e., it is flat), there is exactly enough mass to cause the expansion to stop, but only after an infinite amount of time. Thus, the universe has no bounds and will also expand forever, but with the rate of expansion gradually approaching zero after an infinite amount of time. This is termed a flat universe or a Euclidean universe (because the usual geometry of non-curved surfaces that we learn in high school is called Euclidean geometry).
  • If space has positive curvature, there is more than enough mass to stop the present expansion of the universe. The universe in this case is not infinite, but it has no end (just as the area on the surface of a sphere is not infinite but there is no point on the sphere that could be called the “end”). The expansion will eventually stop and turn into a contraction. Thus, at some point in the future the galaxies will stop receding from each other and begin approaching each other as the universe collapses on itself. This is called a closed universe.

A universe with some infinite property in a bounded space only raises the question of what lies in the space beyond the bounds. It also occurs to me that an endlessly expanding universe has to first assume that empty space – which should contain nothing – must actually contain the property of distance. That too is a bound, for if space did not even contain the property of distance, any expansion would be undefined. (And what does distance mean between two points in truly empty space?)

Imagination can be boundless – rather than infinite – and can even extend beyond the bounds of what we can perceive. In reality even our imaginations are often bounded by the limitations of our modes of expression of language and music and painting. Our emotions can be said to be boundless though they too are bounded by physiological limits.

A bounded universe of boundless infinities it would seem, rather than one of infinite infinities, and certainly not one of infinite boundlessnesses.

Mathematical images by Yeganeh

January 11, 2015
Yeganeh bird in flight

Hamid Naderi Yeganeh, “A Bird in Flight” (November 2014)

This image is like a bird in flight. It shows 2000 line segments. For each i=1, 2, 3, … , 2000 the endpoints of the i-th line segment are:
(3(sin(2πi/2000)^3), -cos(8πi/2000))
((3/2)(sin(2πi/2000)^3), (-1/2)cos(6πi/2000)).
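A minimal plotting sketch of the same construction (the figure size and styling are my own choices, not part of the original):

  # Draw the 2000 line segments of "A Bird in Flight" from the endpoints above.
  import numpy as np
  import matplotlib.pyplot as plt

  i = np.arange(1, 2001)
  x1, y1 = 3 * np.sin(2 * np.pi * i / 2000) ** 3, -np.cos(8 * np.pi * i / 2000)
  x2, y2 = 1.5 * np.sin(2 * np.pi * i / 2000) ** 3, -0.5 * np.cos(6 * np.pi * i / 2000)

  plt.figure(figsize=(6, 6))
  plt.plot([x1, x2], [y1, y2], linewidth=0.2, color="black")  # one line per segment
  plt.axis("equal")
  plt.axis("off")
  plt.show()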

See his gallery of images here.

Hamid Naderi Yeganeh is a Bachelor’s student of mathematics at the University of Qom. He won a gold medal at the 38th Iranian Mathematical Society’s Competition (2014).

A Generalization of Wallis Product by Mahdi Ahmadinia and Hamid Naderi Yeganeh

PlusMaths writes:

…but it’s actually a collection of points in the plane given by a mathematical formula. To be precise, it’s a subset of the complex plane consisting of points of the form

  \[ \lambda A(t)+(1-\lambda )B(t), \]    


  \[ A(t)= 3(\sin (t))^{3}- \frac{3i}{4}\cos (4t) \]    


  \[ B(t)= \frac{3}{2}(\sin (t))^{5} - \frac{i}{2}\cos (3t) \]    

for $0\leq t \leq 2\pi $ and $0\leq \lambda \leq 1.$

The image was created by Hamid Naderi Yeganeh.

The certainty of the improbable

January 4, 2015

When you toss a coin, there is complete certainty that an event that is only 50% probable will occur. When you roll a die there is absolute certainty that a 16.67% probable event will come to pass. It sounds trivial. After all, the probability is only to distinguish between outcomes once it is certain that the coin will be tossed or that the die will be rolled. Probability of an outcome is meaningless if the coin were not tossed or the die not rolled. But note also that the different outcomes must be pre-defined. If you toss a silver coin in the air, the return of a golden coin is not included among the pre-defined, possible outcomes. That a roll of the die can result in a 9 is not “on the cards”.

Probability or improbability of an event or a causal relationship is meaningless unless some more general event or relationship is certain. Moreover, as soon as we define the event or relationship to which we allocate a probability, we also define that that event or relationship is permitted. It is certain that tomorrow will be another day. Only because it is certain can we consider the probability – or improbability – of what weather tomorrow might bring. Suppose we define the weather as being either “good”, “bad” or “indifferent”. We can guess or calculate the probability of tomorrow’s weather exhibiting one of these 3 permitted outcomes. My point is that as soon as we define the improbable we also make it certain that the selected outcomes are all permitted. Then even the most improbable – but permitted – weather outcome will, on some day, occur. Not just permitted – but certain. If the improbable never happens then it is impossible – not improbable.

We use statistics and probabilities of occurrence because we don’t know the mechanisms which govern the outcome. If the mechanisms were known in their entirety, we would just calculate the result – not the probability of a particular result. The very mention of a probability is always an admission of ignorance. It means that we cannot tell what makes something probable or improbable, and even what we consider improbable will surely occur. An outcome of even very low probability will then – given sufficient total occurrence – certainly occur. The 2011 earthquake and tsunami off the Tōhoku coast was a one-in-a-thousand-year occurrence. The probability of it happening next year remains at one in a thousand. But given another 1,000 years it will (almost) certainly happen again.
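The “(almost)” can be made precise with a small sketch, treating the event as an independent 1-in-1,000 chance each year (an assumption; real earthquake recurrence is not independent):

  # Probability of at least one occurrence of a 1-in-1,000-per-year event,
  # assuming independence from year to year.
  p_per_year = 1 / 1000
  for years in (1, 100, 1000, 3000):
      print(years, round(1 - (1 - p_per_year) ** years, 3))
  # 1,000 years gives about 0.63; 3,000 years about 0.95. Likely, though not quite certain.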

One of my concerns is that the use of statistics and probability – say in medical trials – is usually taken to imply knowledge, but it is actually an admission of ignorance. No doubt the use of statistics and probability help to constrain the boundaries of the ignorance, but the bottom line is that even the low probability risks will materialise. The very use of probabilities is always because of a lack of knowledge, because of ignorance.

In the beginning of December I was having a regular medical check-up and I was offered a flu shot for the winter, which I took. But I got to wondering why I did. The influenza vaccine is effective in about 50% of cases (i.e. 50% achieve protection). Around 5% – irrespective of whether they achieve protection or not – suffer some adverse reaction to the shot. Around 0.5% of that 5% (1 in 4,000 of the total vaccinated) suffer a fatal reaction. In our little clinic perhaps 3,000 were vaccinated this winter. About 1,500 would have achieved protection and about 150 must have had some adverse reaction. On those numbers the expected count of fatal reactions is close to one. I was just gambling that I would not be that one person. When some new drug is said to have a 1% chance of adverse effects it only means that it will certainly have adverse effects for 1 in a hundred cases. When that one person chooses to take that drug, he may be making the best medical choice possible – but it is the wrong choice.
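Using the post’s own rates (which are the argument’s assumptions, not verified clinical figures), the expected counts for a clinic of 3,000 work out as follows:

  # Expected counts for 3,000 vaccinations, using the rates stated above
  # (these rates are the post's assumptions, not verified clinical figures).
  vaccinated = 3000
  protected      = vaccinated * 0.50        # about 1,500 achieve protection
  adverse        = vaccinated * 0.05        # about 150 with some adverse reaction
  fatal_expected = vaccinated * (1 / 4000)  # 0.75 expected fatal reactions
  print(protected, adverse, fatal_expected)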

A low risk for the multitude but a certainty for some. The chances of something improbable never happening are virtually zero.

Improbable – but certain.

How to drill a square hole

August 26, 2014
how to drill a square hole (via imgur)

gif image from here

First woman ever among four awarded 2014 Fields medals

August 13, 2014

The Fields medal is the most prestigious award for mathematics and was first awarded in 1936. For 2014, four winners were announced this week and Maryam Mirzakhani, Professor of Mathematics at Stanford, becomes the first woman ever to be awarded a Fields medal.

Though women are generally underrepresented in mathematics – I suspect partly because of a lack of interest and partly because it is not a “politically correct” occupation – there have been many prominent female mathematicians. But this is the first time in the almost 80 years since it was established that a woman has won the Fields medal.

The Fields Medal is awarded every four years on the occasion of the International Congress of Mathematicians to recognize outstanding mathematical achievement for existing work and for the promise of future achievement.

The Fields Medal Committee is chosen by the Executive Committee of the International Mathematical Union and is normally chaired by the IMU President. It is asked to choose at least two, with a strong preference for four, Fields Medallists, and to have regard in its choice to representing a diversity of mathematical fields. A candidate’s 40th birthday must not occur before January 1st of the year of the Congress at which the Fields Medals are awarded.

The Guardian:

Maryam Mirzakhani, a professor of mathematics at Stanford University in California, was named the first female winner of the Fields Medal – often described as the Nobel prize for mathematics – at a ceremony in Seoul on Wednesday morning.

The prize, worth 15,000 Canadian dollars, is awarded to exceptional talents under the age of 40 once every four years by the International Mathematical Union. Between two and four prizes are announced each time.

Three other researchers were named Fields Medal winners at the same ceremony in South Korea. They included Martin Hairer, a 38-year-old Austrian based at Warwick University in the UK; Manjul Bhargava, a 40-year-old Canadian-American at Princeton University in the US; and Artur Avila, 35, a Brazilian-French researcher at the Institute of Mathematics of Jussieu in Paris.

There have been 55 Fields medallists since the prize was first awarded in 1936, including this year’s winners. The Russian mathematician Grigori Perelman refused the prize in 2006 for his proof of the Poincaré conjecture.

The citations for the four winners:

Mystical threes and magic scaling number of the Efimov State

June 3, 2014

The number three has long been attributed with mystical and divine properties.

Time and Life itself is a matter of threes. Birth, life and death. The past, the present and the future. Third time lucky. Three wishes. The Holy Trinity. Three daughters. The Good, the Bad and the Ugly. The three primary colours. A Troika. Brahma, Vishnu, Shiva. The Creator, the Preserver, the Destroyer. Three monkeys. Three wise men. Three Kings.

Three has its place in physics and mathematics as well. Pascal’s triangle, the Golden Number and the Fibonacci series. A theoretical prediction that fundamental particles in sets of three give rise to stable arrangements of infinitely scalable, nesting sets has now been shown to be real – the Efimov State.

Wired: More than 40 years after a Soviet nuclear physicist proposed an outlandish theory that trios of particles can arrange themselves in an infinite nesting-doll configuration, experimentalists have reported strong evidence that this bizarre state of matter is real.

In 1970, Vitaly Efimov was manipulating the equations of quantum mechanics in an attempt to calculate the behavior of sets of three particles, such as the protons and neutrons that populate atomic nuclei, when he discovered a law that pertained not only to nuclear ingredients but also, under the right conditions, to any trio of particles in nature.

While most forces act between pairs, such as the north and south poles of a magnet or a planet and its sun, Efimov identified an effect that requires three components to spring into action. Together, the components form a state of matter similar to Borromean rings, an ancient symbol of three interconnected circles in which no two are directly linked. The so-called Efimov “trimer” could consist of a trio of protons, a triatomic molecule or any other set of three particles, as long as their properties were tuned to the right values. And in a surprising flourish, this hypothetical state of matter exhibited an unheard-of feature: the ability to range in size from practically infinitesimal to infinite. 

Efimov had shown that when three particles come together, a special confluence of their forces creates the Borromean rings effect: Though one is not enough, the effects of two particles can conspire to bind a third. The nesting-doll feature — called discrete scale invariance — arose from a symmetry in the equation describing the forces between three particles. If the particles satisfied the equation when spaced a certain distance apart, then the same particles spaced 22.7 times farther apart were also a solution. This number, called a “scaling factor,” emerged from the mathematics as inexplicably as pi, the ratio between a circle’s circumference and diameter.
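A sketch of what that discrete scale invariance means in practice, using the 22.7 scaling factor quoted above (the starting size is an arbitrary illustrative number, not a physical measurement):

  # Discrete scale invariance: if a trimer of a given size is a solution, one
  # 22.7 times larger is too. The starting size below is arbitrary.
  scaling_factor = 22.7
  size = 1.0
  for n in range(5):
      print(n, round(size, 1))
      size *= scaling_factor
  # Successive Efimov trimers form a geometric ladder: 1, 22.7, 515.3, 11697.1, ...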

Now it seems 3 different research teams have shown the existence of Efimov nesting.

“With just one example, it’s very difficult to tell if it’s a Russian nesting doll,” said Cheng Chin, a professor of physics at the University of Chicago who was part of Grimm’s group in 2006. The ultimate proof would be an observation of consecutive Efimov trimers, each enlarged by a factor of 22.7. “That initiated a new race” to prove the theory, Chin said. 

Eight years later, the competition to observe a series of Efimov states has ended in a photo finish. “What you see is three groups, in three different countries, reporting these multiple Efimov states all within about one month,” said Chin, who led one of the groups. “It’s totally amazing.”

Read the article.

Related: Physicists Prove Surprising Rule of Threes
