Archive for the ‘Science’ Category

Creation Myths

December 7, 2018

Religions have no answer to the question of how the universe began and merely invoke God (or a god among a pantheon of appropriate gods). Science has no answer either. No physicist or astrophysicist or cosmologist has the faintest idea where energy and matter came from. The disingenuous claim that a smooth nothing suddenly and spontaneously produced clumps of matter and anti-matter (such that the total remained nothing) is just as far-fetched as any creation myth. Energy is handled similarly. The otherwise homogeneous universe is allowed to have clumps of “something” provided they can be neutralised by equivalent amounts of “negative somethings”. The Big Bang is just a label for a Big Unknown.

Atheists, of course, don’t even try to answer the question. They are satisfied to say that the answer is unknown but they do know that it is not any kind of conception of God.

The other Big Question is: “How did life begin?”

Religions again have no answer and invoke God or gods. Science has no answer either and puts it down to random chemistry which became biochemistry and which, by accident, led to life.

Neither science nor religion nor philosophy has the faintest idea of what time is.

It’s all just Magic.


 


The limits of science

September 25, 2018
  1. Reality is limited to what is detectable by human senses (and the instruments which extend our senses). What cannot be detected is assumed – but cannot be proven – not to exist. Science is limited to what is known to exist and what is unknown but assumed to be knowable. Science has no means to address the unknowable.
  2. Time and causality. Science and its methods rely on causality, which in turn relies on the existence and passage of time. But what time is, and what actually passes, is unknown (being unknowable). Science cannot reach where causality does not exist.
  3. Boundary conditions. There is no branch of science (or philosophy) which does not rely on fundamental assumptions which are taken to be self-evident truths. But these assumptions cannot be proved and they cannot explain their own existence. Science and the methods of science cannot address anything outside their self-established boundary limits.
  4. Even the most fundamental and simple mathematics cannot prove its own axioms (Gödel’s Incompleteness Theorems). Science cannot address areas outside the assumptions of mathematics.
  5. Value judgements are invisible to science. The appreciation of any art or music or even literature is not subject to logic or causality or any science. Even the beauty seen by some in mathematics or cosmology or biology is not amenable to scientific analysis. Moral or ethical judgements are beyond the capabilities of science.
  6. The existence of life is self-evident and inexplicable. It is a boundary condition where science has no explanation for why the boundary exists. To call the beginning of life a “random event” is a statement steeped in just as much ignorance as attributing it to a Creator.
  7. The boundaries of consciousness are neither known nor understood. The perceptions of a consciousness of the surrounding universe define the universe. The perceptions form an impenetrable barrier beyond which science and the methods of science cannot reach.
  8. Fitch’s Knowability Paradox is sufficient to show the reality of the existence of the unknowable. Neither science nor philosophy nor language nor mathematics has the wherewithal to say anything about the unknowable. They have no light to shine in this area. An X-ray image cannot be seen in normal light.
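The reasoning behind Fitch’s paradox is short enough to sketch (writing Kp for “p is known by someone at some time” and ◇ for “possibly”):

```latex
% Fitch's paradox: if every truth is knowable, every truth is known.
\begin{align*}
\text{(KP)} \quad & \forall p\, (p \rightarrow \Diamond K p)
  && \text{every truth is knowable} \\
\text{Suppose} \quad & p \wedge \neg K p
  && \text{some truth is unknown} \\
\text{By (KP):} \quad & \Diamond K (p \wedge \neg K p)
  && \text{the unknown truth is knowable} \\
\text{But} \quad & K (p \wedge \neg K p) \rightarrow K p \wedge K \neg K p
  \rightarrow K p \wedge \neg K p
  && \text{a contradiction, so } \neg \Diamond K (p \wedge \neg K p) \\
\text{So} \quad & \forall p\, (p \rightarrow K p)
  && \text{every truth would already be known}
\end{align*}
```

Since some truths are plainly not known, some truths are not knowable – which is all the argument here needs.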

Science is utterly dependent upon causality.

So is Determinism, the philosophical theory that all events, including moral choices, are completely determined by previously existing causes. Determinism can never look beyond, or resolve, the First Cause problem. Of course determinism falls immediately at the hurdle of requiring infinite knowledge to be knowable, but proponents would counter that “unknown” does not invalidate causality. The First Cause is then merely shunted into the unknown – but knowable. Determinism would claim, by causality and the laws of the universe, that all that is unknown could potentially be known. Equally, every event of the past could be traced back through causal relations and be knowable. In fact determinism, which shuns the need for religions and gods, actually claims the existence of Omniscience. More than that, determinism requires that omniscience be possible. Determinism is absurd. “There is no God but Omniscience must be possible”. Reductio ad absurdum.

Causality, determinism and science are all prisoners of, and restricted to, the knowable.


 

When the waves of determinism break against the rocks of the unknowable

August 21, 2018

It is physics versus philosophy.

Causal determinism states that every event is necessitated by antecedent events and conditions together with the laws of nature.

Determinism, unlike fatalism, does not require that future events be destined to occur no matter what the past and current events are. It only states that every future event that does occur is an inevitable consequence of what has gone before and of the natural laws. However, inevitability does not mean – and does not need to mean – predictability by the human mind. It should be noted, though, that the existence of a specific causality does not by itself imply a general determinism extending across all space and all time. A general and absolute determinism is also not a necessary condition for applying the scientific method, though it could easily be taken to be so. The scientific method does require determinism, but only to the extent that causality applies within the observable range of empirical observations. But it is also therefore unavoidable that the scientific method can only discover causal connections. The scientific method, in itself, rejects the existence of, and is therefore incapable of detecting, non-causal connections.

Most physicists would claim that determinism prevails. (Some of them may concede compatibilism, but that is just a subterfuge to allow determinism to coexist with free will.) Determinism claims that causality is supreme; that the laws of nature (whether or not they are known to the mind of man) prevail in the universe such that whatever is occurring is caused by, and is a consequence of, what came before. And whatever will happen in the future is caused by what has occurred before and what is occurring now. Absolute determinism allows of no free will. It cannot. Clearly determinism allows of no gods or magic either. For determinism to apply, it does not require that all knowledge is known or that the natural laws have all been discovered. It does, however, require that everything is knowable. If the unknowable exists then not everything can be determined. It also requires that all natural laws be self-explanatory in themselves. For the physicist, even the uncertainty at the quantum level does not invalidate determinism because this uncertainty, they say, is not random but probabilistic. Even the weirdness brought about by loop quantum gravity theories does not, it is thought, invalidate the concept of determinism. Here the laws of nature, and time and space themselves, are emergent. They emerge from deeper, underlying “laws” as what we perceive as space and time and the four laws of nature. Where those underlying “laws” or rules or random excitations come from, or why, is, however, undefined and – more importantly – undefinable.

The concept that the universe, taken as a whole, is a zero-sum game does not take us any further. The concept postulates that the universe – taken globally – is a big nothing. Zero energy and zero charge globally, but with locally “lumpy” conditions to set off the Big Bang. Some positive energies and some equivalent negative energies such that the total is zero. I find this unsatisfactory in that the concepts of the universe being homogeneous and isotropic are then a function of scale (space) and of time. Allowing local lumpiness to exist, but to average out to a globally smooth zero, seems far too contrived and convenient.

The problem caused by the acceptance of determinism, and the consequent denial of the possibility of free will, is that all events are then inevitable and a natural consequence of what happened before. Choice becomes illusory. Behaviour is pre-ordained. In fact all thought, and even consciousness itself, must be an inevitable consequence of what went before. There can no longer be any moral responsibility attached to any behaviour or any actions (whether by humans or inanimate matter). It is argued that morality is irrelevant for physics; they are different domains. There is no equation for morality because it is not a law of nature. It is merely an emergent thing. Morality, for the physicist, just like consciousness and thought and behaviour, merely emerges from the laws of nature. This is not incorrect in itself, I think, but they are different domains because the laws of nature – as we know them – are incomplete in that they can explain neither themselves nor morality.

For the physicist the natural laws apply everywhere and everywhen. Except, they admit, at or before (if there was a before) the Big Bang singularity. They apply across the universe, except that the universe cannot be defined. It is disingenuous to merely claim that the universe expands into nothingness and both creates and defines itself. The natural laws are said to apply across all of time, except that time (not to be confused with the passage of time) is not defined. The nature of time is unknown and probably unknowable. What is it that passes? Loop quantum gravity enthusiasts would claim that time is merely a perception and that causality is an illusion. All events throughout space and time, they would say, occur/have occurred simultaneously. We merely connect certain events in our perceptions such that time and causality emerge. But this is no different from invoking magic. It seems to me that the gaping hole in the determinism charter is that there is no known reason for the natural laws to exist. Above all, the natural laws cannot explain themselves. I would claim that this lies in the unknowable. Determinism would have us accept that all biological and neural (and therefore cognitive) processes are merely events caused by antecedent events and natural law. Except that while natural laws are observed and experienced empirically, they cannot (and probably never will be able to) explain themselves.

And this is where determinism crashes. The four natural forces (gravity, electromagnetism, the strong nuclear force and the weak nuclear force) that we treat as being fundamental are not self-explanatory. They just are. They do not within themselves explain why they must exist. Maybe there is a Theory of Everything capable of explaining itself and everything in the world. Or maybe there isn’t. The natural laws cannot explain why there are four fundamental forces and not five or six, or why there are just 12 (or 57) fundamental particles, or why there is a particle/wave duality, or why undetectable dark energy or dark matter exist (except as fudge factors). The natural laws, as we know them today, cannot explain why life began (or why life had to begin to satisfy determinism), cannot explain what consciousness is, and cannot explain why thoughts and behaviour must be inevitable consequences of antecedent events. As a practical matter we have no inkling as to which antecedent events cause which cognitive events, and following which laws. It is very likely that this is theoretically impossible as well. Some of these explanations may well lie in the realm of the unknowable. I draw the analogy with Gödel’s Incompleteness Theorems:

The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an effective procedure (i.e., an algorithm) is capable of proving all truths about the arithmetic of the natural numbers. For any such formal system, there will always be statements about the natural numbers that are true, but that are unprovable within the system.

The second incompleteness theorem, an extension of the first, shows that the system cannot demonstrate its own consistency.

We cannot, from within and as part of the universe, demonstrate why the axioms used by physics must be. Empiricism gives us what we perceive to be the laws of nature. Empiricism also gives us our perceptions of consciousness and thought and free will. And these contradict one another. The resolution of the contradictions lives in the unknowable.

Empiricism can only go so far. It cannot reach the parts that empiricism cannot reach. Determinism cannot extend to places where the natural laws cannot or do not reach. If the unknowable exists then determinism cannot reach there, for the natural laws may not (or cannot) apply there. It is not about whether we know all there is to be known about natural law. Determinism requires that some consistent and self-explanatory natural laws apply everywhere and at all times.

Absolute Determinism requires that Natural Laws be complete. That requires that natural laws be able to:

  1. explain their own existence, and
  2. explain all events (material and immaterial), and
  3. apply within and beyond our perception of the universe, and
  4. apply within and beyond our perception of time,

And the existence of such a set of Natural Laws is unknowable.


 

Related:

https://ktwop.com/2017/07/22/known-unknown-and-unknowable/

https://ktwop.com/2017/09/22/the-unknowable-is-neither-true-nor-false/

https://whyevolutionistrue.wordpress.com/2016/01/11/plain-talk-about-free-will-from-a-physicist-stop-claiming-you-have-it/

https://en.wikipedia.org/wiki/Determinism

http://physicsandthemind.blogspot.com/2013/10/in-defense-of-libertarian-free-will.html

http://backreaction.blogspot.com/2016/01/free-will-is-dead-lets-bury-it.html?spref=tw

http://archive.boston.com/bostonglobe/ideas/brainiac/2011/06/the_big_nothing.html

https://plato.stanford.edu/entries/determinism-causal/


 

Will recognition of “fake news” be followed by “fake science”?

November 3, 2017

Collins Dictionary has chosen “fake news” as its word for 2017.

When a partisan publication exaggerates – even wildly – in favour of its own cause, it causes no great surprise. It is not even too astonishing when it fabricates news, or omits news, to further its own agenda. So when Breitbart or the Daily Mail or the Huffington Post produce much of their nonsense it causes no great surprise and hardly merits the sobriquet of “fake news”, even if much of the “news” is slanted or exaggerated or skewed or just plain lies. The insidious nature of “fake news” is at its worst when a publication, having a reputation for objectivity, misuses that reputation to push a hidden agenda. That is when “fake news” takes on a life of its own.

It is not that this is anything new, but the US Presidential Election has certainly brought “fake news” to a head. “Fake news” applies, though, to much more than just US politics. Of course CNN heads the list of purveyors of “fake news”. CNN has never been objective, but they once generally checked their facts and used to separate straight reporting from opinion. I used to find them at least fairly reliable for factual reporting. But they have abandoned that approach and I find them not just unreliable but also intentionally misleading. Their “journalists” have all become lobbyists and “CNN” has become synonymous with “fake news”.

I was once a regular reader of the Washington Post. They were biased but were not unreliable as to the facts. It was quite easy to just discount for bias and get what I thought was a “true” picture. But they, too, have degenerated swiftly in the last two years. Stories are not just distorted, they are even fabricated. But the real disappointment for me in the last 24 months has been the New York Times, and not just in the space of US politics. The NYT has its own definitions of what is politically correct in politics, in science and even in the arts. Somewhere along the way they have made a conscious decision that they are “lobbyists” rather than reporters. They have decided that, for what they have defined as “politically correct”, pushing that view justifies omission, exaggeration, “spinning” and even fabrication. Straight reporting has become extinct.

Lobby groups such as Huff Post and Daily Kos and Red State are full of blatant falsifications but have no news reputation of any significance at stake. They are not, therefore, included in my take on the top purveyors of fake news.

If 2017 has seen the recognition of the widespread use of fake news, I am looking to 2018 to recognise the proliferation of fake science. There is fake science being disseminated every day in big physics (CERN funding), pharmaceuticals, “climate science”, behavioural studies, sociology, psychology and economics. Much of fake science follows funding. Perhaps there will be greater recognition that “good science” is neither decided by nor subject to a poll.


 

 

Science (and the gods) rely equally on magic

July 3, 2017

The fundamental assumptions of science can be written in various ways but, for me, seem to boil down to four:

  1. The Universe exists
  2. Laws of nature (science) exist
  3. All phenomena are constrained to obey the laws of nature (science)
  4. The laws of nature (science) apply everywhere in the universe

The laws of nature are such that compliance with these laws is inbuilt. If there is any non-compliance, it is not a law of nature. If compliance is all that we observe, then it is a law of nature. But why the laws are what they are is usually beyond explanation.

Assumptions are not amenable to further question. You could apply an “if” to them or question “why” an assumption is true, but that is futile, for there are no answers. They are just taken as self-evident and as the starting point of rational thought. They are never, in themselves, self-explanatory except in the trivial, circular form. (Assume that 1+1=2. Therefore 2+2=4, which “proves” that 1+1=2.)

I apply the word “magic” to all that is inexplicable. And all the fundamental laws of nature (science) are built on a foundation of inexplicable magic. How many fundamental particles exist, and why? It’s magic. If the laws of science only apply after the Big Bang but don’t apply at the Big Bang singularity itself, what laws did? It’s magic. If the laws apply to a supernova but not inside a black hole, it’s magic. (Never mind that a black hole seems to be a part of the universe where the laws of science do not apply, which violates the assumption that the laws apply everywhere in the universe – Assumption 4 above.) Why are there 4 – and only 4 – fundamental forces in nature? It’s magic. How did time begin? It’s magic. Can empty space exist without even the property of dimensions? It’s magic. Can time be a dimension and not have negative values? It’s magic. Dark energy and dark matter are merely labels invoking magic. All science which relies on fundamental assumptions is ultimately built upon, and dependent upon, a set of inexplicable, fundamental statements. They are just magic.

A fundamental consequence of the claim of physics – that all of history up to just after the Big Bang is explainable by the laws of science – is that all of the future is also fixed and determined by the laws of science applied to conditions now. What will happen was therefore fixed for all time by the Big Bang itself. And that, too, is indistinguishable from magic.

Religions do not just rely on magic, they claim the magic for their gods. Modern, “with-it” religions, which try to be “compatible” with the latest knowledge discovered by science, merely claim that their God(s) pushed the button which caused the Big Bang. That my God is greater than your God is magic. That there is a life after death, or reincarnation, or rebirth or an ultimate state of grace is also just magic.

Shiva, Kali, Jesus, Allah, nirvana, dark energy, dark matter and the Big Bang singularity are all labels for different facets of magic.

Magic, by any other name, is just as inexplicable.


 

Without first having religions, atheism and agnosticism cannot exist

June 27, 2017

I take science to be the process by which areas of ignorance are explored, illuminated and then shifted into our space of knowledge. One can believe that the scientific method is powerful enough to answer all questions – eventually – by the use of our cognitive abilities. But it is nonsense to believe that science is, in itself, the answer to all questions. As the perimeter surrounding human knowledge increases, what we know that we don’t know also increases. There is what we know, and at the perimeter of what we know lies what we don’t know. Beyond that lies the boundless space of ignorance, where we don’t know what we don’t know.

Religions generally use a belief in the concept of a god (or gods) as their central tenet. By definition this is within the space of ignorance (which is where all belief lives). For some individuals the belief may be so strong that they claim it to be “personal knowledge” rather than a belief. It remains a belief, though, since it cannot be proven. Buddhism takes a belief in gods to be unnecessary but – also within the space of ignorance – believes in rebirth (not reincarnation) and the “infinite” (nirvana). Atheism is just as much in the space of ignorance, since it is based on the belief that no gods, deities or supernatural beings exist. Such a belief can only come into being as a reaction to others having a belief in gods or deities or the supernatural. A denial cannot rationally precede the thing it denies. If religions and their belief in gods or the supernatural did not first exist, atheism would be meaningless. Atheism merely replaces a belief in a God with a belief in a Not-God.

I take the blind worship of “science” also to be a religion in the space of ignorance. All physicists and cosmologists who believe in the Big Bang singularity effectively believe in an incomprehensible and unexplainable Creation Event. Physicists who believe in dark matter or dark energy – as mysterious things vested with just the right properties to bring their theories into compliance with observations of an apparently expanding universe – are effectively invoking magic. When modern physics claims that there are 57 fundamental particles but has no explanation as to why there should be just 57 (for now) and not 59 or 107 fundamental particles, it takes recourse to magical events at the beginning of time. Why there should be four fundamental forces in our universe (electromagnetism, gravitation, the strong force and the weak force), and not two or three or seven, is also unknown and magical.

Agnosticism is just a reaction to the belief in gods. Whereas atheists deny the belief, agnostics merely state that such beliefs can neither be proved nor disproved; that the existence of gods or the supernatural is unknowable. But by recognising limits to what humans can know, agnosticism inherently accepts that understanding the universe lies in a “higher” dimension than human intelligence and cognitive abilities can cope with. That is tantamount to a belief in “magic”, where “magic” covers all things that happen or exist but which we cannot explain. Where atheism denies the answers of others, agnosticism declines to address the questions.

The Big Bang singularity, God(s), Nirvana and the names of all the various deities are all merely labels for things we don’t know, in the space of what we don’t know that we don’t know. They are all labels for different kinds of magic.

I am not sure where that leaves me. I follow no religion. I believe in the scientific method as a process but find the “religion of science” too self-righteous and too glib about its own beliefs in the space of ignorance. I find atheism mentally lazy and too negative. It is just a denial of the beliefs of others. It does not itself address the unanswerable questions; it merely tears down the unsatisfactory answers of others. Agnosticism is a cop-out. It satisfies itself by saying the questions are too hard for us ever to answer and that it is not worthwhile to try.

I suppose I just believe in Magic – but that too is just a label in the space of ignorance.


 

Microplastic misconduct: Swedish paper about fish larvae eating microplastics was fabricated

April 28, 2017

A paper claiming evidence that fish larvae eat micro-plastics to their detriment was fabricated. To be published, any paper about the impact of humans on the environment must always be negative. Exaggerated and even fabricated data are rarely questioned. Studies which are positive about human impact are – by the definition of “political correctness” – never publishable. There is clearly “politically incorrect” and “politically correct” science.

This is another case of made-up work being passed off as “politically correct” science.

Swedish Radio reports today that

A study about fish larvae eating micro-plastics contains such serious flaws that it should be retracted from Science, where it was published says Sweden’s Central Ethics Review Board’s expert panel for misconduct in research.

The panel believes that two researchers at Uppsala University are guilty of misconduct. It is a remarkable study from last year, which deals with the fact that perch young seem to prefer to eat micro-plastics to regular fish food.

After criticism by external researchers, an investigation was made by the Central Ethics Review Board, which today delivered an opinion. The researchers have been found guilty of misconduct in several cases.

“The most serious is the lack of original data,” says Jörgen Svidén, Department Head at the Central Ethics Review Board.

The study was published in the journal Science last year. The Central Ethics Review Board writes in its opinion that it is remarkable that the article was ever accepted. The opinion has been sent to Uppsala University, which must now make a decision on the matter. 

The researchers claimed that a laptop containing the data had been stolen. Really? And it was not backed up? Uppsala University had rejected claims of misconduct by its staff in the wake of serious allegations in 2015. How gullible can a university be?

ScienceMag wrote then:

When Fredrik Jutfelt and Josefin Sundin read a paper on a hot environmental issue in the 3 June issue of Science, the two researchers immediately felt that something was very wrong. Both knew Oona Lönnstedt,  the research fellow at Sweden’s Uppsala University (UU) who had conducted the study, and both had been at the Ar research station on the island of Gotland around the time that Lönnstedt says she carried out the experiments, which showed that tiny particles called microplastics can harm fish larvae. Jutfelt, an associate professor at the Norwegian University of Science and Technology in Trondheim, and Sundin, a UU postdoc, believed there was no way that Lönnstedt had been able to carry out the elaborate study.

Less than 3 weeks later, the duo wrote UU that they had “a strong suspicion of research misconduct” and asked for an investigation. Their letter, initially reported by Retraction Watch in August, was cosigned by five scientists from Canada, Switzerland, and Australia, who hadn’t been at the research station but also had severe misgivings about the paper and who helped Sundin and Jutfelt build their case. 

This week, Science is publishing an “Editorial expression of concern” about the paper, because Lönnstedt and her supervisor at UU, Peter Eklöv, have been unable to produce all of the raw data behind their results. Lönnstedt says the data were stored on a laptop computer that was stolen from her husband’s car 10 days after the paper was published, and that no backups exist. ……

…… The paper, which received a lot of press attention, focused on plastic fragments of less than half a millimeter in size that result from the mechanical breakdown of bags and other products. There’s increasing evidence that these microplastics collect in rivers, lakes, and oceans around the world, but so far, little is known about their effects on aquatic organisms and ecosystems. What Lönnstedt and Eklöv reported was alarming: They had exposed larvae of European perch maintained in aquaria at the research station to microplastics and found that they had decreased growth and altered feeding and behavior. Microplastics made the larvae less responsive to chemical warning signals and more likely to be eaten by pike in a series of predation experiments, the pair further reported. In an accompanying Perspective, Chelsea Rochman of the University of Toronto in Canada wrote that the study “marks an important step toward understanding of microplastics” and was relevant to policymakers. ……

….. In the report of its “preliminary investigation,” the UU panel sided with Lönnstedt. She and Eklöv had explained everything “in a satisfactory and credible manner,” wrote the panel, which asked UU to “take diligent steps to restore the reputation of the accused.” But the panel’s report didn’t provide detailed rebuttals of the long list of problems provided by Sundin and Jutfelt, who say that the investigation was superficial. ….. 

Much may now depend on the conclusions of an expert group on misconduct at Sweden’s Central Ethical Board, which is doing its own, independent investigation. Jutfelt says he’s hopeful because it appears that the group is “doing a more thorough job.” Lönnstedt says she’s not worried about the outcome. A spokesperson for the board says it is not clear when it will wrap up the inquiry. 

Microplastic misconduct (Photo: Uppsala universitet)

The Ethics Review Board has now reported and it is clear that this “politically correct” paper was fabricated. Uppsala University’s so-called investigation is also shown to have been less than serious and merely carried out a whitewash of their own staff.


 

Second European Mars lander (Schiaparelli) also lost (after Beagle 2 in 2003)

October 20, 2016

While the ExoMars Trace Gas Orbiter by the European/Russian space agencies (ESA/Roscosmos) seems to have successfully entered the correct orbit around Mars, ESA’s Mars lander, Schiaparelli seems to have been lost on its way down to the surface.

Schiaparelli descent sequence (image: ESA)

BBC: 

There are growing fears a European probe that attempted to land on Mars on Wednesday has been lost. Tracking of the Schiaparelli robot’s radio signals was dropped less than a minute before it was expected to touch down on the Red Planet’s surface.

Satellites at Mars have attempted to shed light on the probe’s status, so far without success. One American satellite even called out to Schiaparelli to try to get it to respond. The fear will be that the robot has crashed and been destroyed. The European Space Agency, however, is a long way from formally calling that outcome. Its engineers will be running through “fault trees” seeking to figure out why communication was lost and what they can do next to retrieve the situation.

This approach could well last several days. 

One key insight will come from Schiaparelli’s “mothership” – the Trace Gas Orbiter (TGO). As Schiaparelli was heading down to the surface, the TGO was putting itself in a parking ellipse around Mars. But it was also receiving telemetry from the descending robot.

If the lander is indeed lost, it will be the second failure of a European Mars lander after the failure of Beagle 2 in 2003.

Beagle 2 was a British landing spacecraft that formed part of the European Space Agency’s 2003 Mars Express mission. The craft lost contact with Earth during its final descent and its fate was unknown for over twelve years. Beagle 2 is named after HMS Beagle, the ship used by Charles Darwin.

The spacecraft was successfully deployed from the Mars Express on 19 December 2003 and was scheduled to land on the surface of Mars on 25 December; however, no contact was received at the expected time of landing on Mars, with the ESA declaring the mission lost in February 2004, after numerous attempts to contact the spacecraft were made.

Beagle 2‘s fate remained a mystery until January 2015, when it was located intact on the surface of Mars in a series of images from NASA’s Mars Reconnaissance Orbiter HiRISE camera. The images suggest that two of the spacecraft’s four solar panels failed to deploy, blocking the spacecraft’s communications antenna.

The ESA’s plans and budget for landing a six-wheeled roving vehicle on Mars in 2021 will face further critical scrutiny. The rover is expected “to use some of the same technology as Schiaparelli, including its doppler radar to sense the distance to the surface on descent, and its guidance, navigation and control algorithms”.

ESA has an annual budget of about €5.25 billion.

Of course, the EU sees ESA as a matter of prestige first and science only second, which does help to protect the budget.

Perhaps some “frugal engineering” (à la ISRO) is called for.


 

What is “statistically significant” is not necessarily significant

October 12, 2016

“Statistical significance” is “a mathematical machine for turning baloney into breakthroughs, and flukes into funding” – Robert Matthews.


Tests for statistical significance generate a p-value: the probability of obtaining observations at least as extreme as those seen if the null hypothesis were true (that is, if the observations were not a real effect but fell within the bounds of randomness). A low p-value only means the observations would be improbable under pure chance, and this is then taken as “statistically significant” evidence that they do, in fact, describe a real effect. Quite arbitrarily, it has become the custom to use 0.05 (5%) as the threshold p-value distinguishing the “statistically significant” from the rest. Why 5% should be the “holy number” which separates acceptance for publication from rejection, or success from failure, is rather irrational. What “statistically significant” actually means is: “the observations may or may not be a real effect, but there is a low probability that they are entirely due to chance”.

Even when observations are just “statistically significant” at the 5% level, there remains a 1-in-20 chance that results of this size would arise by chance alone. Moreover, it is conveniently forgotten that statistical significance is called for only where there is uncertainty. In a coin toss there is certainty (100% probability) that the outcome will be heads, tails or “lands on its edge”. Assigning a probability to one of these three possible outcomes can be helpful, but it is a probability constrained within the 100% certainty covering the three outcomes. If a million people take part in a lottery, then the 1-in-1,000,000 probability of a particular individual winning has significance because there is 100% certainty that one of them will win. But when conducting clinical trials of a new drug, there is often no such certainty anywhere to provide a framework and a boundary within which to apply a probability.
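The coin-toss framing can be made concrete. The sketch below (a minimal stdlib-only simulation, not from the source) repeatedly “tests” a perfectly fair coin with an exact two-sided binomial test and counts how often pure chance alone gets declared “statistically significant” at p &lt; 0.05 — the built-in false-alarm rate of the 5% convention.

```python
import math
import random

N = 100  # coin flips per "experiment"
# probability mass function of a fair coin's head-count
PMF = [math.comb(N, k) * 0.5**N for k in range(N + 1)]

def two_sided_p(k):
    # exact two-sided binomial p-value: the total probability of
    # every outcome no more likely than the one actually observed
    return sum(q for q in PMF if q <= PMF[k] + 1e-12)

random.seed(42)
experiments = 2000
hits = sum(
    two_sided_p(sum(random.random() < 0.5 for _ in range(N))) < 0.05
    for _ in range(experiments)
)
# the fraction of fair coins flagged "significant" stays at or below
# the nominal 0.05 (somewhat below here, because the test is discrete)
print(hits / experiments)
```

With no real effect anywhere, roughly one experiment in twenty (at most) still clears the threshold, which is exactly what the 5% convention licenses.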

A new article in Aeon by David Colquhoun, Professor of pharmacology at University College London and a Fellow of the Royal Society, addresses The Problem with p-values.

In 2005, the epidemiologist John Ioannidis at Stanford caused a storm when he wrote the paper ‘Why Most Published Research Findings Are False’, focusing on results in certain areas of biomedicine. He’s been vindicated by subsequent investigations. For example, a recent article found that repeating 100 different results in experimental psychology confirmed the original conclusions in only 38 per cent of cases. It’s probably at least as bad for brain-imaging studies and cognitive neuroscience. How can this happen?

The problem of how to distinguish a genuine observation from random chance is a very old one. It’s been debated for centuries by philosophers and, more fruitfully, by statisticians. It turns on the distinction between induction and deduction. Science is an exercise in inductive reasoning: we are making observations and trying to infer general rules from them. Induction can never be certain. In contrast, deductive reasoning is easier: you deduce what you would expect to observe if some general rule were true and then compare it with what you actually see. The problem is that, for a scientist, deductive arguments don’t directly answer the question that you want to ask.

What matters to a scientific observer is how often you’ll be wrong if you claim that an effect is real, rather than being merely random. That’s a question of induction, so it’s hard. In the early 20th century, it became the custom to avoid induction, by changing the question into one that used only deductive reasoning. In the 1920s, the statistician Ronald Fisher did this by advocating tests of statistical significance. These are wholly deductive and so sidestep the philosophical problems of induction.

Tests of statistical significance proceed by calculating the probability of making our observations (or the more extreme ones) if there were no real effect. This isn’t an assertion that there is no real effect, but rather a calculation of what would be expected if there were no real effect. The postulate that there is no real effect is called the null hypothesis, and the probability is called the p-value. Clearly the smaller the p-value, the less plausible the null hypothesis, so the more likely it is that there is, in fact, a real effect. All you have to do is to decide how small the p-value must be before you declare that you’ve made a discovery. But that turns out to be very difficult.

The problem is that the p-value gives the right answer to the wrong question. What we really want to know is not the probability of the observations given a hypothesis about the existence of a real effect, but rather the probability that there is a real effect – that the hypothesis is true – given the observations. And that is a problem of induction.

Confusion between these two quite different probabilities lies at the heart of why p-values are so often misinterpreted. It’s called the error of the transposed conditional. Even quite respectable sources will tell you that the p-value is the probability that your observations occurred by chance. And that is plain wrong. …….
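The cost of the transposed conditional can be shown with back-of-envelope arithmetic in the spirit of Colquhoun’s argument. The prevalence and power figures below are illustrative assumptions, not taken from the source:

```python
# Illustrative assumptions: of 1000 hypotheses tested, 10% are real
# effects, experiments have 80% power, and the threshold is alpha = 0.05.
alpha, power, prevalence, tests = 0.05, 0.80, 0.10, 1000

real = tests * prevalence                 # 100 real effects in the pool
true_positives = real * power             # 80 real effects detected
false_positives = (tests - real) * alpha  # 45 flukes pass p < 0.05
false_discovery_rate = false_positives / (false_positives + true_positives)
print(false_discovery_rate)  # 0.36
```

Under these (assumed) numbers, 45 of the 125 “significant” results are flukes: over a third of “discoveries” are false, even though every one of them carries a p-value below 0.05.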

……. The problem of induction was solved, in principle, by the Reverend Thomas Bayes in the middle of the 18th century. He showed how to convert the probability of the observations given a hypothesis (the deductive problem) to what we actually want, the probability that the hypothesis is true given some observations (the inductive problem). But how to use his famous theorem in practice has been the subject of heated debate ever since. …….

……. For a start, it’s high time that we abandoned the well-worn term ‘statistically significant’. The cut-off of P < 0.05 that’s almost universal in biomedical sciences is entirely arbitrary – and, as we’ve seen, it’s quite inadequate as evidence for a real effect. Although it’s common to blame Fisher for the magic value of 0.05, in fact Fisher said, in 1926, that P = 0.05 was a ‘low standard of significance’ and that a scientific fact should be regarded as experimentally established only if repeating the experiment ‘rarely fails to give this level of significance’.

The ‘rarely fails’ bit, emphasised by Fisher 90 years ago, has been forgotten. A single experiment that gives P = 0.045 will get a ‘discovery’ published in the most glamorous journals. So it’s not fair to blame Fisher, but nonetheless there’s an uncomfortable amount of truth in what the physicist Robert Matthews at Aston University in Birmingham had to say in 1998: ‘The plain fact is that 70 years ago Ronald Fisher gave scientists a mathematical machine for turning baloney into breakthroughs, and flukes into funding. It is time to pull the plug.’ ………
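Fisher’s “rarely fails” criterion has real teeth, as a little arithmetic shows (a minimal illustration, assuming independent experiments): a pure fluke passes a single test at the 5% level one time in twenty, but passes k independent replications at that level only alpha**k of the time.

```python
alpha = 0.05
# probability that a null effect (a fluke) survives k independent
# replications, each demanding p < alpha
for k in (1, 2, 3):
    print(k, round(alpha**k, 6))  # 0.05, then 0.0025, then 0.000125
```

A single P = 0.045 “discovery” is nothing like experimentally established; demanding even one replication cuts a fluke’s chances from 1-in-20 to 1-in-400.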

Related: Demystifying the p-value


 

2016 Ig Nobels are more embarrassing than satirical

September 23, 2016

The problem with the Ig Nobels is that they have started taking themselves seriously. What should be satirical and irreverent has become an embarrassing exercise in politically correct science “humour”. The awards have turned into a glorification of non-science, science fraud and stupidity.


The 2016 Ig Nobel Prize Winners

The 2016 Ig Nobel Prizes were awarded on Thursday night, September 22, 2016, at the 26th First Annual Ig Nobel Prize Ceremony at Harvard’s Sanders Theatre. The ceremony was webcast live.

REPRODUCTION PRIZE [EGYPT] — The late Ahmed Shafik, for studying the effects of wearing polyester, cotton, or wool trousers on the sex life of rats, and for conducting similar tests with human males.

REFERENCE: “Effect of Different Types of Textiles on Sexual Activity. Experimental study,” Ahmed Shafik, European Urology, vol. 24, no. 3, 1993, pp. 375-80.

REFERENCE: “Contraceptive Efficacy of Polyester-Induced Azoospermia in Normal Men,” Ahmed Shafik, Contraception, vol. 45, 1992, pp. 439-451.


ECONOMICS PRIZE [NEW ZEALAND, UK] — Mark Avis, Sarah Forbes, and Shelagh Ferguson, for assessing the perceived personalities of rocks, from a sales and marketing perspective.

REFERENCE: “The Brand Personality of Rocks: A Critical Evaluation of a Brand Personality Scale,” Mark Avis, Sarah Forbes, Shelagh Ferguson, Marketing Theory, vol. 14, no. 4, 2014, pp. 451-475.

WHO ATTENDED THE CEREMONY: Mark Avis and Sarah Forbes


PHYSICS PRIZE [HUNGARY, SPAIN, SWEDEN, SWITZERLAND] — Gábor Horváth, Miklós Blahó, György Kriska, Ramón Hegedüs, Balázs Gerics, Róbert Farkas, Susanne Åkesson, Péter Malik, and Hansruedi Wildermuth, for discovering why white-haired horses are the most horsefly-proof horses, and for discovering why dragonflies are fatally attracted to black tombstones.

REFERENCE: “An Unexpected Advantage of Whiteness in Horses: The Most Horsefly-Proof Horse Has a Depolarizing White Coat,” Gábor Horváth, Miklós Blahó, György Kriska, Ramón Hegedüs, Balázs Gerics, Róbert Farkas and Susanne Åkesson, Proceedings of the Royal Society B, vol. 277, no. 1688, June 2010, pp. 1643-1650.

REFERENCE: “Ecological Traps for Dragonflies in a Cemetery: The Attraction of Sympetrum species (Odonata: Libellulidae) by Horizontally Polarizing Black Grave-Stones,” Gábor Horváth, Péter Malik, György Kriska, Hansruedi Wildermuth, Freshwater Biology, vol. 52, no. 9, September 2007, pp. 1700–9.

WHO ATTENDED THE CEREMONY: Susanne Åkesson


CHEMISTRY PRIZE [GERMANY] — Volkswagen, for solving the problem of excessive automobile pollution emissions by automatically, electromechanically producing fewer emissions whenever the cars are being tested.

REFERENCE: “EPA, California Notify Volkswagen of Clean Air Act Violations”, U.S. Environmental Protection Agency news release, September 18, 2015.


MEDICINE PRIZE [GERMANY] — Christoph Helmchen, Carina Palzer, Thomas Münte, Silke Anders, and Andreas Sprenger, for discovering that if you have an itch on the left side of your body, you can relieve it by looking into a mirror and scratching the right side of your body (and vice versa).

REFERENCE: “Itch Relief by Mirror Scratching. A Psychophysical Study,” Christoph Helmchen, Carina Palzer, Thomas F. Münte, Silke Anders, Andreas Sprenger, PLoS ONE, vol. 8, no 12, December 26, 2013, e82756.

WHO ATTENDED THE CEREMONY: Andreas Sprenger


PSYCHOLOGY PRIZE [BELGIUM, THE NETHERLANDS, GERMANY, CANADA, USA] — Evelyne Debey, Maarten De Schryver, Gordon Logan, Kristina Suchotzki, and Bruno Verschuere, for asking a thousand liars how often they lie, and for deciding whether to believe those answers.

REFERENCE: “From Junior to Senior Pinocchio: A Cross-Sectional Lifespan Investigation of Deception,” Evelyne Debey, Maarten De Schryver, Gordon D. Logan, Kristina Suchotzki, and Bruno Verschuere, Acta Psychologica, vol. 160, 2015, pp. 58-68.

WHO ATTENDED THE CEREMONY: Bruno Verschuere


PEACE PRIZE [CANADA, USA] — Gordon Pennycook, James Allan Cheyne, Nathaniel Barr, Derek Koehler, and Jonathan Fugelsang for their scholarly study called “On the Reception and Detection of Pseudo-Profound Bullshit”.

REFERENCE: “On the Reception and Detection of Pseudo-Profound Bullshit,” Gordon Pennycook, James Allan Cheyne, Nathaniel Barr, Derek J. Koehler, and Jonathan A. Fugelsang, Judgment and Decision Making, Vol. 10, No. 6, November 2015, pp. 549–563.

WHO ATTENDED THE CEREMONY: Gordon Pennycook, Nathaniel Barr, Derek Koehler, and Jonathan Fugelsang


BIOLOGY PRIZE [UK] — Awarded jointly to: Charles Foster, for living in the wild as, at different times, a badger, an otter, a deer, a fox, and a bird; and to Thomas Thwaites, for creating prosthetic extensions of his limbs that allowed him to move in the manner of, and spend time roaming hills in the company of, goats.

REFERENCE: GoatMan; How I Took a Holiday from Being Human, Thomas Thwaites, Princeton Architectural Press, 2016, ISBN 978-1616894054.

REFERENCE: Being a Beast, by Charles Foster, Profile Books, 2016, ISBN 978-1781255346.

WHO ATTENDED THE CEREMONY: Charles Foster, Thomas Thwaites. [NOTE: Thomas Thwaites’s goat suit was kindly released for Ig Nobel purposes from the exhibition ‘Platform – Body/Space’ at Het Nieuwe Instituut in Rotterdam, and will be back on display at the museum from 4 October 2016 till 8 January 2017.]


LITERATURE PRIZE [SWEDEN] — Fredrik Sjöberg, for his three-volume autobiographical work about the pleasures of collecting flies that are dead, and flies that are not yet dead.

REFERENCE: “The Fly Trap” is the first volume of Fredrik Sjöberg’s autobiographical trilogy, “En Flugsamlares Vag” (“The Path of a Fly Collector”), and the first to be published in English. Pantheon Books, 2015, ISBN 978-1101870150.

WHO ATTENDED THE CEREMONY: Fredrik Sjöberg


PERCEPTION PRIZE [JAPAN] — Atsuki Higashiyama and Kohei Adachi, for investigating whether things look different when you bend over and view them between your legs.

REFERENCE: “Perceived size and Perceived Distance of Targets Viewed From Between the Legs: Evidence for Proprioceptive Theory,” Atsuki Higashiyama and Kohei Adachi, Vision Research, vol. 46, no. 23, November 2006, pp. 3961–76.

WHO ATTENDED THE CEREMONY: Atsuki Higashiyama


 

