Posts Tagged ‘Big Science’

Elementary particle turns out to have been a mirage as CERN serves up more inelegant physics

August 6, 2016

Current physics and the Standard Model of the Universe it describes are no longer elegant. I have no doubt that the underlying structure of the universe is simple and beautiful. But models which require more than 61 elementary particles, numerous fudge factors (dark energy and dark matter) and ever-increasing complexity are ugly and do not convince. Especially when they cannot explain the four “magical” forces we observe (the gravitational, electromagnetic, strong nuclear and weak nuclear forces).

I have a mixture of admiration and contempt for “Big Physics” as practised by CERN and its experiments at the Large Hadron Collider. So I was actually quite relieved to hear that CERN has just announced that, after much publicity, they had not in fact detected yet another elementary particle outside the Standard Model. Since finding some anomalous data last December, they have hyped the possibility of a new, extra-heavy elementary particle. Over 500 papers have been written (and published) postulating explanations of the data anomaly and fantasising about the nature of this particle. But the anomaly has simply disappeared. The postulated particle does not exist.

I remain convinced that 90% of modern physics is all about raising questions – some genuine and some fantasised – to ensure that funding for Sledgehammer Science continues. So not to worry. CERN may not have found another elementary particle this time. But they will soon come up with another unexpected particle, preceded by much publicity and hype, which will spawn much further speculation, and, most importantly, keep the funds flowing.

New York Times:

A great “might have been” for the universe, or at least for the people who study it, disappeared Friday.  

Last December, two teams of physicists working at CERN’s Large Hadron Collider reported that they might have seen traces of what could be a new fundamental constituent of nature, an elementary particle that is not part of the Standard Model that has ruled particle physics for the last half-century.  

A bump on a graph signaling excess pairs of gamma rays was most likely a statistical fluke, they said. But physicists have been holding their breath ever since.  

If real, the new particle would have opened a crack between the known and the unknown, affording a glimpse of quantum secrets undreamed of even by Einstein. Answers to questions like why there is matter but not antimatter in the universe, or the identity of the mysterious dark matter that provides the gravitational glue in the cosmos. In the few months after the announcement, 500 papers were written trying to interpret the meaning of the putative particle.

Science Alert:

CERN made the announcement this morning at the International Conference of High Energy Physics (ICHEP) in Chicago, alongside a huge slew of new Large Hadron Collider (LHC) data.

“The intriguing hint of a possible resonance at 750 GeV decaying into photon pairs, which caused considerable interest from the 2015 data, has not reappeared in the much larger 2016 data set and thus appears to be a statistical fluctuation,” CERN announced in a press release sent via email.

Why did we ever think we’d found a new particle in the first place?

Back in December, researchers at CERN’s CMS and ATLAS experiments smashed particles together at incredibly high energies, sending subatomic particles flying out as debris.

Among that debris, the researchers saw an unexpected blip of energy in the form of an excess in pairs of photons, which had a combined energy of 750 gigaelectron volts (GeV).

The result led to hundreds of journal article submissions on the mysterious energy signature – and prompted many physicists to hypothesise that the excess was a sign of a brand new fundamental particle, six times more massive than the Higgs boson – one that wasn’t predicted by the Standard Model of particle physics.

But, alas, the latest data collected by the LHC shows no evidence that this particle exists – despite further experiments, no sign of this 750 GeV bump has emerged since the original reading.

So, we’re no closer to finding a new particle – or evidence of a new model that could explain some of the more mysterious aspects of the Universe, such as how gravity works (something the Standard Model doesn’t account for).

The Large Hadron Collider is the world’s largest and most powerful particle accelerator (Image: CERN)
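For what it is worth, the arithmetic of why such a bump can simply evaporate is not hard to sketch. The little calculation below uses invented event counts (not the actual ATLAS or CMS yields) and a crude square-root-of-background significance estimate, just to show how an excess can look like three sigma in a small dataset and shrink to nothing in a larger one.

```python
import math

def local_significance(observed, expected_background):
    """Crude Gaussian approximation to the significance of an excess:
    z = (observed - background) / sqrt(background).
    Real analyses use likelihood fits and look-elsewhere corrections."""
    return (observed - expected_background) / math.sqrt(expected_background)

# Invented numbers for illustration only (not the real 2015/2016 yields):
# a small 2015-style dataset with an upward fluctuation in one mass bin ...
obs_2015, bkg_2015 = 14, 6.0
# ... and a four-times-larger dataset in which the excess does not grow.
obs_2016, bkg_2016 = 27, 24.0

print(f"2015-like data: {local_significance(obs_2015, bkg_2015):.1f} sigma")  # ~3.3 sigma
print(f"2016-like data: {local_significance(obs_2016, bkg_2016):.1f} sigma")  # ~0.6 sigma
# A real particle would make the excess grow with more data;
# a statistical fluctuation is expected to wash out, as happened here.
```

Real searches also have to correct for the “look-elsewhere effect”: scan enough mass bins and a three-sigma wobble somewhere is almost guaranteed.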

The Higgs boson that CERN claimed to have found in 2012 has turned out to be not quite the boson predicted by the Standard Model. So while the Higgs boson was supposed to be the God particle, the boson that was found only indicated that there were more bosons to be found. I dislike the publicity and hype that CERN generates, which is entirely about securing further funding. (The LHC cost $4.75 billion to build and sucks up about $5 billion annually to conduct its experiments.)

Constantly adding complexity to a mathematical model and increasingly relying on fudge factors are usually signs that the model is fundamentally wrong. But some great insight is usually needed to simplify and correct a mathematical model. Until that insight comes, the models are the best available and just have to be fudged and added to in an ad hoc manner, to correct flaws as they are found.

The Standard Model and its 61+ particles will have to be replaced at some point by something more basic and simpler. But that will require some new Einstein-like insight, and who knows when that might occur. In the meantime the Standard Model remains inelegant. The LHC is expected to operate for another 20 years, and the very weight of the investment in it means that physicists cannot build a career by being heretical or by questioning the Standard Model itself.

I miss the elegance that Physics once chased:

Physics has become a Big Science where billion dollar sledgehammers are used to crack little nuts. Pieces of nut and shell go flying everywhere and each little fragment is considered a new elementary particle. The Rutherford-Bohr model still applies, but its elementary particles are no longer considered elementary. Particles with mass and charge are given constituent particles, one having mass and no charge, and one having charge and no mass. Unexplainable forces between particles are assigned special particles to carry the force. Particles which don’t exist, but may have existed, are defined and “discovered”. Errors in theoretical models are explained away by assigning appropriate properties to old particles or defining new particles. Every new particle leaves a new slime trail across the painting. It is as if a bunch of savages are doodling upon a masterpiece. The scribbling is ugly and hides the masterpiece underneath, but it does not mean that the masterpiece is not there.

The “standard model” does not quite fit observations, so new theories of dark energy and dark matter are postulated (actually just invented as fudge factors) and further unknown particles are defined. The number of elementary particles has proliferated and is still increasing. The “standard model” of physics now includes at least 61 elementary particles (48 fermions and 13 bosons). Even the ancient civilisations knew better than to try and build with too many “standard” bricks. Where did simplicity go? Just the quarks can be red, blue or green. They can be up, down, charm, strange, top or bottom quarks. For every type of quark there is an antiquark. Electrons, muons and taus each have their corresponding neutrinos. And they all have their antiparticles. Gluons come in eight colour combinations. There are four electroweak bosons, and there ought to be only one Higgs boson. But who knows? CERN could find some more. I note that fat and thin or warm and cool types of particles have yet to be defined. Matter and antimatter particles, on meeting each other, produce a burst of energy as they are annihilated. If forces are communicated by particles, gravity by gravitons and light by photons, then perhaps all energy transmission can give rise to a whole new family of elementary particles.
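For the record, here is the bookkeeping that gets to 61 under the counting convention used above: each quark colour and each antiparticle counted separately, the photon lumped in with the electroweak bosons, and a single Higgs. The little tally below is just that arithmetic spelled out.

```python
# Tally of Standard Model particles under the counting convention used above:
# each colour state and each antiparticle counted as a separate particle.

quark_flavours = 6      # up, down, charm, strange, top, bottom
quark_colours  = 3      # red, green, blue
quarks = quark_flavours * quark_colours * 2          # x2 for antiquarks -> 36

charged_leptons = 3     # electron, muon, tau
neutrinos       = 3     # one per charged lepton
leptons = (charged_leptons + neutrinos) * 2          # x2 for antiparticles -> 12

fermions = quarks + leptons                          # 48

gluons = 8              # eight colour combinations
electroweak = 4         # photon, W+, W-, Z
higgs = 1
bosons = gluons + electroweak + higgs                # 13

print(fermions, bosons, fermions + bosons)           # 48 13 61
```

Count things another way and the total changes, which is rather the point: even the bookkeeping is a matter of convention.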

The 61 particles still do not include the graviton or sparticles or any other unknown, invisible, magic particles that may go into making up dark matter and dark energy. Some of the dark matter may be stealthy dark matter and some may be phantom dark matter. One might think that when dark matter goes phantom, it ought to become visible, but that would be far too simple. The level of complexity and apparent chaos is increasing. Every new particle discovered requires more money and Bigger Science to find the next postulated elementary particle.

When CERN claimed to have found the God Particle – the Higgs boson – they still added the caveat that it was just one kind of Higgs boson and that there could be more, as yet unknown, ones to come. So the ultimate elementary particle was certainly not the end of the road. Good grief! The end of the road can never be found. That might end the funding. And after all, even if the God Particle has been found, who created God? Guess how much all that is going to cost?


 


Science by Press Release: Overhyped “gravity waves” were just dust

September 27, 2014

In March this year there was a great deal of publicity about the detection of gravitational waves from just after the Big Bang. There were Press Releases and promotional videos and blanket coverage in the media. There was talk about Nobel prizes. Not unlike the massive publicity mounted by CERN about the discovery of (or more accurately the potential discovery of a possible indication of a particle not inconsistent with) the Higgs boson. After that non-discovery, too, there was talk of the CERN team being awarded a Nobel Prize! Even a member of the Nobel Committee was taken in by the publicity and fought for CERN, the organisation, to be awarded the physics prize. Most of the campaign in favour of CERN was initiated and orchestrated by the PR department at CERN and the CERN fan-club.

Now it turns out that the claimed gravitational-wave signal may well have been cosmic dust.

BBC:

One of the biggest scientific claims of the year has received another set-back.

In March, the US BICEP team said it had found a pattern on the sky left by the rapid expansion of space just fractions of a second after the Big Bang. The astonishing assertion was countered quickly by others who thought the group may have underestimated the confounding effects of dust in our own galaxy.

That explanation has now been boosted by a new analysis from the European Space Agency’s (Esa) Planck satellite. In a paper published on the arXiv pre-print server, Planck’s researchers find that the part of the sky being observed by the BICEP team contained significantly more dust than it had assumed.

BIG Science of this kind needs BIG FUNDS. BIG FUNDS need BIG CLAIMS. A BIG CLAIM followed by a retraction is seen to be better – from a publicity perspective – than an uninteresting claim or no claim at all. The people who control the BIG FUND purse strings are generally governments, in the form of bureaucrats and administrators and politicians. They don’t usually read the scientific papers themselves. But they do read the Press Releases and take note of the number of column-inches of newspaper articles that are generated. Promotional videos with many hits on YouTube are also taken note of.

This is science by Press Release. Scientific quality is now judged by the amount of publicity generated.

I am not competent to judge the technical content of these “discoveries” and therefore have to rely on others who are. And so I take note of what Sean Carroll, a Caltech physicist, writes on his blog:

Ever since we all heard the exciting news that the BICEP2 experiment had detected “B-mode” polarization in the cosmic microwave background — just the kind we would expect to be produced by cosmic inflation at a high energy scale — the scientific community has been waiting on pins and needles for some kind of independent confirmation, so that we could stop adding “if it holds up” every time we waxed enthusiastic about the result. And we all knew that there was just such an independent check looming, from the Planck satellite. The need for some kind of check became especially pressing when some cosmologists made a good case that the BICEP2 signal may very well have been dust in our galaxy, rather than gravitational waves from inflation (Mortonson and Seljak; Flauger, Hill, and Spergel).

Now some initial results from Planck are in … and it doesn’t look good for gravitational waves.

Planck intermediate results. XXX. The angular power spectrum of polarized dust emission at intermediate and high Galactic latitudes
Planck Collaboration: R. Adam, et al.

Planck B-mode polarization spectrum

The light-blue rectangles are what Planck actually sees and attributes to dust. The black line is the theoretical prediction for what you would see from gravitational waves with the amplitude claimed by BICEP2. As you see, they match very well. That is: the BICEP2 signal is apparently well-explained by dust.

….. Planck has observed the whole sky, including the BICEP2 region, although not in precisely the same wavelengths. With a bit of extrapolation, however, they can use their data to estimate how big a signal should be generated by dust in our galaxy. The result fits very well with what BICEP2 actually measured. It’s not completely definitive — the Planck paper stresses over and over the need to do more analysis, especially in collaboration with the BICEP2 team — but the simplest interpretation is that BICEP2’s B-modes were caused by local contamination, not by early-universe inflation. ….. 

“Science by consensus” and “science by press release” and even “science by press release about the consensus” have infected much of what passes for science today.

RIP: Augusto Odone – creator of Lorenzo’s oil

October 26, 2013

Augusto Odone, a former World Bank economist who challenged the world’s medical establishment and created Lorenzo’s oil, has died in Italy aged 80.

The story of Lorenzo’s oil is now well enough known (not least because of the 1992 film). Augusto Odone’s effort is the ultimate example of Citizen Science prevailing over Big Science, of a lone, non-establishment individual battling, persevering and triumphing over an establishment view.

Augusto Odone with his son Lorenzo

Augusto Odone refused to accept medical opinion that his son Lorenzo would die in childhood – BBC

And it is becoming increasingly obvious that Big Science, whether in Medicine or in Physics or in Climate Change, suffers from a fundamental weakness: a massive inertia which prevents the non-establishment view from surfacing. Consensus Science smothers creativity and ingenuity.

Science is losing its ability to self-correct

October 20, 2013

With the explosion in the number of researchers, the increasing rush to publication and the corresponding proliferation of traditional and on-line journals as avenues of publication, The Economist carries an interesting article making the point that the assumption that science is self-correcting is under extreme pressure. “There is no cost to getting things wrong,” says Brian Nosek, a psychologist at the University of Virginia who has taken an interest in his discipline’s persistent errors. “The cost is not getting them published.”

The field of psychology, and especially social psychology, has been much in the news over the dangers of “priming”.

“I SEE a train wreck looming,” warned Daniel Kahneman, an eminent psychologist, in an open letter last year. The premonition concerned research on a phenomenon known as “priming”. Priming studies suggest that decisions can be influenced by apparently irrelevant actions or events that took place just before the cusp of choice. They have been a boom area in psychology over the past decade, and some of their insights have already made it out of the lab and into the toolkits of policy wonks keen on “nudging” the populace.

Dr Kahneman and a growing number of his colleagues fear that a lot of this priming research is poorly founded. Over the past few years various researchers have made systematic attempts to replicate some of the more widely cited priming experiments. Many of these replications have failed. In April, for instance, a paper in PLoS ONE, a journal, reported that nine separate experiments had not managed to reproduce the results of a famous study from 1998 purporting to show that thinking about a professor before taking an intelligence test leads to a higher score than imagining a football hooligan.

It is not just “soft” fields which have problems. It is apparent that in medicine a large number of published results cannot be replicated:

… irreproducibility is much more widespread. A few years ago scientists at Amgen, an American drug company, tried to replicate 53 studies that they considered landmarks in the basic science of cancer, often co-operating closely with the original researchers to ensure that their experimental technique matched the one used first time round. According to a piece they wrote last year in Nature, a leading scientific journal, they were able to reproduce the original results in just six. Months earlier Florian Prinz and his colleagues at Bayer HealthCare, a German pharmaceutical giant, reported in Nature Reviews Drug Discovery, a sister journal, that they had successfully reproduced the published results in just a quarter of 67 seminal studies.

The governments of the OECD, a club of mostly rich countries, spent $59 billion on biomedical research in 2012, nearly double the figure in 2000. One of the justifications for this is that basic-science results provided by governments form the basis for private drug-development work. If companies cannot rely on academic research, that reasoning breaks down. When an official at America’s National Institutes of Health (NIH) reckons, despairingly, that researchers would find it hard to reproduce at least three-quarters of all published biomedical findings, the public part of the process seems to have failed.

It is not just that research results cannot be replicated. So much of what is published is just plain wrong, and the belief that science is self-correcting is itself under pressure:

Academic scientists readily acknowledge that they often get things wrong. But they also hold fast to the idea that these errors get corrected over time as other scientists try to take the work further. Evidence that many more dodgy results are published than are subsequently corrected or withdrawn calls that much-vaunted capacity for self-correction into question. There are errors in a lot more of the scientific papers being published, written about and acted on than anyone would normally suppose, or like to think. …… Statistical mistakes are widespread. The peer reviewers who evaluate papers before journals commit to publishing them are much worse at spotting mistakes than they or others appreciate. Professional pressure, competition and ambition push scientists to publish more quickly than would be wise. A career structure which lays great stress on publishing copious papers exacerbates all these problems. “There is no cost to getting things wrong,” says Brian Nosek, a psychologist at the University of Virginia who has taken an interest in his discipline’s persistent errors. “The cost is not getting them published.” 

…… In 2005 John Ioannidis, an epidemiologist from Stanford University, caused a stir with a paper showing why, as a matter of statistical logic, the idea that only one such paper in 20 gives a false-positive result was hugely optimistic. Instead, he argued, “most published research findings are probably false.” 
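The statistical logic behind that claim fits in a few lines. With some illustrative assumptions (not Ioannidis’s own figures): if only 10% of the hypotheses a field tests are actually true, tests use the conventional 5% false-positive rate, and studies have 50% power, then nearly half of all “positive” findings are false before any publication bias or data-dredging is even considered.

```python
def false_positive_share(prior_true=0.10, alpha=0.05, power=0.50):
    """Share of 'positive' findings that are actually false positives.
    prior_true: fraction of tested hypotheses that are really true (assumed)
    alpha:      false-positive rate of a single test
    power:      probability that a true effect is detected"""
    true_positives = prior_true * power
    false_positives = (1 - prior_true) * alpha
    return false_positives / (true_positives + false_positives)

# With these illustrative assumptions roughly 47% of published positives are false,
# before publication bias, p-hacking or flexible analysis makes things worse.
print(f"{false_positive_share():.0%}")
```

Push the assumed prior or the power lower and the share of false positives climbs further, which is exactly Ioannidis’s point.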

The tendency to publish only positive results also leads to statistics being skewed so that results can be presented as positive:

The negative results are much more trustworthy; …….. But researchers and the journals in which they publish are not very interested in negative results. They prefer to accentuate the positive, and thus the error-prone. Negative results account for just 10-30% of published scientific literature, depending on the discipline. This bias may be growing. A study of 4,600 papers from across the sciences conducted by Daniele Fanelli of the University of Edinburgh found that the proportion of negative results dropped from 30% to 14% between 1990 and 2007. Lesley Yellowlees, president of Britain’s Royal Society of Chemistry, has published more than 100 papers. She remembers only one that reported a negative result.

…. Other data-heavy disciplines face similar challenges. Models which can be “tuned” in many different ways give researchers more scope to perceive a pattern where none exists. According to some estimates, three-quarters of published scientific papers in the field of machine learning are bunk because of this “overfitting”
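The “overfitting” point is easy to demonstrate. The sketch below is my own illustration, not The Economist’s: it fits a many-parameter polynomial to pure random noise, so any “pattern” it finds is spurious by construction. The fit looks respectable on the data it was tuned on and falls apart on a fresh sample, which is exactly the trap a freely “tunable” model sets for its author.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pure noise: by construction there is no real relationship between x and y.
x = np.linspace(0.0, 1.0, 20)
y_train = rng.normal(size=x.size)   # the data the model is "tuned" on
y_fresh = rng.normal(size=x.size)   # an independent sample at the same x values

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# A 16-parameter polynomial fitted to 20 points of noise.
model = np.polynomial.Polynomial.fit(x, y_train, deg=15)
fit = model(x)

print("R^2 on the data it was tuned on:", round(r_squared(y_train, fit), 2))
print("R^2 on fresh data:              ", round(r_squared(y_fresh, fit), 2))
# Typically the first number is high and the second is negative:
# the "pattern" does not exist outside the data it was fitted to.
```

Testing on held-out data is the standard defence, and it is precisely what a freely “tuned” analysis tends to skip.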

The idea that peer review provides some kind of quality check on published results is grossly optimistic:

The idea that there are a lot of uncorrected flaws in published studies may seem hard to square with the fact that almost all of them will have been through peer-review. This sort of scrutiny by disinterested experts—acting out of a sense of professional obligation, rather than for pay—is often said to make the scientific literature particularly reliable. In practice it is poor at detecting many types of error.

John Bohannon, a biologist at Harvard, recently submitted a pseudonymous paper on the effects of a chemical derived from lichen on cancer cells to 304 journals describing themselves as using peer review. An unusual move; but it was an unusual paper, concocted wholesale and stuffed with clangers in study design, analysis and interpretation of results. Receiving this dog’s dinner from a fictitious researcher at a made up university, 157 of the journals accepted it for publication. ….

……. As well as not spotting things they ought to spot, there is a lot that peer reviewers do not even try to check. They do not typically re-analyse the data presented from scratch, contenting themselves with a sense that the authors’ analysis is properly conceived. And they cannot be expected to spot deliberate falsifications if they are carried out with a modicum of subtlety.

Fraud is very likely second to incompetence in generating erroneous results, though it is hard to tell for certain. 

And then there is the issue that results from Big Science can never be replicated, because the cost of the initial work is so high. Medical research and clinical trials are also extremely expensive. Journals have no great interest in publishing replications (even when they are negative). And then, to compound the issue, those who provide funding are less likely to extend funding merely for replication or for negative results.

People who pay for science, though, do not seem seized by a desire for improvement in this area. Helga Nowotny, president of the European Research Council, says proposals for replication studies “in all likelihood would be turned down” because of the agency’s focus on pioneering work. James Ulvestad, who heads the division of astronomical sciences at America’s National Science Foundation, says the independent “merit panels” that make grant decisions “tend not to put research that seeks to reproduce previous results at or near the top of their priority lists”. Douglas Kell of Research Councils UK, which oversees Britain’s publicly funded research argues that current procedures do at least tackle the problem of bias towards positive results: “If you do the experiment and find nothing, the grant will nonetheless be judged more highly if you publish.” 

Trouble at the lab 

The rubbish will only decline when there is a cost to publishing shoddy work which outweighs the gains of adding to a researcher’s list of publications. At some point researchers will need to be held liable and accountable for their products (their publications), not just for fraud or misconduct but even for negligence or gross negligence when they do not carry out their work using the best available practices of the field. These are standards that some (but not all) professionals are held to, and there should be no academic researcher who is not also subject to such a standard. If peer review is to recover some of its lost credibility, then anonymous reviews must disappear and reviewers must be much more explicit about what they have checked and what they have not.

Some Nobel prizes are quite base

October 10, 2013

Now we come to the less important Nobel Prizes: Literature today, Peace tomorrow and Economics on Monday. I see a clear hierarchy of “nobility” for the six awards:

  1. Medicine or Physiology
  2. Chemistry
  3. Physics
  4. Literature
  5. Economics, and
  6. Peace.

The first two still remain fairly true to Alfred Nobel’s intentions. I put Medicine on a higher plane of “nobility” than Chemistry but would have no great quarrel with the order being reversed. Physics has definitely become less “noble”. The whole field has been somewhat degraded by the advent of Big Science and the use of massive “sledgehammers” to try and hammer the universe into submission. But this approach only gives incremental (and often infinitesimal) advances and represents no breakthrough in thought. I suspect that the real advances will still come from individuals and not from the bureaucratic approach of Big Science, where the concept seems to be that advances are directly proportional to the amount of money spent.

I used to believe in the Literature Prize but of late it has become a little subservient to political correctness.

The two “base” prizes with little trace of any nobility are those for Economics and for Peace. Economics is more about social behaviour and there is very little “science” about it. Economic theories have – at best – been of short-lived utility. At worst they have led to global crises.

The Peace prize, of course, has just become a nonsense and brings disgrace to Nobel’s intentions.

