Posts Tagged ‘Climate model’

Another unverifiable doomsday model predicts 4°C rise by 2100

December 31, 2013

What must first be noted is that the lead author, Steve Sherwood, is from the Climate Change Research Centre at the University of New South Wales and is a colleague of Chris Turney – the global warming cheerleader currently stuck in the Antarctic ice. The paper is largely unfounded speculation – no evidence or measurements in sight – but speculation alarmist enough for Nature to publish it. The paper, according to the Nature editor,

offers an explanation for the long-standing uncertainty in predictions of global warming derived from climate models. Uncertainties in predicted climate sensitivity — the magnitude of global warming due to an external influence — range from 1.5° C to 5° C for a doubling of atmospheric CO2. It has been assumed that uncertainties in cloud simulations are at the root of the model disparities, and here Steven Sherwood et al. examine the output of 43 climate models and demonstrate that about half of the total uncertainty in climate sensitivity can be traced to the varying treatment of mixing between the lower and middle troposphere — and mostly in the tropics. When constrained by observations, the authors’ modelling suggests that climate sensitivity is likely to exceed 3° C rather than the currently estimated lower limit of 1.5° C, thereby constraining model projections towards more severe future warming.

Clouds, it seems, are not well understood – but they are the answer!

The time-scale for their predictions – out to 2100 – is sufficiently far away that nothing can be confirmed or denied.

Presumably Sherwood was one of those advising the pilgrims trapped in the Antarctic.

Spread in model climate sensitivity traced to atmospheric convective mixing, Steven C. Sherwood, Sandrine Bony & Jean-Louis Dufresne, Nature 505, 37–42 (2014), doi:10.1038/nature12829

Abstract: Equilibrium climate sensitivity refers to the ultimate change in global mean temperature in response to a change in external forcing. Despite decades of research attempting to narrow uncertainties, equilibrium climate sensitivity estimates from climate models still span roughly 1.5 to 5 degrees Celsius for a doubling of atmospheric carbon dioxide concentration, precluding accurate projections of future climate. The spread arises largely from differences in the feedback from low clouds, for reasons not yet understood. Here we show that differences in the simulated strength of convective mixing between the lower and middle tropical troposphere explain about half of the variance in climate sensitivity estimated by 43 climate models. The apparent mechanism is that such mixing dehydrates the low-cloud layer at a rate that increases as the climate warms, and this rate of increase depends on the initial mixing strength, linking the mixing to cloud feedback. The mixing inferred from observations appears to be sufficiently strong to imply a climate sensitivity of more than 3 degrees for a doubling of carbon dioxide. This is significantly higher than the currently accepted lower bound of 1.5 degrees, thereby constraining model projections towards relatively severe future warming.
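The headline statistic in the abstract – one predictor accounting for "about half of the variance" in sensitivity across models – is an ordinary variance-explained (R²) calculation. The sketch below reproduces only the mechanics, on purely synthetic numbers: the mixing index, coefficients and noise are all invented, and only the count of 43 models comes from the paper.

```python
import numpy as np

# Synthetic illustration only: every number here is invented except the
# count of 43 models, which is taken from the paper.
rng = np.random.default_rng(0)
n_models = 43
mixing = rng.normal(0.0, 1.0, n_models)      # stand-in "convective mixing" index
noise = rng.normal(0.0, 1.0, n_models)       # everything the predictor misses
sensitivity = 3.0 + 1.0 * mixing + noise     # invented model sensitivities (deg C)

# Ordinary least-squares fit of sensitivity on the mixing index.
slope, intercept = np.polyfit(mixing, sensitivity, 1)
predicted = slope * mixing + intercept
ss_res = float(np.sum((sensitivity - predicted) ** 2))
ss_tot = float(np.sum((sensitivity - sensitivity.mean()) ** 2))
r_squared = 1.0 - ss_res / ss_tot            # fraction of variance explained

print(f"fraction of variance explained: {r_squared:.2f}")
```

With the signal and noise variances set equal, the expected R² is about 0.5 – which is the sense in which a single predictor can "explain about half" of an inter-model spread.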

It all smacks of post-rationalisation.

Garbage In Garbage Out.

Broken link between carbon dioxide and global warming could be causing a paradigm shift in climate change theory

November 1, 2013

Dr. David Stockwell writing in Quadrant suggests that a paradigm shift in global warming theory may be underway.

Remember Thomas Kuhn and his paradigm shift? According to his Structure of Scientific Revolutions, theories change only when anomalous observations stress the “dominant paradigm” to the point that it becomes untenable. Until then, a result that fails to conform to the prevailing paradigm is not seen as refuting the dominant theory, but is explained away as a mistake by the researchers, errors in the data, something within the range of uncertainty, and so on. Only at the point of crisis does science become open to a new paradigm. So, does Kuhn inform the current climate debate and help identify important information or an alternative paradigm?

The link between carbon dioxide concentrations and global warming effects is not based on any direct evidence. It rests on the absorption spectrum of the gas and then on assumptions about the “forcing” exerted by trace amounts of the gas in the atmosphere on other parameters such as clouds. This assumed impact is “confirmed” by correlations between global temperature (or temperature proxies) and carbon dioxide concentration, together with the assumption that anthropogenic effects (fossil fuel combustion) dominate the undoubted increase of carbon dioxide concentrations in the atmosphere. It is said that no better correlation exists, but that is not true: ocean cycles have been shown to correlate much more strongly.

This assumed link now looks decidedly shaky as long-ignored parameters (solar effects and the oceans) are taken into account. Carbon dioxide concentrations have been increasing steadily, but global temperatures have stood still for the last 17–18 years; over the last 10 years they have even shown a slight decline. Climate models have used the assumed carbon dioxide effect as an input, but the predictions they have made about Antarctic ice extent, sea level, hurricanes and stormy weather, and hot spots in the troposphere have all proved wrong. Real global temperatures are increasingly diverging from the model results. Oddly, the increase of information is increasing the uncertainty.
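The kind of correlation comparison invoked here – CO2 versus ocean cycles as candidate drivers of temperature – is straightforward to set up. The sketch below uses entirely synthetic series (the shapes, coefficients and noise are invented), so it demonstrates only the mechanics of such a comparison, not which correlation is actually stronger in the real data.

```python
import numpy as np

# All series are synthetic; this shows only how such a comparison is made.
rng = np.random.default_rng(1)
years = np.arange(1900, 2014)
ocean_cycle = np.sin(2 * np.pi * (years - 1900) / 60.0)  # invented ~60-yr cycle
co2_like = 0.01 * (years - 1900)                         # invented smooth rise
temperature = (0.3 * ocean_cycle
               + 0.005 * (years - 1900)
               + rng.normal(0.0, 0.1, years.size))       # invented "temperature"

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    xd, yd = x - x.mean(), y - y.mean()
    return float(np.sum(xd * yd) / np.sqrt(np.sum(xd ** 2) * np.sum(yd ** 2)))

r_co2 = pearson_r(co2_like, temperature)
r_ocean = pearson_r(ocean_cycle, temperature)
print(f"r(CO2-like, T) = {r_co2:.2f}; r(ocean cycle, T) = {r_ocean:.2f}")
```

A higher coefficient for one candidate driver is, of course, only a correlation; by itself it settles nothing about causation in either direction.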

Climate models can be seen as encapsulating the dominant theory, even though they are composed of many different theories regarding land, the ocean and atmosphere. Despite their differences they are also similar in many ways, sharing terminology such as the ‘radiative kernel’. Let’s agree, for the purpose of argument, that the dominant AGW paradigm is of global temperature’s high sensitivity to CO2 doubling, resulting in an increase of around 3°C, which appears to be about the central estimate of the climate models.

Does the 15-year ‘pause’ in global temperatures stress this theory? Certainly to some, the stress has already reached a ‘crisis’; while to others the divergence can be explained away by natural variation, uncertainty, and errors in the data. 

Do failed models and their predictions of increasing extreme events, like hurricanes, droughts and floods, stress the climate models?  Possibly not.  From a physical perspective, these phenomena lie at the boundaries of the theory.  Hurricanes, droughts and floods are ‘higher order’ statistics — extremes not climate averages. Surface temperature is only a part of the greater global climate system. Because anomalous behavior at the margins can be discarded without sacrificing the main theory, their power to confirm or reject the dominant paradigm is somewhat limited. 

Ocean heat content, however, is in a unique position. The world’s oceans store over 90% of the heat in the climate system. Arguably, therefore, increases in ocean heat determine overall global warming. Ocean heat represents the physical bulk of the global heat store, and so should carry the most weight in our assessment of the status of AGW. Observations of ocean heat uptake represent the crucial experiment — observations capable of decisively dismantling a theory despite its widespread acceptance in the scientific community. The Argo project to monitor ocean heat with thousands of drifting buoys is the crucial experiment of the AGW stable.

A number of climate bloggers have remarked on the very low rate of ocean heat uptake (here, and here, and here), much lower than predicted by the models (here, here, and here). The last link is about Nic Lewis, a co-author of Otto et al. 2013, who feels that recent findings of low climate sensitivity, many based on ocean heat content, have led a number of prominent IPCC authors to abandon the higher estimates of climate sensitivity. That may not be a ‘catastrophe’ for the dominant AGW paradigm, but it is certainly a lurch by insiders towards the lower ends of risk and urgency.

The IPCC panel preparing the AR5 report may not have been devastated when they changed the likely range of climate sensitivity, which had stood at 2–4.5°C since 1990. The lower estimate has now been dropped from 2°C to 1.5°C. What has not been appreciated is that increasing the range of uncertainty is impossible in a period of Kuhnian ‘normal science’, where new information always decreases uncertainty.

The ‘blow-out’ in the range of likely climate sensitivity can only mean one thing: We are no longer in a period of ‘normal’ science, but entering a period of ‘paradigm shift’. ….

…..

Dr. Stockwell concludes:

Climate skeptics don’t want to say we told you so but, well, we told you so. Even though we do not yet have an accepted theory of solar influence, there are 25 unique models in the AR5-sponsored CMIP5 archive, most with a climate sensitivity untenable on observations from the last decade.

Take out Occam’s razor and cull them – deep and hard.

Dr David Stockwell, Adjunct Researcher, Central Queensland University

Simple harmonic model – without carbon dioxide – fits reality better than the IPCC climate models

August 16, 2013

A new post at the Norwegian GeoForskning (Geological Research) site by Jan-Erik Solheim and Ole Humlum is quite significant, I think. Solheim is Professor (emeritus) at the Institutt for teoretisk astrofysikk, University of Oslo, while Humlum is Professor of Physical Geography at the University of Oslo and an adjunct professor at UNIS (the University Centre in Svalbard). The post shows that a simple harmonic model (movements of the sun, moon and planets together with linear trends) provides a better fit to the global temperature data since 1850 – and is likely a better predictor – than the assembly of 44 climate models used by the IPCC. They find no signal since the 1950s that could correspond to any significant impact of carbon dioxide concentration, and find no need to include such an influence. If such an effect is present, its influence is minuscule.
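A model of the kind described – a linear trend plus a few fixed-period harmonics, fitted by least squares – is easy to sketch. The series and the candidate periods below are invented for illustration and are not the ones Solheim and Humlum actually used.

```python
import numpy as np

# Invented series and candidate periods, for illustration only.
rng = np.random.default_rng(2)
t = np.arange(1850, 2013, dtype=float)
obs = (0.005 * (t - 1850)                      # linear trend
       + 0.2 * np.sin(2 * np.pi * t / 60.0)    # one planted cycle
       + rng.normal(0.0, 0.1, t.size))         # noise

periods = [60.0, 20.0, 9.1]                    # assumed candidate periods (years)
cols = [np.ones_like(t), t - 1850]             # intercept and linear trend
for p in periods:                              # one sine/cosine pair per period
    cols.append(np.sin(2 * np.pi * t / p))
    cols.append(np.cos(2 * np.pi * t / p))
design = np.column_stack(cols)

# Ordinary least squares solves for all amplitudes and the trend at once.
coef, *_ = np.linalg.lstsq(design, obs, rcond=None)
fitted = design @ coef
rms_residual = float(np.sqrt(np.mean((obs - fitted) ** 2)))
print(f"RMS residual of harmonic fit: {rms_residual:.3f}")
```

Fitting a sine/cosine pair per period (rather than a sine with an unknown phase) keeps the problem linear, so one least-squares solve recovers amplitude and phase for every candidate cycle simultaneously.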

Models need to be parsimonious, excluding parameters and mechanisms whose effects cannot be discerned; otherwise they cannot be anchored in reality. A problem with many of the so-called climate models is that they include hypothetical effects which cannot be discerned in the available data, then apply feedback forcings to those hypothetical effects, and then conclude that the results are valid!
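Parsimony can be made quantitative: fit nested models and keep extra parameters only if the improvement in fit outweighs the penalty of an information criterion such as AIC. A minimal sketch on synthetic data (all numbers invented; the 60-year cycle is planted so that, in this example, the extra parameters genuinely earn their keep):

```python
import numpy as np

# Synthetic series with a planted cycle, for illustration only.
rng = np.random.default_rng(3)
t = np.arange(1850, 2013, dtype=float)
obs = (0.005 * (t - 1850)
       + 0.2 * np.sin(2 * np.pi * t / 60.0)
       + rng.normal(0.0, 0.1, t.size))

def fit_aic(design, y):
    """Least-squares fit; return the Akaike information criterion."""
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    rss = float(np.sum((y - design @ coef) ** 2))
    n, k = y.size, design.shape[1]
    return n * np.log(rss / n) + 2 * k         # lower is better

trend_only = np.column_stack([np.ones_like(t), t - 1850])
with_cycle = np.column_stack([trend_only,
                              np.sin(2 * np.pi * t / 60.0),
                              np.cos(2 * np.pi * t / 60.0)])

aic_trend = fit_aic(trend_only, obs)
aic_cycle = fit_aic(with_cycle, obs)
print(f"AIC trend-only: {aic_trend:.1f}; AIC trend+cycle: {aic_cycle:.1f}")
```

Had no cycle been present in the series, the two extra parameters would typically raise the AIC, and parsimony would say to drop them – which is the Occam's-razor cull the post argues for.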

If there had been warming due to CO2, it should appear as a deviation from the simple harmonic model since 1950. There are no signs of any additional heating due to CO2 as the IPCC claims in its reports, and the CO2 effects on which the IPCC's climate models are based are exaggerated. The net effect of CO2 is thus so modest that it cannot be seen in this data.

A simple, empirical, harmonic climate model

by Jan-Erik Solheim and Ole Humlum

(The paper is in Norwegian and this English version is from the HockeySchtick)

Climate Models and Pepsodent

June 10, 2013

You’ll wonder where the warming went

when you brush your models with excrement

(with apologies to Pepsodent)

Climate models just aren’t good enough – yet.

As real observations increasingly diverge from model results, the global warming establishment is reacting in two ways:

  1. denial, by the Warmist orthodoxy who prefer model results to real data, and
  2. questioning, by real scientists who have begun to examine the assumptions on which these models are based.

Two articles have recently been published in the mainstream scientific literature which question climate models.

1. What Are Climate Models Missing?, Bjorn Stevens and Sandrine Bony, Science, 31 May 2013, Vol. 340 no. 6136, pp. 1053-1054, DOI: 10.1126/science.1237554

Abstract: Fifty years ago, Joseph Smagorinsky published a landmark paper (1) describing numerical experiments using the primitive equations (a set of fluid equations that describe global atmospheric flows). In so doing, he introduced what later became known as a General Circulation Model (GCM). GCMs have come to provide a compelling framework for coupling the atmospheric circulation to a great variety of processes. Although early GCMs could only consider a small subset of these processes, it was widely appreciated that a more comprehensive treatment was necessary to adequately represent the drivers of the circulation. But how comprehensive this treatment must be was unclear and, as Smagorinsky realized (2), could only be determined through numerical experimentation. These types of experiments have since shown that an adequate description of basic processes like cloud formation, moist convection, and mixing is what climate models miss most.

2. Emerging selection bias in large-scale climate change simulations, Kyle L. Swanson, Geophysical Research Letters, online 16th May 2013, DOI: 10.1002/grl.50562

Abstract: Climate change simulations are the output of enormously complicated models containing resolved and parameterized physical processes ranging in scale from microns to the size of the Earth itself. Given this complexity, the application of subjective criteria in model development is inevitable. Here we show one danger of the use of such criteria in the construction of these simulations, namely the apparent emergence of a selection bias between generations of these simulations. Earlier generation ensembles of model simulations are shown to possess sufficient diversity to capture recent observed shifts in both the mean surface air temperature as well as the frequency of extreme monthly mean temperature events due to climate warming. However, current generation ensembles of model simulations are statistically inconsistent with these observed shifts, despite a marked reduction in the spread among ensemble members that by itself suggests convergence towards some common solution. This convergence indicates the possibility of a selection bias based upon warming rate. It is hypothesized that this bias is driven by the desire to more accurately capture the observed recent acceleration of warming in the Arctic and corresponding decline in Arctic sea ice. However, this convergence is difficult to justify given the significant and widening discrepancy between the modeled and observed warming rates outside of the Arctic.
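The consistency test described in this abstract – does an ensemble's spread still contain the observed value? – can be illustrated with a toy check. Every number below is invented; the point is only the shape of the test, in which a narrow ensemble centred away from the observation fails where a wide one passes.

```python
import numpy as np

# Toy consistency check; all numbers are invented for illustration.
rng = np.random.default_rng(4)
observed_trend = 0.10                            # pretend observed value
older_ensemble = rng.normal(0.15, 0.10, 25)      # wide spread, 25 members
newer_ensemble = rng.normal(0.25, 0.03, 25)      # narrower and warmer

def consistent(ensemble, obs, n_sd=2.0):
    """True if obs lies within n_sd sample standard deviations of the mean."""
    return abs(obs - ensemble.mean()) <= n_sd * ensemble.std(ddof=1)

print("wide ensemble contains the observation:  ",
      bool(consistent(older_ensemble, observed_trend)))
print("narrow ensemble contains the observation:",
      bool(consistent(newer_ensemble, observed_trend)))
```

This is the paradox the abstract points at: a shrinking ensemble spread looks like convergence, but if the ensemble converges away from the observations it becomes easier, not harder, to falsify.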


We learn about climate only when the models are wrong!

March 29, 2013

When a forecast based on a mathematical model is correct, we learn nothing.

A mathematical model is merely a theory, a simplification of reality or an approximation to the real world. By definition a mathematical model is a hypothesis.  When forecasts are incorrect, we can return to our model and improve it and make a new hypothesis. A forecast is then a test of the model but in just one particular set of circumstances. Being correct does not prove the theory behind the model. It does of course add to the body of evidence that the model may be a satisfactory representation of reality and it does allow further forecasts to be made without tweaking the model. For learning to take place the mathematical model must be the falsifiable hypothesis of the scientific method.

It seems to me that Solar Science has a much healthier (scientifically) attitude to models and forecasts than “Climate Science”. When observations don’t match a climate forecast, the observations are impugned rather than the models being improved. This is, I think, because the forecast climate results have been used to establish huge revenue flows in the political arena (whether as taxes or carbon credits or just as research funding). There has been a vested interest in denying the observations and calling the science “settled”. Once the science is “settled”  the climate forecast and its underlying model become sacrosanct and take on the certainty of prophecy. Instead of being falsifiable hypotheses, climate model forecasts have taken on the character of unfalsifiable prophecies!

No scientist would presume to claim that we know or understand all solar effects, or that we know and understand the role of the oceans or of the water vapour, dust and aerosols in the atmosphere. “Climate” is contained in the thin, chaotic layer of atmosphere which surrounds us. Yet “Climate Science” makes the arrogant assumption that the effect of trace amounts of carbon dioxide on climate is known definitively. Filling a real greenhouse with higher concentrations of carbon dioxide does not make that greenhouse any warmer than one filled with normal air – but the plants do grow faster with access to the additional CO2! But, claims the climate priesthood, in the real atmosphere carbon dioxide causes other forcings (clouds? aerosols? precipitation effects?) which maximise warming – which means that the model is still valid. Why not just admit that we don’t know what we don’t know?

The behavioural issue, of course, is whether it is worth trying to control something as poorly understood as climate, rather than ensuring that we have the wherewithal to adapt to whatever changes may come. Another ice age will surely come, whether in 10 years or 100 or 2,000. It will then be our ability to harness all available energy sources around us which will determine our capacity to adapt.

Learning from forecasts when they are wrong – not just in science but also in business and project management and technology development – has long been a hobby-horse of mine and is why forecasts need to be wrong.

When there is no difference there is no learning.

  • I take prophecies to be promises about the future based primarily on faith and made by prophets, witchdoctors, soothsayers and politicians – such as “You will be doomed to eternal damnation if you don’t do as I say”.
  • I take forecasts to be estimates of future conditions based on known data, with the use of calculations, logic, judgement, some intuition and even some faith. They are extrapolations of historical conditions made to anticipate – and thereby plan for – future conditions.

……. Over the last 30 years I have spent a lot of time conducting and participating in reviews: reviews of research projects, of construction projects, of organisations and processes, of designs, of strategies and action plans, of businesses and of companies. The common feature in all these different reviews that I have found the most penetrating has been the comparison not only between forecast values and actual values (which may be any values indicating performance and capable of being extrapolated), but also between past forecasts and current forecasts.

Whether considering construction progress or costs or sales figures or cash flow or profit or the number of patents applied for, it is the differences – between forecast and actual values, or between values forecast earlier and values forecast later – which have led to learning. In all these fields we are dealing with the behaviour of complex systems; and where people and their behaviour are involved, any system is inevitably a complex system.
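The two comparisons described – actuals against the original forecast, and successive forecasts against each other – amount to a few lines of arithmetic. A minimal sketch with invented quarterly figures:

```python
# Invented quarterly figures, for illustration only.
forecast_2010 = [100, 110, 120, 130]   # original forecast for four quarters
forecast_2011 = [100, 105, 108, 112]   # the same quarters, re-forecast later
actual        = [100, 104, 107, 111]   # what actually happened

# Difference 1: actuals against the original forecast.
forecast_error = [a - f for a, f in zip(actual, forecast_2010)]
# Difference 2: how much the forecast itself was revised.
revision = [later - earlier for earlier, later in zip(forecast_2010, forecast_2011)]

print("actual minus original forecast:", forecast_error)   # [0, -6, -13, -19]
print("later forecast minus earlier:  ", revision)         # [0, -5, -12, -18]
```

It is the growing gap in the first list, and the direction of the revisions in the second, that prompt the questions a review is for; when both lists are all zeros, nothing is learned.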

When a forecast is fulfilled there is usually an air of congratulation, satisfaction and self-adulation, and this leads to a deadly complacency that everything is “settled science” and well understood. In any enterprise of any kind, that kind of complacency is the kiss of death. It is the differences which lead to questioning, to proper scientific scepticism, to further investigation and ultimately to an increase of understanding and – perhaps – a better forecast. (Of course, ignoring all such differences and merely continuing as before can be equally fatal.)

Which brings me to climate (which is not a science by any stretch of the imagination) and solar cycles. Both lie not only in the realm where “what we know is a great deal less than what we don’t know” but also in the region where “we don’t even know what we don’t know”. We do not even know all the questions to be asked. Both are complex systems where – by definition – the complexity lies in the multitude of the processes involved and their interactions.

When climate – which is contained in the top 100 m of the ocean and the 20-or-so-km-thick, turbulent and chaotic atmospheric layer (and which is dimensionally minuscule in relation to the 150 million km of the earth-to-sun system) – is so complacently considered to be “settled science”, then we have shifted into the area of faith and soothsaying and prophecy. When climate modellers are smug enough to believe they have understood the climate system and that their models are complete, then the models produce outputs which are not forecasts but prophecies. (No doubt soothsayers and shamans have sometimes made accurate prophecies, but I still would not buy a used car from one of them!) Weather is in the realm of forecast (though you could argue that the most accurate forecast is still that “the weather tomorrow will be like today”), but climate is not yet there.

This kind of “arrogance”, which pervades some of the climate “scientists”, is not so prevalent when it comes to the study of solar cycles. There is a clear understanding that “we don’t know what we don’t know”. In addition to the 11-year and 22-year cycles, other cycles are hypothesised at 87 years, 210 years, 2,300 years (or maybe 2,241 or 2,500 years) and 6,000 years. We have no idea what causes these cycles. Even the 11-year cycle, which has been the most studied, produces surprises every day, but it is properly in the area of “forecast” (and hopefully never again will be in the area of prophecy). ….
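The standard first step in looking for such periodicities is a periodogram. The sketch below plants 11-year and 87-year components in a synthetic, evenly sampled series and recovers the dominant period by FFT; real sunspot analysis is considerably more involved (uneven sampling, red noise, significance testing), so this shows only the basic idea.

```python
import numpy as np

# Synthetic series with 11-yr and 87-yr components planted deliberately.
rng = np.random.default_rng(5)
years = np.arange(0, 1000, dtype=float)             # 1000 annual samples
series = (np.sin(2 * np.pi * years / 11.0)          # strong 11-yr cycle
          + 0.5 * np.sin(2 * np.pi * years / 87.0)  # weaker 87-yr cycle
          + rng.normal(0.0, 0.3, years.size))       # noise

# Power spectrum of the mean-removed series.
power = np.abs(np.fft.rfft(series - series.mean())) ** 2
freqs = np.fft.rfftfreq(years.size, d=1.0)          # cycles per year
peak = int(np.argmax(power[1:]) + 1)                # skip the zero frequency
print(f"strongest period: {1.0 / freqs[peak]:.1f} years")
```

Note the limitation this makes visible: a 1,000-year record gives very coarse frequency resolution near the hypothesised 2,300- or 6,000-year periods, which is one reason those long cycles remain speculative.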

…… We seem to be in a solar minimum. We may be seeing a 210 year cycle – or maybe not. There are changes to the forecasts not only regarding the maximum level of sunspot activity but also about when it will occur and what the length of cycle 24 might be. There is speculation as to what effect the length of the solar cycle may have on climate – but we haven’t a clue as to what mechanisms may be involved.  This is not to say that there isn’t much speculation and hypothesising. There is a great deal of comment about the effect these changing forecasts may have on global warming or cooling or climate disruption.  In some quarters there is much glee that the forecasts have been “wrong”. Some comments question the intelligence of the forecasters.

But of course the forecasts themselves say nothing about how the behaviour of the sun may impact our climate. They do not pretend to be prophecies or to be statements of inevitable outcomes. All they do say is that we don’t know very much – yet – about the sun. But we do know enough to make some tentative forecasts.

But I am very glad that people continue to be brave enough to make forecasts and I am quite relieved that the forecasts are not spot on. That at least ensures we will continue learning.

Finally — a climate model is revised

January 9, 2013

UPDATE! The important point of this story is not whether global warming has stopped, is continuing, or whether the world is cooling. Climate will go the way it will. The real significance of this story is that climate models are not just far from perfect – they are plain wrong. What is worse is that when a model is not borne out by reality, the “politically correct” but false assumptions (such as that man-made CO2 causes significant warming, or that solar effects are minor) are not even reviewed.

This has been doing the rounds for a few days now, but the BBC – which tends to be one of the pillars of the Global Warming religion – has finally come round to reporting that the British Met Office now predicts that global temperatures could decrease somewhat over the next decade. It is good to see a climate model being revised in the face of reality. Unfortunately, most climate models just retain their assumptions and add fudge factors every time reality fails to match their forecasts, when instead they ought to be questioning the very assumptions the models are built on. But that loss of face would be too expensive in terms of the funding already flowing into discredited models, and would be too much to take in one go. The fundamental requirement of good science is that when models don't fit, it is time to question the assumptions in the model – not to find fudge factors.

BBC: Climate model forecast is revised

The UK Met Office has revised one of its forecasts for how much the world may warm in the next few years. …. If the forecast is accurate, the result would be that the global average temperature would have remained relatively static for about two decades.

…. Climate scientists at the Met Office and other centres are involved in intense research to try to understand what is happening over the most recent period.

The most obvious explanation is natural variability – the cycles of changes in solar activity and the movements and temperatures of the oceans.


Infographic (Met Office): The forecasts are based on a comparison with the average global temperature over the period 1971-2000

Of course the BBC report then goes on to proclaim that this not a global cooling and that global warming will continue.

But of course neither this nor any of the other exaggerated models will remove the assumed link between global warming and man-made carbon dioxide, for which there is no direct evidence whatever.

Tallbloke reported on the story here a few days ago.

