Statistician issues a $100,000 challenge for anyone to prove that global warming is statistically significant

Doug Keenan, who used to do research on financial trading and is now an independent statistician, has on a number of occasions pointed out the very poor statistical analysis carried out by “scientists” in drawing unwarranted conclusions. He has challenged, for example, the statistics used in radiocarbon dating, in the use of proxies such as grape harvests for temperature, and in early Chinese chronologies based on planetary conjunctions. He has long criticised the “global warming” community for dodgy statistics in dealing with temperature time series. Most famously, he argued that the statistical analysis in the IPCC 2013 (AR5) report was incompetent.

He has now issued a challenge to the global warming community to show that their statistical methods of handling temperature data are valid and can distinguish real trends from purely random time series. He contends that claimed global warming trends are mere assertions that are not statistically justified. He is putting his money where his mouth is: he has offered $100,000 to anyone who can make that demonstration.

There have been many claims of observational evidence for global-warming alarmism. I have argued that all such claims rely on invalid statistical analyses. Some people, though, have asserted that the analyses are valid. Those people assert, in particular, that they can determine, via statistical analysis, whether global temperatures are increasing more than would be reasonably expected by random natural variation. Those people do not present any counter to my argument, but they make their assertions anyway.

In response to that, I am sponsoring a contest: the prize is $100 000. In essence, the prize will be awarded to anyone who can demonstrate, via statistical analysis, that the increase in global temperatures is probably not due to random natural variation.

Competition

The file Series1000.txt contains 1000 time series. Each series has length 135 (the same as that of the most commonly studied series of global temperatures). The series were generated as follows. First, 1000 series were obtained via trendless statistical models fit for global temperatures. Then, some randomly-chosen series had a trend added to them. Some trends were positive; others were negative. Each individual trend averaged (in magnitude) 1°C/century—which is greater than the trend claimed for global temperatures.
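To make the setup concrete, here is a rough sketch of how such a file could be produced. This is not Keenan’s program (that, with its random seed, is only released when the contest closes); the trendless model, the fraction of series given a trend, and the seed below are all assumptions for illustration.

```python
import numpy as np

# Illustrative only: the generating model, trend fraction, and seed are assumptions;
# Keenan's actual program and seed are posted only when the contest closes.
rng = np.random.default_rng(12345)

N_SERIES, LENGTH = 1000, 135          # values stated in the contest description
TREND_PER_YEAR = 1.0 / 100.0          # 1 °C/century expressed per yearly step

def trendless_series(rng, length=LENGTH, phi=0.6, sigma=0.11):
    """One trendless series: a driftless random walk whose innovations follow
    an AR(1) process (a stand-in for whatever trendless model was fit)."""
    shocks = rng.normal(0.0, sigma, size=length)
    innov = np.zeros(length)
    for t in range(1, length):
        innov[t] = phi * innov[t - 1] + shocks[t]
    return np.cumsum(innov)

series, truth = [], []
for _ in range(N_SERIES):
    y = trendless_series(rng)
    trending = rng.random() < 0.5                 # the true fraction is not disclosed
    if trending:
        sign = rng.choice([-1.0, 1.0])            # "Some trends were positive; others were negative"
        y = y + sign * TREND_PER_YEAR * np.arange(LENGTH)
    series.append(y)
    truth.append(bool(trending))
```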

A prize of $100 000 (one hundred thousand U.S. dollars) will be awarded to the first person, or group of people, who correctly identifies at least 900 series: which series were generated by a trendless process and which were generated by a trending process.
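For a sense of what an entry involves, a naive attempt is sketched below: fit an ordinary least-squares line to each series and guess “trend” whenever the slope is statistically significant. The function name and the 5% threshold are illustrative choices, and Keenan’s point is precisely that a test like this depends on an assumed noise model, so it should not be expected to reach the 900-out-of-1000 mark.

```python
import numpy as np
from scipy import stats

def looks_trending(y, alpha=0.05):
    """Naive classifier: fit an OLS line and call the series 'trending' if the
    slope differs significantly from zero at level alpha.  This treats the
    residuals as independent white noise, exactly the kind of modelling
    assumption Keenan argues is unjustified for autocorrelated
    temperature-like series."""
    t = np.arange(len(y))
    return stats.linregress(t, y).pvalue < alpha

# Hypothetical usage, once the series are read from Series1000.txt:
# guesses = [looks_trending(y) for y in series]
```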

Contest entries should be emailed to me (doug dot keenan at informath.org). Each entry in the contest must be accompanied by a payment of $10; this is being done to inhibit non-serious entries.

The contest closes at the end of 30 November 2016, or when someone submits a prize-winning answer, whichever comes first.

When the contest closes, the computer program (including the random seed) that generated the 1000 series will be posted here. As an additional check, the file Answers1000.txt identifies which series were generated by a trendless process and which by a trending process. The file is encrypted. The encryption key and method will also be posted here when the contest closes.
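The arrangement is in effect a cryptographic commitment: the answer file is published now in encrypted form and the key is revealed later, so anyone can check that the answers were fixed in advance. The snippet below shows one common way such a commitment could be implemented; Keenan’s actual encryption method is not stated and will only be posted when the contest closes.

```python
from cryptography.fernet import Fernet

# Commit phase: encrypt the answer list now and publish only the ciphertext.
key = Fernet.generate_key()
with open("Answers1000.txt", "rb") as f:
    ciphertext = Fernet(key).encrypt(f.read())
with open("Answers1000.enc", "wb") as f:
    f.write(ciphertext)

# Reveal phase (at contest close): publish the key, so anyone can verify that
# decrypting the posted ciphertext reproduces the original answer file.
# recovered = Fernet(key).decrypt(ciphertext)
```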

Contestants have until the end of November 2016. However, I don’t expect any current “climate scientist” to rise to the challenge, and I don’t expect “climate politicians” such as John Kerry even to understand the issue.


Addendum on 18th August 2016 from Doug Keenan:

A paper by Lovejoy et al. was published in Geophysical Research Letters. The paper is about the Contest.

The paper is based on the assertion that “Keenan claims to have used a stochastic model with some realism”; the paper then argues that the Contest model has inadequate realism. The paper provides no evidence that I have claimed that the Contest model has adequate realism; indeed, I do not make such a claim. Moreover, my critique of the IPCC statistical analyses (discussed above) argues that no one can choose a model with adequate realism. Thus, the basis for the paper is invalid. The lead author of the paper, Shaun Lovejoy, was aware of that, but published the paper anyway.

When doing statistical analysis, the first step is to choose a model of the process that generated the data. The IPCC did indeed choose a model. I have only claimed that the model used in the Contest is more realistic than the model chosen by the IPCC. Thus, if the Contest model is unrealistic (as it is), then the IPCC model is even more unrealistic. Hence, the IPCC model should not be used. Ergo, the statistical analyses in the IPCC Assessment Report are untenable, as the critique argues.

For an illustration, consider the following. Lovejoy et al. assert that the Contest model implies a typical temperature change of 4 °C every 6400 years, which is too large to be realistic. Yet the IPCC model implies a temperature change of about 41 °C every 6400 years. (To confirm this, see Section 8 of the critique and note that 0.85×6400/133 = 41.) Thus, the IPCC model is far more unrealistic than the Contest model, according to the test advocated by Lovejoy et al. Hence, if the test advocated by Lovejoy et al. were adopted, then the IPCC statistical analyses would be untenable.
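Writing out the arithmetic behind that parenthetical check, with 0.85 °C per 133 years being the warming rate used in the critique and scaled linearly to 6400 years:

$$0.85\ ^{\circ}\mathrm{C} \times \frac{6400\ \text{years}}{133\ \text{years}} \approx 40.9\ ^{\circ}\mathrm{C} \approx 41\ ^{\circ}\mathrm{C}.$$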

