The ABC of climate change denial

The ABC chairman Maurice Newman’s thoughts on the reporting of climate change are, I think, symptomatic of the damage that denialism has inflicted. He was interviewed on Wednesday, and appears more than a little ignorant of the state of our climate knowledge, and even a little naïve regarding scientific processes.

Newman says:

My view on any of these topics is to keep an open mind and I still have an open mind on climate change, I have an open mind on a whole range of issues because I think that to have a closed mind leaves you in a position where if you take a strong stance you are likely to be wrong-footed.

And I’ve just made the point that I’ve been around long enough to know that consensus and conventional wisdom doesn’t always serve you well and that unless you leave some room for an alternative point of view you are likely to go down a wrong track.

This is all fine and good as far as platitudes go, and presenting alternative points of view is all very democratic. One can never be completely certain about scientific outcomes, after all.

However, there is a line, somewhere, beyond which we must accept that an assertion (e.g. that we are changing the climate) is sufficiently well-supported to be considered true, and that alternative viewpoints (however well meaning) are so implausible as to be wrong. The truth is not absolute, but neither is it a matter of opinion, and providing “balance” in such situations is grossly misleading.

Newman’s mistake, perhaps, is in assuming that a consensus among scientists is just like a consensus among any other demographic. This rather misses the point of science. Scientists have fought long and hard – certainly, a lot harder than anyone else – to understand the truth. Science does not just systematically invent evidence and stories to support pre-determined conclusions, as so often happens with political interest groups. Science exists so that we can have at least some people who don’t do this, so that the whole world isn’t just a fantasy land where the laws of physics can be amended by popular vote. Observers of politics may have difficulty swallowing the idea that anyone cares about the actual, real truth, because in politics it’s such an alien concept. This is really a terribly cynical and blinkered viewpoint.

I think that there are points of view supporting what you’ve just said, there are other points of view which will discount that and they come from also eminent positions; these are not cranks. Many of the people who have a different point of view on the climate science are respectable and credentialed scientists themselves.

So as I said, I’m not a scientist and I’m like anybody else in the public I have to listen to all points of view and then make judgements when we’re asked to vote on particular policies.

Here Newman betrays something of an unwillingness to properly investigate the issue. Most of the people who have a different point of view on climate science are most certainly not eminent scientists. Most of them are bloggers (like me). And yes, there are cranks – Lord Christopher Monckton being a particularly spectacular example. Some scientists do fall into the dissenters’ camp, but most of them are not involved in climate science.

It’s interesting to note that, while denialist opinion is usually contrasted against the views of the IPCC, the IPCC’s reports themselves are based on the broad spectrum of views permeating the scientific community. If you’re after some sort of balance, you would do well to remember that alternative views have already been factored in by the IPCC. The only real debate is over the magnitude of climate change and its effects. Those who argue that it isn’t happening, or that we aren’t responsible, or that we can’t change anything, tend to be very light on relevant scientific credentials.

I am an agnostic and I have always been an agnostic and I will remain an agnostic until I’ve found compelling evidence on one side or the other that will move me. I think that what seems fairly clear to me is that the climate science is still being developed. There are a lot of question marks about some of the fundamental data which has been used to build models that requires caution.

There are not a “lot of question marks” over this data. There’s simply a lot of hot air coming out of those who read and believe the things that Steve McIntyre and Anthony Watts write. Newman has apparently bought into this sort of disinformation.

It’s highly unlikely that he would even recognise “compelling evidence” if it were presented to him. And why would he expect to, after all? What would he, as a layperson, accept as “compelling evidence” that anthropogenic climate change is real? Does Newman need to personally assess the evidence for other scientific theories as well? What would he accept as compelling evidence that quantum theory accurately describes the universe? What would convince him that a newly-discovered hundred-thousand-year-old skeleton represents a previously-unknown species of human? There is expertise involved in making such judgments. Laypeople like Newman, or indeed myself, cannot presume to be equals in this respect.

In other words, the reason Newman hasn’t seen any compelling evidence is that, in all probability, he doesn’t know what he’s looking for.

This is the subtle, deranged beauty of climate science denialism. Everyone is an expert! It doesn’t matter whether the denialists themselves win over any actual supporters. What matters is that they bring the credibility of science down to the level of punditry, in the eyes of their audience. The denialists succeed by creating agnostics who feel they are above the fray, who don’t even bother to distinguish between scientists and bloggers. I wouldn’t hold this against most laypeople, but for those who should know better, this is outright intellectual laziness disguised as a form of neutrality. Surely the chairman of the ABC has a duty to be better informed.

Climate reporting – compare and contrast

There’s a subtle difference here that I can’t quite put my finger on.

An article in The Register (by Lewis Page):

Agricultural brainboxes at Stanford University say that global warming isn’t likely to seriously affect poor people in developing nations, who make up so much of the human race. Under some scenarios, poor farmers “could be lifted out of poverty quite considerably,” according to new research.

The Stanford University report on which it was (purportedly) based:

The impact of global warming on food prices and hunger could be large over the next 20 years, according to a new Stanford University study. Researchers say that higher temperatures could significantly reduce yields of wheat, rice and maize – dietary staples for tens of millions of poor people who subsist on less than $1 a day. The resulting crop shortages would likely cause food prices to rise and drive many into poverty.

But even as some people are hurt, others would be helped out of poverty, says Stanford agricultural scientist David Lobell.

(My emphasis.)

The Register’s article is a transparent and spectacular case of selective reading. The Stanford report briefly discusses a complex set of effects, some of which are actually positive. The rose-tinted spectacles at The Register apparently have a problem seeing the opening paragraph, and instead treat the report as though it were some sort of vindication of climate inaction.

Climate researchers really can’t win in the face of such wilful distortion. If their research shows that the effects are all negative, they are portrayed as “alarmists”. If their research shows some mitigating factors, then these will be trumpeted as proof that climate change is a “scare”.

The title and subtitle of The Register’s article hint at the underlying attitude:

Global warming worst case = Only slight misery increase

The peasants aren’t revolting – they’ve never had it so good

The world’s poor have “never had it so good”, eh? I’m glad to see such overflowing concern for the less fortunate.

Open source science

Slashdot notes an article from the Guardian: “If you’re going to do good science, release the computer code too”. The author, Darrel Ince, is a Professor of Computing at The Open University. You might recognise something of the mayhem that is the climate change debate in the title.

Both the public release of scientific software and the defect content thereof are worthwhile topics for discussion. Unfortunately, Ince seems to go for over-the-top rhetoric without having a great deal of evidence to support his position.

For instance, Ince cites an article by Professor Les Hatton (whom I also cite, on account of his recent study on software inspection checklists). Hatton’s article here was on defects in scientific software. The unwary reader might get the impression that Hatton was specifically targeting recent climate modelling software, since that’s the theme of Ince’s article. However, Hatton discusses studies conducted from 1990 to 1994, in different scientific disciplines. The results might still be applicable, but it’s odd that Ince would choose to cite such an old article as his only source. There are much newer and more relevant papers; for instance:

S. M. Easterbrook and T. C. Johns (2009), Engineering the Software for Understanding Climate Change, Computing in Science and Engineering.

I stumbled across this article within ten minutes of searching. While Hatton takes a broad sample of software from across disciplines, Easterbrook and Johns delve into the processes employed specifically in the development of climate modelling software. Hatton reports defect densities of around 8 or 12 per KLOC (thousand lines of code), while Easterbrook and Johns suggest 0.03 defects per KLOC for the current version of the climate modelling software under analysis. Quite a difference – two orders of magnitude, for those counting.

Based on Hatton’s findings of the defectiveness of scientific software, Ince says:

This is hugely worrying when you realise that just one error — just one — will usually invalidate a computer program.

This is a profoundly strange thing for a Professor of Computing to say. It’s certainly true that one single error can invalidate a computer program, but whether it usually does this is not so obvious. There is no theory to support this proclamation, nor any empirical study (at least, none cited). Non-scientific programs are littered with bugs, and yet they are not useless. Easterbrook and Johns report that many defects, before being fixed, had been “treated as acceptable model imperfections in previous releases”, clearly not the sort of defects that would invalidate the model. After all, models never correspond perfectly to empirical observations anyway, especially in such complex systems as climate.
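To make that concrete, here is a contrived sketch of my own (in Python, and not drawn from any real climate model or from Easterbrook and Johns): a program containing exactly one defect whose output is nevertheless still usable.

```python
# A toy "model" containing exactly one defect, to illustrate that a single
# error need not invalidate a program. The smoothing routine drops the last
# element of each window (an off-by-one), so its output is slightly biased -
# but still within the sort of tolerance a model like this would claim.

observations = [14.0, 14.2, 14.1, 14.3, 14.5, 14.4, 14.6, 14.8]

def smoothed(series, window=5):
    return [sum(series[i:i + window - 1]) / (window - 1)  # off-by-one defect
            for i in range(len(series) - window + 1)]

correct = [sum(observations[i:i + 5]) / 5
           for i in range(len(observations) - 4)]

worst = max(abs(a - b) for a, b in zip(smoothed(observations), correct))
print(f"worst error: {worst:.3f} degrees")  # 0.070 - hardly "invalidated"
```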

Ince claims, as a running theme, that:

Many climate scientists have refused to publish their computer programs.

His only example of this is Mann, who by Ince’s own admission did eventually release his code. The climate modelling software examined by Easterbrook and Johns is available under licence to other researchers, and RealClimate lists several more publicly-available climate modelling programs. I am left wondering what Ince is actually complaining about.

Finally, Ince seems to have a rather brutal view of what constitutes acceptable scientific behaviour:

So, if you are publishing research articles that use computer programs, if you want to claim that you are engaging in science, the programs are in your possession and you will not release them then I would not regard you as a scientist; I would also regard any papers based on the software as null and void.

This is quite a militant position, and does not sound like a scientist speaking. If Ince himself is to be believed (in that published climate research is often based on unreleased code), then the reviewers who recommended those papers for publication clearly didn’t think as Ince does – that the code must be released.

Ince may be convinced that scientific software must be publicly-auditable. However, scientific validity ultimately derives from methodological rigour and the reproducibility of results, not from the availability of source code. The latter may be a good idea, but it is not necessary in order to ensure confidence in the science. Other independent researchers should be able to confirm or contradict your results without requiring your source code, because you should have explained all the important details in published papers. (In the event that your results are not reproducible due to a software defect, releasing the source code may help to pinpoint the problem, but that’s after the problem has been noticed.)

There was a time before computing power was widely available, when model calculations were evaluated manually. How on Earth did science cope back then, when there was no software to release?

Peer review

I’ve stumbled across yet another “ClimateGate” article (by way of James Delingpole), this one going right for the jugular of science: peer review. The author is journalist Patrick Courrielche, who I hadn’t come across until now.

Courrielche argues that peer review is kaput and is being replaced by what he calls “peer-to-peer review”, an idea that brings to mind community efforts like Wikipedia. This has apparently been catalysed by “ClimateGate”, an event portrayed by the denialist community as something akin to the Coming of the Messiah.

Courrielche asserts that peer review is an old system of control imposed by the “gatekeepers” of the “establishment”, while peer-to-peer review is a new system gifted to us by the “undermedia”. Courrielche has very little time for nuance in the construction of this moralistic dichotomy, and clearly very little idea why peer review exists in the first place.

It should be noted from the start (and many an academic will agree) that peer review is a flawed system. It’s well known that worthwhile papers are rejected from reputable journals from time to time, while the less reputable journals have the opposite problem. Nevertheless, there is a widely-recognised need for at least some form of review system to find any weaknesses in papers before publication. It seems obvious that the people best placed to review any given piece of work are those working in the same field. Peer review acts both as a filter and a means of providing feedback (a sort of last-minute collaborative effort). The reviewers are not some sort of closed secret society bent on stamping their authority on science, as Courrielche seems to imply. Anyone working in the field can be invited by one relevant journal or another to review a paper, and it’s in a journal’s best interests to select the best qualified reviewers.

Courrielche sticks the word “review” on the end of “peer-to-peer” so that it can appear to fulfil this function. The premise seems to be that hordes of laypeople are just as good at reviewing a given piece of work as, if not better than, those who work in the relevant field. This is really just thinly-veiled anti-intellectualism. How can a layperson possibly know whether the author of a technical paper has used the appropriate statistical or methodological techniques, or considered previous empirical/theoretical results, or drawn appropriate conclusions?

That’s why papers are peer-reviewed. Reputable journals get their reputation from the high quality (i.e. usefulness and scientific rigour) of the work presented therein, as determined by experts in the field. Barring the very occasional lapse of judgment, the flat earth society, the intelligent design movement, the climate change denialists, and any number of other weird and wonderful parties are prevented from publishing their dogma in Science, Nature and other leading journals. There’s no rule forbidding such publication; that’s just what happens when you apply consistent standards in the pursuit of knowledge. Ideologues are frequently given an easy ride in politics, and it clearly offends them that science is not so forgiving.

However, Courrielche appears to be more interested in describing how the “undermedia” is up against some sort of vast government-sponsored conspiracy to hide the truth. His tone is one of rebellion, of exposing the information to the media, and doing battle with dark forces trying to prevent its disclosure. Even if such a paranoid fantasy were true, it has nothing to do with peer review. Peer review is not a means of quarantining information from the public, but simply a way of deciding the credibility of that information. In reality, the information is already out there, and in fact it’s always been out there (just not necessarily in the mass media). The problem is not the lack of information, but the prevalence of disinformation. We are all free to ignore the information vetted by the peer review system, but we don’t because it’s intrinsically more trustworthy than anything else we have.

Courrielche makes mention of the “connectedness” of the climate scientists, as if mere scientific collaboration is to be regarded with deep suspicion. Would he prefer that scientists work in isolation, without communicating? This is quite blatantly hypocritical, because his peer-to-peer review system is based on connectedness.

Well, sort of. I also suspect that most of the many and varied denialist memes floating around have not resulted from some sort of collective intelligence of the masses, but from a few undeserving individuals exalted as high priests by certain ideologically-driven journalists. There is nothing “peer-to-peer” about that at all.

From my point of view, what Courrielche describes as the “fierce scrutiny of the peer-to-peer network” is more like ignorant nitpicking and groupthink. There are no standards for rigour or even plausibility in many of the discussions that occur in the comments sections of blog sites. Free speech is often held sacrosanct, but free speech is not science.

The denialists are up against much more than a government conspiracy. They’re up against reality itself.

Admit me to the conspiracy

Deltoid takes a look at a piece of code taken from the Climate Research Unit (CRU) that apparently has the denialists salivating. Buried therein is the following comment: “Apply a VERY ARTIFICAL [sic] correction for decline!!” Are you convinced yet of the global leftist socialist global warming alarmist conspiracy?! I certainly am.

I’d also like to apply for membership. You see, trawling through my own code for handling experimental data (from September 2008), I’ve re-discovered my own comment: “Artificially extends a data set by a given amount”. Indeed, I appear to have written two entire functions to concoct artificial data*, clearly in nefarious support of the communist agenda. I therefore submit myself as a candidate for the conspiracy. The PhD is only a ruse, after all. Being a member of the Conspiracy is the only qualification that really counts in academia.

* I’m not making this up – I really do have such functions. However, lest you become concerned about the quality of my research, this artificial data was merely used to test the behaviour of the rest of my code. It was certainly not used to generate actual results. I can sympathise with the researcher(s) who leave such untidy snippets of code lying around – and I’m a software engineer who should know better!
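For the curious, a test helper of the sort described above might look something like the sketch below. This is a hypothetical reconstruction, not my actual research code; the name extend_data_set, its parameters and the noise model are all invented for illustration.

```python
import random

def extend_data_set(data, amount, jitter=0.05):
    # Artificially extends a data set by a given amount - FOR TESTING ONLY.
    # New points echo the last real value plus a little noise, so that the
    # rest of the analysis pipeline can be exercised on inputs of arbitrary
    # length without waiting for more experimental runs.
    last = data[-1]
    return data + [last + random.uniform(-jitter, jitter)
                   for _ in range(amount)]

# Drive the downstream code with a longer series than any real run produced.
sample = extend_data_set([1.02, 0.98, 1.01], amount=100)
assert len(sample) == 103
```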

Climate: ‘mission accomplished’

I read with ever growing fascination the comments that continue to flood into climate-related blogs. Deltoid has collected a few truly astounding ones. I’ve also discovered the UK’s very own James Delingpole, who’s a riot. As mentioned in my previous post, there seems to be a veritable army of those convinced that the climate sceptics are not merely right (and righteous), but that this time they’ve actually, truly won. This, based on an assortment of stolen emails.

In the long run, reading these comments is probably a recipe for the development of psychological issues, but for the moment it’s like a spectator sport. While ignorance regarding the climate change science is merely frustrating, the euphoric surety of ultimate victory that so many commenters share is hilarious. As a general rule, I don’t like laughing at other people, but when so many start running at full pelt toward the cliff edge, convinced that it is they who are to inherit the Earth, I cannot help but anticipate schadenfreude. I can’t do anything about it, after all, so why not laugh?

(Doubtless, to someone not familiar with the issue, I myself might be sounding a little overconfident. To assuage such doubts, you would do well to remember that the reality of climate change is propounded by the world’s scientific community, which is constantly engaged in critical self-examination. By contrast, the opponents of the Intergovernmental Panel on Climate Change (IPCC) have very few actual scientific results to draw from in support of their arguments. Having long since been consigned to scientific irrelevance, they resort to reading other people’s email in search of conspiracies.)

But why are so many stampeding over the edge all at once? My theory is that so little motivation or desire exists for critical thought that commenters feed on each other ad infinitum. They come to believe, for instance, that there has indeed been widespread scientific fraud, based on existing angry comments, which themselves were derived from still older comments, etc. Eventually we find ourselves back at the source of the allegations – the use of the phrase “hide the decline” in one of the emails (which in reality has a much more innocent explanation*). The newer commenters aren’t aware that these three little words are the entire basis of the supposed fraud. They think their arguments are much more solidly grounded, simply because everyone is talking about it.

The other piece of the puzzle is the ideology of those who spread the word in the first place. Opposition to action on climate change – as put forth by Andrew Bolt, and of course many others around the world – starts to make some kind of twisted sense if you accept the following fact. There are people out there for whom the greatest and most insidious evil in the world is not war, poverty, disease, starvation or tyranny, but simply the fact that you are required to help fund public services. This is their antichrist – taxation – the worst imaginable horror that the universe could bestow on us. My intuition fails me here, but however untenable the premise, the logic thereafter seems to hold. It is an article of faith that none of the consequences of climate change can outweigh the evil of taxation. Indeed the proposition that we should deal with climate change by introducing emissions trading schemes – seen by some as a form of tax – must place the issue firmly in the socialists-taking-over-the-world basket.

I sense that this deeply-held belief serves to justify intellectual dishonesty in the minds of climate change deniers. This might be analogous to the obligation felt by creationist pundits to argue against evolution, not because they feel the evidence is in their favour (as their followers do), but because they perceive the science to be a moral challenge to their beliefs.

* The “hide the decline” hysteria is one of my favourite pieces, actually. I shall attempt to summarise, based on some very patient explanations by Gavin Schmidt, a climate scientist at NASA. The “decline” refers to the “divergence problem”, where temperature reconstructions based on tree-ring data show a spurious decline after about 1960. This needs to be “hidden” simply because it’s not real. Several important points to note are:

  1. The comment cannot possibly be connected to the fabled “cooling” of temperatures this decade, since the email was sent in 1999.
  2. The collection of tree-ring data is a relatively peripheral issue to climate change, since other data sources are available (including actual temperature measurements).
  3. We know that the tree-ring data is reasonably accurate before 1960 and inaccurate after 1960, because we can compare it to other sources of data. Actual temperature measurements, for instance, certainly do not show a “decline”. The reasons for the divergence are the subject of debate, but may be a result of climate change itself.

Update (7 December 2009) – A couple more points, for the sake of completeness:

  1. Nothing has actually been “hidden”, in the lay sense anyway. All the data is out in the open, and the problem was discussed in the peer-reviewed literature over a decade ago.
  2. According to the email (which you can Google for yourself), the only action taken was the addition of real temperature values to the data – the sources of these values are even described in the email. (See the sketch below.)
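For concreteness, the substitution described in point 2 amounts to something like the following sketch. The numbers are made up and the code is mine, not the CRU’s; it merely illustrates the principle.

```python
# Schematic only - made-up numbers. Truncate the proxy series where it is
# known to diverge, and append instrumental measurements in its place.

proxy_reconstruction = {1940: 13.9, 1950: 14.0, 1960: 14.1,
                        1970: 13.8, 1980: 13.6}  # spurious post-1960 "decline"
instrumental_record = {1970: 14.2, 1980: 14.4}   # actual thermometer readings

DIVERGENCE_YEAR = 1960

combined = {year: temp for year, temp in proxy_reconstruction.items()
            if year <= DIVERGENCE_YEAR}
combined.update(instrumental_record)             # the "addition of real values"

print(sorted(combined.items()))
# [(1940, 13.9), (1950, 14.0), (1960, 14.1), (1970, 14.2), (1980, 14.4)]
```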

Climate conspiratology

Climate denialism has taken a turn for the worse. I say this with great trepidation, of course, because it was never an especially pretty sight to begin with.

A substantial number of private emails from the Climatic Research Unit (CRU) at the University of East Anglia have been retrieved and published online without permission*. One hardly needs to read between the lines: the hackers were presumably looking for the “smoking gun” that would prove some kind of conspiracy on the part of climatologists. RealClimate is methodically refuting all the miscellaneous scraps of hysteria that seem to have been whipped up over this.

However, observe some of the comments at the bottom of this blog post and you’ll get a feel for the way this incident is being perceived. Many of the denialist fraternity (and it’s still early days) have apparently decided that this is it; that this is the clincher. They feel confident that it’s all over, that even the dreaded “mainstream media” (MSM) can’t ignore it, and generally that the tide of history has swung in their favour. (This is the result of some interpretation on my part.)

It’s not the hubris that bothers me particularly, but where this is leading the public debate. The IPCC, the world’s other scientific institutions and science in general will all carry on as if nothing had happened, because of course in reality it hasn’t. The notion of a climatologist conspiracy is extraordinarily bizarre and improbable, and as such would require an extraordinary body of evidence to demonstrate its existence. If there were to be a “smoking gun”, it would need to be strong evidence of the systematic fabrication of evidence on a scale that would beggar belief. It would also beggar belief that such a venture could have been kept secret up until now, considering how widespread it would need to be. This is the same problem that most conspiracy theories face. Nothing remotely approaching the requisite level of evidence has been discussed so far, and yet there is a sense in some quarters that the conspiracy has been cracked wide open.

What happens when the denialists realise that nothing is going to change, having already convinced themselves that “The Truth” has been well and truly exposed? Will they then perceive an even greater global conspiracy, with the power to make the world ignore what is sitting in plain sight (as occurs in Orwell’s Nineteen Eighty-Four)? How far down the rabbit hole will they go?

More importantly, how will the world’s politicians react, particularly with Copenhagen around the corner? Will they see this stunt for what it is and ignore it, or will they perceive some increased political risk in taking action? Or will more be sucked into believing the conspiracy themselves?

* I haven’t downloaded the emails for myself, because frankly I don’t believe I have either a legal or moral right to do so.

Also note: as you’ll be aware, I’ve not been keeping up with my regular blogging, owing to other commitments. I hope to become more prolific with my postings in the future, but that may be several months away.

Software defect costs

In my pursuit of software engineering data, I’ve recently been poring over a 2002 report to the US Government on the annual costs of software defects. The report is entitled “The Economic Impacts of Inadequate Infrastructure for Software Testing”. Ultimately, it estimates that software defects cost the US economy $59.5 billion every year.

Modelling such economic impacts is an incredibly complex task, and I haven’t read most of the report’s 309 pages (because much of it isn’t immediately relevant to my work). However, since trying to use some of the report’s data for my own purposes, certain things have been bothering me.

For instance, the following (taken from the report):

[Table from the report, summarising the consequences to users of software defects in the automotive and aerospace industries: average numbers of “bugs/errors” per company, and costs per bug.]

This table summarises the consequences to users of software defects (where “users” are companies in the automotive and aerospace industries).

Strictly speaking, it shouldn’t even be a table. The right-most column serves no purpose, and what remains is a collection of disparate pieces of information. There is nothing inherently “tabular” about the data being presented. Admittedly, for someone skimming through the document, the data is much easier to spot in table form than as plain text.

The last number piqued my curiosity, and my frustration (since I need to use it). What kind of person considers a $4 million loss to be the result of a “minor” error? This seems to be well in excess of the cost of a “major” error. If we multiply it by the average number of minor errors for each company (70.2) we arrive at the ludicrous figure of $282 million. For minor errors. Per company. Each year.
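For anyone who wants to check the arithmetic (using the table’s figures as I read them):

```python
# Back-of-envelope check of the report's own numbers, as I read them.
# (The report's unrounded cost per bug presumably accounts for the small
# difference from the $282 million quoted above.)
minor_errors_per_company = 70.2      # average per company, per year
cost_per_minor_error = 4_000_000     # dollars, "per bug" according to the table
total = minor_errors_per_company * cost_per_minor_error
print(f"${total:,.0f} per company per year")  # -> $280,800,000
```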

If the $4 million figure is really the total cost of minor errors – which would place it more within the bounds of plausibility – why does it say “Costs per bug”?

The report includes a similar table for the financial services sector. There, the cost per minor error is apparently a mere $3,292.90, less than a thousandth of that in the automotive and aerospace industries. However, there the cost of major errors is similarly much lower, and still fails to exceed the cost of minor errors. Apparently.

What’s more, the report seems to be very casual about its use of the words “bug” and “error”, and uses them interchangeably (as you can see in the above table). The term “bug” is roughly equivalent to “defect”. “Error” has a somewhat different meaning in software testing. Different definitions for these terms abound, but the report provides no definitions of its own (that I’ve found, anyway). This may be a moot point, because none of these terms accurately describe what the numbers are actually referring to – “failures”.

A failure is the event in which the software does something it isn’t supposed to do, or fails to do something it should. A defect, bug or fault is generally the underlying imperfection in the software that causes a failure. The distinction is important, because a single defect can result in an ongoing sequence of failures. The cost of a defect is the cost of all failures attributable to that defect, put together, as well as any costs associated with finding and removing it.
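The distinction is easy to demonstrate in code. In this toy example of my own (not taken from the report), a single defect produces a separate failure on every call that exercises it:

```python
# One defect, many failures. The single flaw below is an error in the
# divisor; every call that returns a wrong answer is a distinct failure
# event, each with its own cost to whoever relied on the result.

def average(values):
    return sum(values) / (len(values) - 1)  # the defect: should be len(values)

failures = 0
for batch in ([1, 2, 3], [10, 20], [4, 4, 4, 4]):
    if average(batch) != sum(batch) / len(batch):
        failures += 1                        # each wrong answer is a failure

print(f"1 defect, {failures} failures")      # -> 1 defect, 3 failures
```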

The casual use of the terms “bug” and “error” extends to the survey instrument – the questionnaire through which data was obtained – and this is where the real trouble lies. Here, potential respondents are asked about bugs, errors and failures with no suggestion of any difference in the meanings of those terms. It is not clear what interpretation a respondent would have taken. Failures are more visible than defects, but if you use a piece of buggy software for long enough, you will take note of the defects so that you can avoid them.

I’m not sure what effect this has on the final estimate given by the report, and I’m not suggesting that the $59.5 billion figure is substantially inaccurate. However, it worries me that such a comprehensive report on software testing is not more rigorous in its terminology and more careful in its data collection.

The colloquium

An “official communication” from early June demanded that all Engineering and Computing postgraduate students take part in the Curtin Engineering & Computing Research Colloquium. Those who didn’t might be placed on “conditional status”, the message warned.

A slightly rebellious instinct led me to think of ways to obey the letter but not the spirit of this new requirement. Particularly, the fact that previous colloquiums have been published online introduced some interesting possibilities:

  • a randomly-generated talk;
  • a discussion of some inventively embarrassing new kind of pseudo-science/quackery; or
  • the recitation of a poem.

In the end I yielded, and on the day (August 25) I gave a reasonably serious and possibly even somewhat comprehensible talk on a controlled experiment I’d conducted on defect detection in software inspections.

A while afterwards, I received in the mail a certificate of participation, certifying that I had indeed given the talk I had given. It felt a little awkward. Giving a 15-minute talk isn’t something I’d have thought deserving of a certificate. It might be useful for proving that I’ve done it, since it now appears to be a course requirement, but a simple note would have sufficed.

Interestingly, I later received another certificate, identical except that my thesis title had been substituted for the actual title of my talk. In essence, I now have a piece of paper, signed personally by the Dean of Engineering, certifying that I’ve given a talk that never happened.

Freedom of obfuscation

I have regrettably discovered that my old faithful source of technology news (which I haven’t paid much attention to in recent years) is engaging in one of those enlightening let’s-all-laugh-at-the-scientists climate change denialism campaigns.

This article in The Register caught my attention today, and made me despair a little. Andrew Orlowski reports light-heartedly on a freedom of information (FoI) crusade by Steve McIntyre, who runs the Climate Audit website and who is frequently cited, quite falsely, as having discredited the hockey stick graph (the one showing global temperatures over the last 1000 years with a dramatic spike at the end). McIntyre does have a background in mathematics, which at least sets him apart from the likes of Viscount Monckton and other more political protagonists, but he certainly isn’t a climate scientist.

The issue at stake is the availability of raw temperature data, as opposed to the aggregated, processed datasets put together by the Climatic Research Unit (CRU), of which Phil Jones is the director. This Nature blog post sheds more light on the nature of the dispute between McIntyre and Jones; more than you will be exposed to by reading The Register’s article at any rate.

McIntyre, unlike his hangers-on, seems to define his objective very precisely: the free availability of the raw temperature data. To this end, McIntyre appears to have encouraged (or possibly orchestrated) a barrage of FoI requests to Jones, who Orlowski describes as an “activist-scientist” (a term I would consider quite an insult).

Orlowski’s article appears to have been informed by little more than a perusal of McIntyre’s blog. He must have left his journalistic scepticism in his other trousers.

First, Orlowski claims that the CRU has “lost or destroyed all the original data”. This is both factually incorrect and highly misleading, even if you accept McIntyre’s version of events. The CRU says it faced storage constraints in the 1980s, meaning that some of the older original data could not be preserved. This is hardly implausible – scientists still face storage issues today, and will still face them decades from now, McIntyre’s personal incredulity notwithstanding. Furthermore, the CRU doesn’t own the original data, and says that due to agreements with those who do, it cannot release what raw data it does have.

Besides – and this is what I find most astonishing – Orlowski himself notes two things:

  1. McIntyre already has the raw data. This apparently occurred through some sort of FTP security lapse at the CRU, which was then fixed in what McIntyre describes – in excruciating detail, as if the tanks were rolling into Washington DC – as an “unprecedented data purge”.
  2. McIntyre “doesn’t expect any significant surprises after analysing” it.

That would seem to indicate that, through all the bluster, there is actually not even the pretence here that anything is wrong with the IPCC’s climate projections. It’s presented (by both Orlowski and McIntyre) in a fashion that suggests some sort of cover-up or conspiracy, and so that’s what some readers will doubtless believe. In fact, such an allegation has been downplayed by the one person apparently best placed to make it.

The free availability of data is, I believe, a worthy cause – let’s not make light of that. According to the Nature blog post, Jones wants this as well. However, McIntyre’s own blog makes his FoI campaign look more like a vindictive assault than a fight for principles. Orlowski’s article looks more like an Andrew Bolt post than an attempt at journalism.