Even when we have only a hazy view of reality, it is human nature to try to make sense of the world around us. At some fundamental level the human brain operates as a pattern recognition machine. In general this is an excellent quality, as it allows us to make decisions based on incomplete or noisy data, such as noticing the camouflaged foe hidden in dense undergrowth, or realising that there are similarities in the shape of coastlines on opposite sides of the South Atlantic Ocean. This ability helps keep us alive as well as spurring us to investigate new ideas such as plate tectonics.
But there are times when it leads us astray. Children see scary monsters in floral patterned wallpaper, and conspiracy theorists see faces in the shadows of hills on Mars. The urge to find a pattern where none exists is strong: very strong. We want to know, to understand – not be left in ignorance. And this instinct to find answers in all circumstances is, to my mind, the greatest single motivator behind the failure of IPCC-backed ‘consensus’ climate science. There is simply not enough good quality data to work with.
Why do I say this? To start with, I have personal experience that some of the raw data used in climatology is nowhere near as accurate as is often claimed. Secondly, the way in which this data is turned into predictions of dangerous global warming relies entirely on computer models, and these models simply don’t work. And finally, the thing that is measured and predicted (i.e. the global temperature) is not the right thing to be measuring and predicting in the first place. Temperature alone doesn’t tell us what has happened in the past, and doesn’t govern what will happen in the future. We need other information that simply isn’t available at the moment.
How accurate is the data?
There has been a lot of discussion at blogs such as Climate Etc. and WUWT recently about the accuracy of various methods of analysing global temperatures. A few years ago I would have said that there are major flaws in all of these methods and been very suspicious of their results. However, the more I look at them, the more convinced I become that they do as good a job as is reasonably possible given the fragmentary and changing nature of the data they process.
However, there is an underlying problem that no amount of data processing can cure: the accuracy of the raw data that is gathered. A lot of work has been done on the problems that arise with badly sited surface stations, or ones that are moved to different locations. My own concern is more with stations in developing countries, which I have visited personally. I have seen some very bad examples of poor maintenance, and the use of uncalibrated thermometers. These failings have been systemic: that is, they were caused by problems that would be repeated at other stations, not just the ones I have seen. To suggest that historical temperatures can be known to within fractions of a degree over large swathes of the Earth’s landmass is simply incorrect.
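The distinction matters because random and systematic errors behave very differently when readings are combined. A small simulation makes the point (the temperatures, noise level and bias below are invented purely for illustration, not real station data): averaging washes out zero-mean random noise, but a calibration bias shared across stations survives averaging untouched.

```python
import random

random.seed(42)
TRUE_TEMP = 15.0       # invented "true" temperature, degrees C
N_STATIONS = 10_000

# Case 1: purely random errors – zero-mean noise of about +/- 0.5 C
random_only = [TRUE_TEMP + random.gauss(0, 0.5) for _ in range(N_STATIONS)]

# Case 2: the same noise plus a shared systematic bias of +0.3 C,
# e.g. a batch of thermometers checked against the same faulty reference
BIAS = 0.3
systematic = [TRUE_TEMP + BIAS + random.gauss(0, 0.5) for _ in range(N_STATIONS)]

mean_random = sum(random_only) / N_STATIONS
mean_systematic = sum(systematic) / N_STATIONS

print(f"random errors only : mean = {mean_random:.3f} C")     # close to 15.0
print(f"with shared bias   : mean = {mean_systematic:.3f} C") # close to 15.3
```

No amount of extra stations fixes the second case: the averaged record simply reproduces the bias.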
I do not think that the reconstructions we have of 20th century (or older) temperatures are totally wrong. I accept the world was warmer at the end of the 20th century than the beginning, as part of the continuing recovery from the 18th century ‘Little Ice Age’, a recovery which began long before man started adding CO2 to the atmosphere in any real quantity. My point is that if we cannot accurately know what the global temperature has been in the past, we cannot know exactly how it compares to more recent temperatures, and we cannot use that information to generate accurate predictions of what the climate will do next.
The models! The models!
So where do the supposedly accurate predictions of future climate come from? The ones used by the IPCC (and that find their way into high-profile media reports) come from computer simulations called General Circulation Models (GCMs). There are a number of different GCMs in use at various academic institutions, but they all share the same basic approach.
Supporters of the IPCC consensus repeatedly say that the GCM predictions are based on thoroughly understood laws of physics, laws that have been exhaustively tested and are used in the everyday technology all around us. If the physics were wrong, your mobile phone would not work.
This is a very effective argument, but things aren’t that straightforward. It is true that the GCMs are built from the ground up using basic laws of physics. But the problem is akin to giving a young child the same ingredients as a Michelin-starred chef: you have to know how to combine them, and how they interact, to get an edible meal. If computer modelling really were that simple, it would be possible to create artificial intelligence by writing software based on the rules of biochemistry alone.
The GCMs are not just programmed with basic laws of physics, but also a plethora of parameters that are subject to all sorts of assumptions and guesswork. The output of the model is then compared to historical data, and the parameters tuned until the model can recreate the past. The hope is that if the model is then allowed to run on beyond the present day, the output is an accurate prediction of what the climate will do next.
An obvious problem with this method is that if the historical data is inaccurate (which it is), then the model will have been tuned incorrectly and the predictions it gives will be wrong. The behaviour of complex systems is often extremely sensitive to the starting conditions, so small errors become magnified very quickly.

Another problem with the GCMs is that, in theory, once they have been programmed a realistic simulation of the Earth’s climate should emerge, complete with major features such as the correct circulation patterns of ocean currents, or the way the overall albedo (reflectivity) of the northern and southern hemispheres is somehow kept in balance through a process nobody yet understands. But the GCMs are very bad at reproducing processes that haven’t been directly programmed into them. They have also proved unable to predict what will happen to the Earth’s temperature, which is what they were designed to do in the first place.
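The sensitivity point is easy to demonstrate with the logistic map, a standard textbook example of chaotic behaviour (it is not a climate model, just the simplest system that shows the effect): two trajectories that start one millionth apart soon bear no resemblance to each other.

```python
def logistic(x, r=3.9):
    """One step of the logistic map; chaotic for r = 3.9."""
    return r * x * (1 - x)

# Two starting points differing by one part in a million
a, b = 0.400000, 0.400001

for step in range(60):
    a, b = logistic(a), logistic(b)
    if abs(a - b) > 0.5:
        print(f"trajectories diverged by more than 0.5 after {step + 1} steps")
        break
```

An error in the sixth decimal place of the input destroys the forecast within a few dozen steps, which is why tuning against an inaccurate past is so dangerous.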
The IPCC attempts to skirt this problem by averaging the output of different GCMs, as if taking the average of a collection of wrong answers could somehow give you the right answer. It can’t.
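The averaging point is just arithmetic: if every ensemble member is biased in the same direction, the multi-model mean inherits that bias rather than cancelling it. The numbers below are invented purely to illustrate.

```python
true_value = 1.5  # invented "true" quantity the models try to estimate

# Five hypothetical model outputs, all biased high by varying amounts
model_outputs = [2.9, 3.4, 2.6, 3.1, 3.0]

ensemble_mean = sum(model_outputs) / len(model_outputs)
print(f"ensemble mean = {ensemble_mean:.2f}, true value = {true_value}")
# The mean (3.00) sits at the centre of the models, not near the truth:
# averaging reduces the scatter *between* models, not their shared bias.
```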
Measuring the wrong thing?
We have reached the point where we have models that are using inaccurate data, and that do not do what they were designed to do. It doesn’t look good, does it? Well, I have worse news. We aren’t even measuring the right thing to begin with. The IPCC reports, and the associated media coverage, all deal in measuring and predicting global temperature, but temperature alone doesn’t tell us what is actually going on.
It is a long time since I last did thermodynamics, but a very basic principle I learned is that the energy in a water vapour-based system (such as the Earth’s atmosphere) isn’t measured using temperature, but a parameter called enthalpy. In the atmosphere enthalpy depends not only on temperature but also humidity. As the amount of energy (from the sun) in the atmosphere increases, it can either increase the temperature, or the humidity, or both, depending on local conditions. A volume of air that contains a lot of water vapour contains more energy than the same volume of dry air at the same temperature, and measuring the enthalpy would tell us that.
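A standard psychrometric approximation makes this concrete. The constants are textbook values for dry air and water vapour; the 30 °C temperature and the two humidity figures are chosen only for illustration.

```python
def moist_air_enthalpy(temp_c, mixing_ratio):
    """Specific enthalpy of moist air, kJ per kg of dry air.

    Standard psychrometric approximation:
        h = cp_dry * T + w * (L_vap + cp_vapour * T)
    with T in degrees Celsius and w in kg water vapour per kg dry air.
    """
    CP_DRY = 1.006     # kJ/(kg K), specific heat of dry air
    CP_VAPOUR = 1.86   # kJ/(kg K), specific heat of water vapour
    L_VAP = 2501.0     # kJ/kg, latent heat of vaporisation at 0 C
    return CP_DRY * temp_c + mixing_ratio * (L_VAP + CP_VAPOUR * temp_c)

# Two air parcels at the same 30 C thermometer reading
dry = moist_air_enthalpy(30.0, 0.005)    # fairly dry continental air
humid = moist_air_enthalpy(30.0, 0.020)  # humid tropical air

print(f"dry parcel:   {dry:.1f} kJ/kg")
print(f"humid parcel: {humid:.1f} kJ/kg")
```

The humid parcel carries roughly twice the energy of the dry one at an identical temperature, which is exactly the information a thermometer record throws away.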
In fact, to see the complete picture we should also include a measurement of wind speed, because energy in the atmosphere can also go into moving the air around, for example in forming a tropical hurricane. There have been some attempts to quantify changes in the climate based upon energy rather than simply temperature, such as Fall et al. and Stephenson et al., but they are hampered by a lack of the required humidity and wind speed data. We simply don’t have the data to work out what we need to know.
As an aside, it should be noted that Stephenson et al. state in their paper that “Presenting heat content as the primary metric for global warming could lead lay readers to erroneously perceive Australia as cooling – after all, its heat (content) is decreasing. Our concern is not just nomenclature. Heat content by any other name if used as a global warming metric has the potential to imply cooling even in places with increasing temperature simply because the location is becoming dryer.” In other words, if the answer in the correct metric (energy) isn’t what they want, they’ll stick to the old temperature measurements, thank you very much. It beggars belief that such an absurd statement could be published in a scientific paper, but such is the pass to which climatology has come.
But I just want to do some science…
So why have we ended up where we are? My guess is that the temptation to find patterns in the noise, to just do some science, has proved too strong. Even if the source data is inaccurate, some people just want to go ahead and use it, hoping that the answers they get will still be useful, or at least justify their preconceptions. Hence the global warming monster is found in the floral wallpaper.
When these answers are seized upon for political reasons, and trumpeted to the skies as vital for the future of humanity, then more research money flows, and so it is natural to keep on refining the previous work. In climatology it can take a long time to be proved wrong, and the whole IPCC-backed global warming jamboree was solidly in place before temperatures stopped rising around 1998. The problem is that the science of global warming has turned into an industry which takes inaccurate temperature data, runs it through massively complex computer models that simply don’t work, and spits out results that in any case don’t tell us what we need to know. A basic understanding of human nature tells us that this industry isn’t going to restructure itself overnight.
Judith Curry absolutely hit the nail on the head very recently when she wrote, “Most of climate science is in ‘shut up and calculate’ mode. This is a very dangerous place to be given the substantial uncertainties, ignorance and areas of disagreement, not to mention the problems/failures of climate models.”