April 27, 2009
From: Jennifer Marohasy
Pondering Problems with Computer Climate Models: A Note from Michael Hammer
SCIENTISTS have put a huge amount of effort into generating computer models of our climate system. These models are very sophisticated and complex, and their outputs suggest that increasing carbon dioxide will lead to significant temperature rises for our planet. Indeed, the model outputs now represent the main evidence in support of the anthropogenic (man-induced) global warming hypothesis. Why shouldn't we take careful note of these results?
Computers are a tool that allows many calculations to be done extremely rapidly. If we can describe a system we wish to explore via a set of interrelated equations, we can get a computer to repeatedly solve these equations, with a small assumed time increment between each set of solutions, and do it quickly. The output describes the future as predicted from the input equations. This is a computer model. It is important to remember that the model output is completely and exclusively determined by the information encapsulated in the input equations. The computer contributes no checking, no additional information and no greater certainty in the output. It only contributes computational speed.
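To make the point concrete, here is a minimal illustrative sketch in Python of what "a computer model" means in this sense. The toy energy-balance equation and all of its numbers are invented for illustration only; they are not drawn from any actual climate model.

```python
# A toy "computer model": one invented energy-balance equation stepped forward
# in small time increments. Purely illustrative; not any published climate model.
#   dT/dt = (forcing - feedback * T) / heat_capacity

def run_model(forcing, feedback, heat_capacity, t_end, dt=0.01, T0=0.0):
    """Repeatedly solve the input equation with a small assumed time increment."""
    T = T0
    for _ in range(int(t_end / dt)):
        dT_dt = (forcing - feedback * T) / heat_capacity  # the input equation
        T += dT_dt * dt                                    # one explicit Euler step
    return T

# The output is completely determined by the equation and the numbers fed in;
# the computer only contributes speed. Change a coefficient and the "prediction"
# changes with it.
print(run_model(forcing=1.0, feedback=0.5, heat_capacity=10.0, t_end=200.0))
```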
In cases where the problem domain is comprehensively and accurately understood, it is possible to construct a set of equations which very closely mirrors reality, and in these cases model outputs can become quite reliable. Finite element analysis and electronic simulation packages are good examples. Even here, though, a seemingly trivial error in the input, such as a single small missing factor, can often completely invalidate the output, and while the models are used extensively for optimisation, the final optimised output is usually checked against reality before being accepted.
In cases where understanding of the problem is incomplete and uncertain, a large amount of information needs to be "estimated" or is simply omitted, and this very rapidly causes model outputs to become extremely questionable. Where models are used to explore a theory, very often the input data, whether deliberately or not, becomes heavily contaminated by the theory. For example, the researcher is convinced something has a very strong effect, so he makes the coefficient for that parameter large; or he is convinced another effect is negligible, so he omits it completely. Such a computer model is little more than the theory itself expressed in a different form.
Evidence is independent information which can be used to support or destroy a theory. The sort of model described above is not independent of the theory. Thus to use the output from this sort of computer model to check a theory is akin to using the theory to prove itself, an invalid circular argument. On the other hand, comparing the output of such a computer model with reality is equivalent to comparing theory predictions with reality and this can indeed be valid evidence.
It is crucial in evaluating climate models to determine the accuracy and comprehensiveness of the input data. We can only include factors we know about, and since we don't know anywhere near everything there is to know about climate, there must be many omissions. How extensive or significant these omissions are we have no way of knowing. We also know of factors that are understood but still not included in the models. For example, Henrik Svensmark and Eigil Friis-Christensen from the Danish National Space Centre have carried out research which claims to show a link between rising solar magnetic activity and rising temperature. In fact, their data shows exceptionally close correlation between cosmic ray flux and global temperature. This is not included in the model inputs. Even for the causative factors we do know about and include, there is still the question of how large the impact of each factor is. In many cases, for the climate models, the coefficients specified for each factor are little better than informed guesses. For example, at present modellers have assumed that clouds provide positive feedback, and that is what is embodied in their input data. Yet there is significant evidence that cloud feedback could be quite strongly negative. Thus not only is the magnitude of the coefficient in question but even the sign is in question.
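To see why the sign of a feedback coefficient matters so much, consider the standard linear feedback gain relation, sketched below in Python. The numbers are purely hypothetical and are not taken from any climate model.

```python
# Illustrative only: the standard linear feedback gain, showing why the *sign*
# of a feedback term matters so much. All numbers are hypothetical.

def total_response(direct_response, f):
    """Closed-loop response for a linear feedback fraction f (valid for f < 1)."""
    return direct_response / (1.0 - f)

direct = 1.0  # hypothetical direct (no-feedback) response, arbitrary units
for f in (0.4, 0.0, -0.4):  # positive, zero and negative feedback
    print(f"feedback {f:+.1f}: total response {total_response(direct, f):.2f}")

# Feedback +0.4 amplifies the response to ~1.67, feedback -0.4 damps it to ~0.71:
# flipping the sign flips the model from amplifying warming to suppressing it.
```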
So what is the impact of leaving out a factor from a model, or of having a wrong coefficient? It could be minor or it could be profound. Let's take a completely hypothetical example: imagine the Svensmark effect contributed, say, half the warming of the 20th century and carbon dioxide the other half. Leaving solar effects out would lead the model to predict half the observed warming. But scientists adjust their model against known data to get the best possible match. In this case that would lead to the coefficient for carbon dioxide being increased to twice the correct value, implying a very large impact from carbon dioxide. So adjusted, the model appears to match reality quite well over the limited period from 1975 to 1998. However, what if solar magnetic activity falls (as now appears to be happening)? In that case real-world data would show a rapid decrease in warming (or even cooling) while the model output continues to show strongly rising temperatures. In short, the two errors cease to cancel and instead add to each other.
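The hypothetical above can be turned into a small numerical sketch. Everything below, drivers, coefficients and periods, is invented purely to illustrate the mechanism of compensating errors; it is not a reconstruction of real temperature or carbon dioxide data.

```python
import numpy as np

# Calibration period: two hypothetical drivers rise together and each contributes
# half of the observed warming. Every number here is invented for illustration.
years_cal = np.arange(25)
solar_cal = 0.02 * years_cal                 # hypothetical solar/cosmic-ray driver
co2_cal = 0.02 * years_cal                   # hypothetical carbon dioxide driver
observed_cal = 0.5 * solar_cal + 0.5 * co2_cal

# Calibrate a model that only knows about CO2: a one-coefficient least-squares fit.
# Because the omitted driver moved in step with CO2, the fitted coefficient
# absorbs both effects and comes out near 1.0 instead of the true 0.5.
coef = np.linalg.lstsq(co2_cal[:, None], observed_cal, rcond=None)[0][0]
print(f"fitted CO2 coefficient: {coef:.2f} (true value in this toy world: 0.5)")

# Out-of-sample period: the solar driver now falls while CO2 keeps rising.
years_fut = np.arange(25, 40)
solar_fut = 0.02 * 25 - 0.03 * (years_fut - 25)
co2_fut = 0.02 * years_fut
reality = 0.5 * solar_fut + 0.5 * co2_fut    # flattens, then cools slightly
model_out = coef * co2_fut                   # keeps warming strongly
print("change in toy reality:", round(float(reality[-1] - reality[0]), 2))
print("change in model output:", round(float(model_out[-1] - model_out[0]), 2))
# The two errors no longer cancel; they now add to each other.
```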
Thus, errors or omissions in the input may be masked by compensating errors in the coefficients of known factors, especially over the data set used to calibrate the model. However, when the model is used to predict the future, and that prediction is later compared with observed reality, significant differences start to show up.
So can we ever really know if a model is a reasonable reflection of reality and thus a sound basis for predicting the future? The only real way is to use the model to predict the future and then with the passage of time compare that prediction with the real world. The longer the two agree the more faith one can reasonably put in the model. On the other hand if the two disagree substantially, model credibility is rapidly eroded. This process is often called model validation.
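As a sketch of what such validation amounts to in practice, the snippet below compares a fixed, previously issued forecast with later observations and summarises the mismatch. Both series are invented placeholders, not real forecasts or measurements.

```python
import numpy as np

# Validation as described above: hold a previously issued forecast fixed, wait,
# then compare it with what was actually observed. Both series are invented
# placeholders (temperature anomalies in degrees C), not real data.
predicted = np.array([0.20, 0.22, 0.24, 0.26, 0.28])  # the model's forecast
observed = np.array([0.21, 0.18, 0.19, 0.15, 0.16])   # later observations

errors = observed - predicted
print("mean error:", round(float(errors.mean()), 3))
print("root-mean-square error:", round(float(np.sqrt((errors ** 2).mean())), 3))
# The longer forecast and reality agree, the more faith the model earns;
# a persistent, growing gap erodes its credibility.
```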
Such model validation is, of course, being done all the time on the output from climate models. The trouble is that the model outputs depart very rapidly and significantly from reality and have basically failed all validation tests. This would lead one to have no faith in climate models, just as the skeptics claim. AGW advocates respond by claiming that the models are focussed on predicting climate, not weather. Weather is about short-term events and is highly chaotic in nature. Climate is about longer-term trends and is presumably less chaotic. Since the comparisons have been over relatively short time scales, the argument goes, that explains the mismatch.
This is an interesting claim. Nature obeys rigid, immutable laws and thus is predictable given sufficiently detailed knowledge. So the AGW claim really amounts to an admission that the models are incomplete, and that the lack of detail prevents prediction at the scale of days but may not prevent prediction at the scale of decades or centuries. Such claims are not entirely unreasonable: averaging eliminates the impact of many complex short-term variables, which are hard to quantify, and thereby removes much of the chaotic nature of weather. Climate can be considered as just long-term average weather, and maybe the models are good enough for reliable long-term average predictions. So it may not be reasonable to expect climate models to predict day-to-day weather. In that case, over what time scale does it become reasonable to test the model output?
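The averaging argument itself is easy to illustrate. In the sketch below, "weather" is modelled as a slow drift plus large random day-to-day noise; all numbers are invented and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Weather" here is an invented series: a slow underlying drift plus large
# chaotic day-to-day noise, over 30 years of daily values.
days = np.arange(365 * 30)
trend = 0.00005 * days
weather = trend + rng.normal(0.0, 3.0, days.size)

# Averaging each year wipes out most of the short-term scatter while leaving
# the slow drift intact.
annual = weather.reshape(30, 365).mean(axis=1)
annual_trend = trend.reshape(30, 365).mean(axis=1)
print("daily scatter about the trend:",
      round(float((weather - trend).std()), 2))
print("annual-mean scatter about the trend:",
      round(float((annual - annual_trend).std()), 3))
# A model hopeless at daily prediction could, in principle, still get the
# long-term average right -- which is exactly the claim being examined here.
```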
The northern hemisphere has just gone through a particularly cold, severe winter. I have not seen any suggestion that the models predicted it, and I am very confident I would have heard loud and clear if they had. Thus one has to suppose time scales as short as a year are still too short. Going further, the climate has been cooling now for 7 years, and was static for 4 years before that, yet the models predicted continuing strong warming over the same period. This is much more serious, for a number of reasons. Firstly, one would very reasonably expect a climate model to offer accurate predictions over a time scale of a decade. The fact that they clearly haven't begins to suggest they are simply wrong. This is particularly the case when one considers that the entire AGW hypothesis is based on only 23 years of warming (1975-1998). Before then the earth was cooling. If one decade is not long enough to differentiate between weather and climate, why would one suppose a bit over two decades was so dramatically better that it could form the foundation of the entire AGW hypothesis? Maybe the period from 1975 to 1998 was also just an example of random chaotic weather.
Another problem is that, according to the claims of AGW advocates, we are running out of time. We supposedly don't have several more decades to see if the model outputs are correct. Yet today's data gives no basis whatsoever for believing the model outputs represent anything approaching reality or form a reasonable basis for deciding future policy.
An alternative is theoretical analysis from first principles. This is something which, in my own small and limited way, I am trying to do, as are others. Another way is to recognise that nature works by rigid adherence to natural law and is thus repeatable. This means that we can look back through the historical record for similar situations. If we can find such situations, then the response that occurred then is a good indicator of what might happen in our immediate future. This was exactly the basis for the original carbon dioxide driven global warming claims. Vostok ice core data showed carbon dioxide levels and temperature rising and falling together. Cause and effect was claimed: proof that rising carbon dioxide caused temperatures to increase. Since then, more accurate dating has shown that temperature rises or falls 800 years before carbon dioxide responds. This negates the original claims, since it shows that rising temperature causes rising carbon dioxide, not vice versa.
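The kind of lead/lag analysis referred to here can be sketched very simply: shift one series against the other and find the lag at which the correlation peaks. The series below are synthetic stand-ins with a built-in lag, not the actual Vostok measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins with a built-in lag: "temperature" leads, "CO2" follows
# eight samples later. These are not the Vostok ice core data.
n, lag_true = 500, 8
driver = np.cumsum(rng.normal(0.0, 1.0, n + lag_true))   # a slowly wandering signal
temperature = driver[lag_true:]                          # hypothetical temperature proxy
co2 = driver[:-lag_true] + rng.normal(0.0, 0.5, n)       # lags temperature by 8 samples

def lagged_corr(x, y, lag):
    """Correlation of x against y shifted `lag` samples into the future."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

best = max(range(20), key=lambda k: lagged_corr(temperature, co2, k))
print("lag of maximum correlation:", best)  # recovers the built-in lag of 8:
                                            # temperature leads, CO2 responds later
```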
The most reliable data is, of course, the most recent. Over the last century: from 1900 to 1940 we had significant warming with very little increase in carbon dioxide levels; from 1940 to 1975 we had strong cooling while carbon dioxide levels were increasing rapidly; from 1975 to 1998 both carbon dioxide and temperature were increasing significantly; and now, from 1998 to 2009, we have cooling while carbon dioxide levels continue to increase. That represents 40 years of no correlation, 46 years of negative correlation and 23 years of positive correlation. What possible rationale justifies claiming that 23 years of positive correlation proves the AGW theory while ignoring 86 years of negative or zero correlation?
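For completeness, the period arithmetic quoted above can be checked directly (years only; no temperature or carbon dioxide values are assumed):

```python
# Checking the period arithmetic quoted above (years only; no data values assumed).
periods = {
    "1900-1940: warming, little CO2 rise (no correlation)": (1900, 1940),
    "1940-1975: cooling, CO2 rising (negative correlation)": (1940, 1975),
    "1975-1998: warming, CO2 rising (positive correlation)": (1975, 1998),
    "1998-2009: cooling, CO2 rising (negative correlation)": (1998, 2009),
}
for label, (start, end) in periods.items():
    print(f"{label}: {end - start} years")

positive = 1998 - 1975                                            # 23 years
negative_or_zero = (1940 - 1900) + (1975 - 1940) + (2009 - 1998)  # 40 + 35 + 11 = 86 years
print("positive correlation:", positive, "years")
print("negative or zero correlation:", negative_or_zero, "years")
```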
There are many others who are going back further into Earth's historical record with more skill than I have. The reports I have seen suggest, firstly, that current temperatures are in no way abnormal; secondly, that current carbon dioxide levels are also in no way abnormal; and, most importantly, that the historical record shows little cause and effect between carbon dioxide level and temperature. This is significant evidence that needs to be evaluated carefully with a cool, unbiased head. I would also suggest that theoretical analysis from first principles can give very high quality answers and also warrants more attention.
*****************
Michael Hammer graduated with a Bachelor of Engineering Science and Master of Engineering Science from Melbourne University. Since 1976 he has been working in the field of spectroscopy, with the last 25 years devoted to full-time research for a large multinational spectroscopy company in Melbourne.