I’ve been complaining about the small Arctic ice data set in recent posts. Satellites have only been measuring this stuff since 1979, so a small data set is virtually unavoidable. I suppose I’m not really complaining about the data set…but about the way it is used.
We as a society (the IPCC and their friends) have been making all sorts of judgements about all sorts of things (polar bear habitat is my personal favorite) for years now…by relying on small data sets and extrapolation.
I have never been a fan of extrapolation. When a 32-year data set is extrapolated for hundreds of years…I get nervous.
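To see why a short record makes me nervous, here is a minimal sketch in Python using purely made-up numbers (a synthetic slow oscillation plus noise, not any real temperature series): fit a straight line to just the last 32 years of a cyclical series, then extrapolate it a couple of centuries out, and the line wanders far outside anything the series itself ever does.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical series: a slow 60-year oscillation plus noise.
# Purely synthetic -- invented to illustrate the extrapolation problem.
years = np.arange(1850, 2013)
temps = 0.2 * np.sin(2 * np.pi * (years - 1850) / 60) + rng.normal(0, 0.05, years.size)

# Fit a straight line to only the most recent 32 years (1981-2012)...
recent = years >= 1981
slope, intercept = np.polyfit(years[recent], temps[recent], 1)

# ...then extrapolate that line 200 years past the end of the data.
future = 2212
projected = slope * future + intercept

print(f"trend fitted on 32 years: {slope * 100:+.2f} per century")
print(f"value projected for {future}: {projected:+.2f}")
# The underlying oscillation never leaves roughly +/-0.25, but the
# extrapolated straight line lands far outside that range.
```

The fitted slope is just the downswing (or upswing) of whichever phase of the cycle the 32-year window happens to catch, so the projection says more about the window than about the system.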
Allow me to explain. Let’s begin by looking at the University of East Anglia (UK) global temperature history going back to 1850. Still a small data set, but a bit better than 32 years. I prefer East Anglia data to NASA data, for reasons a previous post discusses.
The period of most rapid change in this graph is roughly 1977 to 1998. Since 1998 the record has stabilized at a new, higher temperature and has been cooling slightly in recent years. Any analysis that uses the period from 1979 to 2000 as its baseline is ignoring that recent data, and such a small baseline period is going to generate odd predictions.
Unfortunately, that is exactly what the University of Colorado does every day at their web site on Arctic sea ice. Here is a map showing changes in sea ice in 2010 and 2012…and also showing the median extent for the period 1979 to 2000.
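The baseline choice matters more than it might seem. Here is a small sketch with invented September ice-extent numbers (a steady made-up decline, not real NSIDC data) showing how the same current year looks considerably more anomalous against a 1979–2000 median than against a median of the full record:

```python
import numpy as np

# Hypothetical September sea-ice extents (million km^2), one per year,
# 1979-2012 -- invented numbers for illustration only.
years = np.arange(1979, 2013)
extent = np.linspace(7.2, 3.6, years.size)  # a steady made-up decline

this_year = extent[-1]

# Baseline choice 1: median of 1979-2000 (the early, high-ice years only)
base_early = np.median(extent[years <= 2000])

# Baseline choice 2: median of the full 1979-2012 record
base_full = np.median(extent)

print(f"departure vs 1979-2000 median:  {this_year - base_early:+.2f}")
print(f"departure vs full-record median: {this_year - base_full:+.2f}")
```

Because the 1979–2000 window samples only the high-ice end of the record, every recent year is guaranteed to look far below "normal" by construction; a full-record baseline shrinks the apparent departure for the identical data.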
I wonder what the ice sheet looked like in…say 1942, or perhaps 1912. 1942 was at the end of a 30-year warming cycle, and 1912 was at the end of a cooling cycle that appears to have begun in about 1879, according to the University of East Anglia data. Can we really predict anything with certainty about ice melting patterns in a climate system with this much natural temperature variation…particularly by extrapolating small data sets?