Karl et al., once again

I’m just going to pick up here from the update posted at the top of the previous post. Read the previous two posts if you need the background on what I’m doing.

I was fortunately able to get the global mean annual values for NOAA’s ERSST version 4 data from Gavin [edit: this must actually be merged land and ocean, or MLOST, not ERSST data]. Here’s the same analysis I did previously, using those data. I also realized that the data at the NOAA web page included the year-to-date value for 2015 (which so far is apparently record warm); I removed that year here. As before, there is no significance testing here, which is trickier than it might appear, and more than I have time for. (Note that Karl et al. did not test for significance either.)

First, here’s a graph of the two time series (previous ERSST version = black, new data = blue). The graph appears to be identical, or nearly so, to Karl et al.’s Fig 1a:
[Figure: the two global mean annual temperature series, previous ERSST version (black) and new version (blue)]

Here’s where it gets interesting. Visually, there’s clearly little difference between the two series; their correlation is about 0.995. But when I run the same analysis on the new data that I ran on the previous version, the results (right-most two columns) are very different indeed:

   Start1  End1  Start2  End2   Slope1  Slope2   Ratio  Previous
     1911  1997    1998  2012  0.06933  0.0855  1.2332    0.4995
     1911  1997    1998  2014  0.06933  0.1063  1.5325    0.8170
     1911  1999    2000  2012  0.07252  0.0920  1.2685    0.5422
     1911  1999    2000  2014  0.07252  0.1162  1.6018    0.9033
     1931  1997    1998  2012  0.06517  0.0855  1.3120    0.5821
     1931  1997    1998  2014  0.06517  0.1063  1.6305    0.9522
     1931  1999    2000  2012  0.07071  0.0920  1.3011    0.5999
     1931  1999    2000  2014  0.07071  0.1162  1.6430    0.9995
     1951  1997    1998  2012  0.10488  0.0855  0.8152    0.3647
     1951  1997    1998  2014  0.10488  0.1063  1.0131    0.5966
     1951  1999    2000  2012  0.11211  0.0920  0.8206    0.3797
     1951  1999    2000  2014  0.11211  0.1162  1.0363    0.6327
     1971  1997    1998  2012  0.17069  0.0855  0.5009    0.2162
     1971  1997    1998  2014  0.17069  0.1063  0.6225    0.3536
     1971  1999    2000  2012  0.17887  0.0920  0.5143    0.2274
     1971  1999    2000  2014  0.17887  0.1162  0.6495    0.3788

All of the ratios are now higher; that is, there is less of a difference in slope between the post- and pre-breakpoint periods, regardless of start, break, or end year. Most of the ratios are now > 1.0; before, none of them were. For the 1951 start date emphasized most by Karl et al., ending in 2014, the slopes are nearly equal, just as they state. Ending instead at 2012, the recent interval’s slope is just over 80% of the earlier one, much higher than with the previous version’s data, where it was 36-38%. Choice of start year has a large effect: a 1931 start gives the highest ratios and a 1971 start the lowest. Starting from 1971, there has clearly been a slowdown; starting from 1931, there has just as clearly been an increase; starting from 1951, it washes out as a draw.
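For concreteness, the slope-ratio comparison behind the table can be sketched as below. This is not the actual script used for the analysis (which appears to have been R); it is a minimal Python reconstruction, assuming annual mean anomalies indexed by year, with ordinary least-squares trends fit to each interval.

```python
# Sketch of the slope-ratio comparison: OLS trend before vs. after a
# breakpoint, and their ratio. Function names are my own, not the authors'.
import numpy as np

def trend_slope(years, temps, start, end):
    """OLS slope (deg per year) over the inclusive year range [start, end]."""
    mask = (years >= start) & (years <= end)
    # np.polyfit with deg=1 returns [slope, intercept]
    slope, _ = np.polyfit(years[mask], temps[mask], 1)
    return slope

def slope_ratio(years, temps, start1, end1, start2, end2):
    """Ratio of the post-breakpoint trend to the pre-breakpoint trend."""
    return (trend_slope(years, temps, start2, end2) /
            trend_slope(years, temps, start1, end1))

# Toy demonstration on a synthetic series whose two segments have known
# slopes (0.01/yr before 1998, 0.02/yr after), so the ratio should be 2:
years = np.arange(1911, 2015)
temps = np.where(years <= 1997,
                 0.01 * (years - 1911),
                 0.01 * (1997 - 1911) + 0.02 * (years - 1997))
print(round(slope_ratio(years, temps, 1911, 1997, 1998, 2014), 2))  # 2.0
```

Running each (Start1, End1, Start2, End2) row of the table through `slope_ratio` on the old and new series would reproduce the `Previous` and `Ratio` columns respectively.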

The only point of real contention now is the last sentence in the paper: “…based on our new analysis, the IPCC’s (1) statement of two years ago – that the global surface temperature ‘has shown a much smaller increasing linear trend over the past 15 years than over the past 30 to 60 years’ – is no longer valid.” Comparing a 30 to 60 year interval with a sub-interval of it is not the proper comparison. You have to compare non-overlapping intervals, and if you start those from 1971, then yes, there definitely has been a slowdown, based on these data.

What’s interesting, and unexpected, to me is how such a very small difference between the two time series can have such a large impact on the ratio of warming rates in the two time periods. When I first graphed the above, my first thought was “nearly identical, not going to affect the rate ratios much at all”.

Wrong!
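In hindsight there is a simple reason: a correlation computed over a century is dominated by the shared long-term rise, while a 15-17 year slope responds directly to small adjustments confined to those years. A toy illustration with synthetic data (not the NOAA series; all numbers here are made up for demonstration):

```python
# Synthetic demonstration: adding a tiny extra trend only to the recent
# years leaves the century-long correlation essentially untouched, yet
# shifts the short recent slope by exactly the amount added.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1911, 2015).astype(float)
base = 0.007 * (years - 1911) + rng.normal(0.0, 0.08, years.size)

# Nudge only 1998 onward, adding a gentle extra trend of 0.0015 deg/yr:
tweak = np.where(years >= 1998, 0.0015 * (years - 1998), 0.0)
new = base + tweak

def short_slope(t):
    """OLS slope of a series over 1998-2014."""
    m = (years >= 1998) & (years <= 2014)
    return np.polyfit(years[m], t[m], 1)[0]

r = np.corrcoef(base, new)[0, 1]          # correlation stays very near 1
d = short_slope(new) - short_slope(base)  # but the 1998-2014 slope shifts
print(r > 0.99, abs(d - 0.0015) < 1e-6)   # prints: True True
```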

p.s. On the NOAA data issue–I noticed that one can in fact get version 4 data from NOAA, just not the spatio-temporally aggregated versions, which, as mentioned, have incorrect links to the previous version. You have to be willing and able to do your own aggregating, but if you are, you can get both ASCII and NetCDF formats, by month and grid cell.
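For anyone attempting that do-it-yourself aggregation, the key step is an area-weighted global mean: grid cells shrink toward the poles, so each cell must be weighted by the cosine of its latitude. A minimal sketch, assuming monthly anomaly grids already read into an array (the dimension order and the use of NaN for missing cells are my assumptions; the actual NOAA file layout may differ):

```python
# Hedged sketch: monthly gridded anomalies -> global annual means.
# Assumes anom has shape (n_months, n_lat, n_lon), n_months divisible by 12,
# with missing cells stored as NaN. Not NOAA's own aggregation code.
import numpy as np

def global_annual_mean(anom, lats):
    """Area-weighted global annual means from monthly gridded anomalies."""
    # cos(latitude) weights, broadcast across months and longitudes
    w = np.cos(np.deg2rad(lats))[np.newaxis, :, np.newaxis]
    w = np.broadcast_to(w, anom.shape)
    # Exclude missing cells from both numerator and denominator,
    # so data gaps don't bias the mean toward zero
    valid = ~np.isnan(anom)
    monthly = ((np.where(valid, anom, 0.0) * w).sum(axis=(1, 2)) /
               (w * valid).sum(axis=(1, 2)))
    # Average the 12 monthly global means within each calendar year
    return monthly.reshape(-1, 12).mean(axis=1)
```

With a weighting like this in place, the per-cell files should reduce to the same kind of global annual series analyzed above.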


3 thoughts on “Karl et al., once again”

  1. After reading these three posts, my mind flashed to baseball. Not because they were boring! Nowadays, sabermetrics is a huge industry and there are websites where you can read very serious scientific (well, statistical anyway) papers describing how new metrics like xFIP accurately predict yada yada yada. But baseball remains highly unpredictable. It is highly demoralizing to see a Climate Sabermetrics paper make it into Science!

    • Oh they were definitely boring Matt 🙂

      A couple years ago I was reading a lot of the sabermetric stuff and was not impressed with most of it, exactly for the reason you state. Only when I started looking at the pitch velocity and trajectory data that started in 2009 did I see something really worthwhile. You can talk all you want about WAR and xFIP and BABIP, etc., but the most fundamental issue in baseball remains how squarely the batter makes contact with the ball. Everything else is secondary, although I will agree that a lot can be learned from simulation, such as with batting orders and offensive strategy. Everything else is +/- lace around the edges.

      Lots of rationalizations and weird behavior around the whole temperature issue and latest paper. As we know, there’s a contingent that loves to talk up the “consensus” in climate change science. Then, when the field cannot even agree on the most basic of issues–how fast the planet has been warming over the last century–because they obviously haven’t first worked out just which data and statistical methods to use, or haven’t gotten around to calibrating different sources of utterly critical data against each other until just recently, they wonder why people question their statements and conclusions. There’s certainly no agreement among them regarding whether there’s been a slowdown or not–that’s evident from watching the discussion on blogs and Twitter.

  2. “…when the field cannot even agree on the most basic of issues…”

    No structure, no discipline. AGW is a free-for-all. As an engineer who has spent his entire career performing carefully structured, objective attribution studies, I am frankly appalled. The Karl paper is yet another “fix the failed prediction” effort and it is just a waste of time. For the first time in the history of mankind, a large contingent of seriously capable climate scientists worked (kinda) together to attempt to predict the temperature trajectory in the future. They missed the mark somewhat, which is what generally happens the first time you try to predict something. The miss was not so bad, really, and there are a few good ideas about why they missed the mark. So you admit there were things you did not understand well enough and you try again. In this case, showing a monotonic rise in temperature is supposed to divert attention from the fact that the models are running hot. But no one predicted a monotonic rise anyway!
