Severe analytical problems in dendroclimatology, part nine: The PNAS review

It’s unfortunate that I have to do this, but I do, at least partly because I said I would, but mainly because it just can’t go unsaid.

Late last July I submitted a very long and detailed manuscript to PNAS (Proceedings of the National Academy of Sciences) on my discovery of certain severe and previously undescribed analytical problems in the estimation of long term climate trends from tree rings.  The manuscript was rejected without possibility of revision.  That decision was based on a completely irresponsible and dismissive review, so I appealed it with point-by-point counters to the reviewers’ points; the appeal was also rejected, entirely dismissed in fact.  In the exchange of emails that occurred during this process, I informed the PNAS Editor-in-Chief that, due to the illegitimacy of the review, including the complete lack of response to the issues raised in the appeal, I would make the details of the review process publicly known.  This post is a partial fulfillment of that promise.

When you invest your entire life in a piece of work, because you’re onto something fairly important on a fairly important topic, and then get back a review in which the reviewers demonstrate that they either didn’t understand what you did or, worse, present no evidence that they even fully and fairly tried to, and yet summarily reject the work for invalid reasons, and under suspicious circumstances no less… well, there may in some quarters be a slight tendency to get just a trifle irritated with that kind of behavior.  Especially when it comes from the National Academy of Sciences of the United States.

A few background items to understand here.  First, the manuscript represents a tremendous amount of work over a period of more than two years.  I worked on no other research project the entire time, putting everything else on hold, including a large chunk of my life in general.  It is easily the most difficult piece of work I have done to date, including my dissertation, and quite possibly the most difficult thing I will ever do.  Second, there was no funding for the research–it was all done at my own expense.  Third, the rejection without revision occurred just after the IPCC’s August 1 deadline for initial submission of any manuscripts that can be discussed and cited in the upcoming “AR5” climate assessment report.  The AR5 therefore does not have to consider the issues I raised, and one can be quite confident that the reviewers and handling editor were well aware of this fact, since the IPCC Assessment Report is by far the most important climate document in the world.  I can’t be sure that this was a reason for the outright rejection, but I’d be more than willing to bet on it.  Fourth, I know who one of the reviewers was (because they signed their review), the organization this individual works for, and some background on various activities conducted by members of that organization over the last decade or more.  Nor am I alone in that knowledge.

One of the main reasons I submitted to PNAS was that, alone among the high profile, multi-disciplinary journals, it has a policy allowing for a prearranged handling editor, specifically tasked with making sure that any paper considered controversial, or challenging to the conventional wisdom, receives a fair hearing.  That person is supposed to be an NAS member and has to be knowledgeable in the topic area.  My manuscript certainly qualified, so I followed the PNAS instructions on how to proceed: you email those individuals directly and ask whether they will serve in that capacity.  There were only two NAS members with the necessary subject matter knowledge.  I emailed the first one, Wallace Broecker, and he did not respond.  Two or three weeks later I emailed him again, and he then replied with one sentence that he would not do it.  So I emailed the second person, Jonathan Kutzbach.  His reply was almost identical to Broecker’s; no explanation in either case, as if I had asked them to go minister to lepers.  So I then emailed the PNAS editorial office, explaining the situation and asking what to do.  Someone there replied that I could nominate several non-NAS members to fulfill the role.  I replied that this seemed acceptable, even though I would not know whom they would choose, or whether they would in fact choose any of them.  So be it.

So, I submitted my manuscript, with a list of three or four suggested people I thought could fairly fulfill this “Prearranged Editor” role, based on my previous experience.  The other reason I chose PNAS was that they allow a Supplemental Information section, in which you can place all the extra material that won’t fit within the eight-page length limit of the main paper.  I had an enormous amount of material, so this was absolutely essential.  There were hundreds of graphs and data results files.  In fact, I had to put most of those on my personal website, with a link to them in the paper, because they were too extensive for the manuscript uploading system to handle, even as supplemental information files.

The work itself was a large simulation experiment in which I systematically varied several potentially important tree growth and age structure parameters to create 192 distinct tree ring data sets, each set analogous to the sampled trees at a typical, single sampling site.  I then detrended the ring series in each set using five different existing algorithms–the four most commonly used ones plus one relatively new one–for 960 total experimental tests.  The reason for the 192 different data sets was to ensure that I had not omitted any reasonably possible growth scenario, i.e., that no real tree ring data would be likely to have critical characteristics beyond the domain of those included in the simulations.  This was important, because I was attempting to show the limits of accuracy of the five existing analytical methods: if they couldn’t return accurate results from any of these simulated data situations, many of which were far tamer than anything expected with real data, then their likelihood of doing so with such real data would be very low, essentially zero.  It was therefore a strict and comprehensive test of the methods’ general validity.
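To make the design concrete, here is a minimal sketch in R of one cell of such an experiment.  This is my own illustration, not the manuscript’s code: the growth model, its parameter values, and the single spline detrending method shown are all assumptions for demonstration.  The key idea is that the long-term climate signal is known by construction, so the chronology a detrending method returns can be scored directly against it.

```r
## One simulated "site": trees share a negative-exponential age trend plus a
## known linear climate trend; each series is detrended and the resulting
## site chronology is compared with the known signal.
## (Illustrative assumptions throughout; not the manuscript's actual code.)

set.seed(1)
n_years <- 300                       # length of the site record
n_trees <- 20                        # sampled trees at the site
climate <- 0.002 * (1:n_years)       # known long-term climate signal

simulate_tree <- function() {
  age_trend <- 1.5 * exp(-0.02 * (1:n_years)) + 0.3   # biological growth decline
  age_trend + climate + rnorm(n_years, sd = 0.1)      # simulated ring widths
}
rwl <- replicate(n_trees, simulate_tree())            # years x trees matrix

## Detrend each series with a smoothing spline (one illustrative method)
detrend_series <- function(x) {
  fit <- smooth.spline(1:n_years, x, df = 6)
  x / fitted(fit)                                     # ratio-based ring-width index
}
rwi <- apply(rwl, 2, detrend_series)

## Site chronology = mean index across trees, scored against the known signal
chron <- rowMeans(rwi)
cor(chron, climate)   # typically well below 1: the spline absorbs the slow trend
```

In a full factorial design of this kind, the simulation parameters and the detrending method become the experimental factors; 192 parameter combinations crossed with 5 methods gives the 960 tests described above.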

There were endless hours (days and weeks–months, actually) spent making sure that I had accomplished this goal: covered every reasonably possible growth scenario, provided a thorough background discussion of the nature of the problem, described the methods very thoroughly, gotten the R code absolutely correct, produced graphs that captured the results and were clear and easy to interpret, created results tables for every experiment, computed several different verification statistics for the calibrations, devised a set of diagnostic measures for the existence and nature of the documented issues and the results they produced, and so on.  It was wordsmithed to death.  I tried very hard to make sure I had gotten it right the first time, and that, given the highly controversial nature of the findings and the contentiousness of the entire climate change issue, I had the necessary evidence to back up my claims and presented it as well as I could.
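For readers unfamiliar with the verification statistics mentioned above, two standard ones in dendroclimatic calibration/verification work are the reduction of error (RE) and the coefficient of efficiency (CE).  A minimal sketch of the pair, as commonly defined (my own illustration; the manuscript’s actual set of statistics is not reproduced here):

```r
## Reduction of error (RE) and coefficient of efficiency (CE) for a
## reconstruction scored against withheld (verification period) observations.
## obs:      observed climate in the verification period
## est:      reconstructed estimates for the same years
## cal_mean: mean of the observed climate over the calibration period
reduction_of_error <- function(obs, est, cal_mean) {
  1 - sum((obs - est)^2) / sum((obs - cal_mean)^2)
}
coefficient_of_efficiency <- function(obs, est) {
  1 - sum((obs - est)^2) / sum((obs - mean(obs))^2)
}
## Both equal 1 for a perfect reconstruction; values at or below zero mean
## the reconstruction has no skill beyond the relevant period mean.
```

CE is the stricter of the two: it can never exceed RE, because the verification-period mean is a harder baseline to beat than the calibration-period mean when the two periods differ.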

One more note of importance.  The manuscript actually started out as a completely different work, in which I developed a new detrending method that resolves a number of the existing problems.  When that work was essentially done, however, I realized that it made very little sense to propose a new and better method when it wasn’t even clear to most people that some of the problems I was attempting to fix existed in the first place.  You must clearly define and describe an existing problem before proposing a sophisticated fix for it.  It would take one entire paper, at least, just to lay out exactly what these problems were, their magnitude, their cause, and so forth; I had to write that paper first.  Science in general is not too keen on such problem-only papers, however; frankly, they don’t look very good.  In this case, publishing one means admitting the failure to address–or even recognize–certain fundamental problems, problems that have led to many papers over the years–including some very high profile ones–that range from questionable to entirely worthless.  This situation is just not acceptable, period.

The next post(s) will get into the details of the review itself.

4 thoughts on “Severe analytical problems in dendroclimatology, part nine: The PNAS review”

  1. Have you thought of Earth Science Reviews?  Yeah … I know it is Elsevier, but I found them fairly amenable in allowing me to put together a synthesis of a vast amount of research material.  It was the culmination of ten years of research.  At least the detailed information is now out there for others to use.

    And it is not easily ducked by other workers in the field.

    • Thanks K.A.  I am opposed to Elsevier’s practices, especially their obstinate resistance to the movement toward open science, but I will consider it.  Fairness is a huge issue, of course.

      As far as the “information being out there”–that’s exactly why I’ve written up the gist of my findings in this series. I just want people to understand the issues here. Do I want a pub in a high profile journal? Sure, what scientist doesn’t, and I most certainly don’t take kindly to PNAS messing with something that affects my career. But that isn’t what drives me in this thing.

      As for “ducking”, my impression is that people who want to duck something will, no matter where it’s published.  Look at Loehle’s 2009 paper, for example.  That thing is right on the money, clearly and simply written, in a well-known journal, and it’s been avoided like the plague for the most part.

      Also, would definitely be interested in looking at your paper–could you give a ref?
