Severe analytical problems in dendroclimatology, part fifteen

I’m going to give this topic another explanatory shot with some different graphics, because many still don’t grasp the serious problems inherent in trying to separate signal from noise in tree ring size. The most advanced method for attempting this is called Regional Curve Standardization, or RCS, in which ring size is averaged over a set of sampled trees according to the rings’ biological age (i.e. ring number, counting from tree center), and each individual series is then divided by this average. I include five time series graphs, successively containing more information, to try to illustrate the problem. I don’t know that I can make it any clearer than this.
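For concreteness, here is a minimal sketch of that standardization step in R (my own illustration, not code from any dendro package; the function name rcs_detrend and the data layout are made up for the example):

```r
# Minimal sketch of Regional Curve Standardization (RCS).
# 'rings' is a list of numeric vectors, one per tree, each ordered by
# biological age (element 1 = the ring formed at the pith).
rcs_detrend <- function(rings) {
  max_age <- max(sapply(rings, length))
  # Regional curve: mean ring size across all trees at each biological age
  regional_curve <- sapply(seq_len(max_age), function(a) {
    mean(sapply(rings, function(x) if (a <= length(x)) x[a] else NA),
         na.rm = TRUE)
  })
  # Divide each series by the regional curve to get dimensionless indices
  lapply(rings, function(x) x / regional_curve[seq_along(x)])
}

# e.g. rcs_detrend(list(c(0.9, 1.3, 1.8, 1.6), c(1.1, 1.5, 1.7)))
```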

First, shown below are hypothetical ring series for 11 trees sampled at a single location.

Each black line shows the annual ring area progression for one of 11 trees having origin dates spaced exactly 10 years apart (the bold line is just the oldest tree of the group). By using ring area as the metric we automatically remove part of the non-climatic trend, namely the purely geometric (inverse quadratic) effect in each series. Any remaining variation is then entirely biological, and it exhibits a very standard tree growth pattern, one in which growth rate increases to a maximum value reached relatively early in life (here, around age 80 or so) and then declines more slowly toward a stable asymptote, which I fix at 1.0. Each tree’s trajectory occurs in a constant climate over the 300-400 year period measured.
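The post doesn’t give the exact functional form of that growth curve, so the R sketch below uses one plausible hump-shaped function of my own choosing (peaking near age 80, decaying toward an asymptote of 1.0); only its general shape matters for the argument.

```r
# Hypothetical non-climatic (biological) ring-area trajectory: rises to a
# maximum near age 80, then declines toward an asymptote of 1.0.
bio_curve <- function(age, peak_age = 80, amp = 1.5) {
  1 + amp * (age / peak_age) * exp(1 - age / peak_age)
}

origins <- seq(0, 100, by = 10)   # 11 origin dates, spaced 10 years apart
years   <- 0:400                  # calendar years of the simulation

# One row per tree, one column per calendar year; NA before the tree exists
black <- t(sapply(origins, function(o) {
  age <- years - o
  ifelse(age >= 1, bio_curve(age), NA)
}))
# matplot(years, t(black), type = "l", col = 1)   # the 11 black curves
```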

The next figure adds two components:

First, the blue line represents a constantly increasing climatic parameter over time, say temperature, expressed as a ratio of its effect on ring size at year 0. Thus, at year 400, the cumulative climatic effect on ring area, regardless of biological age, is exactly 3-fold its year-zero value (scale at right). The second addition is the series of red lines, which simply represent those same 11 trees’ growth trajectories under this climate trend. The climatic effect on growth is a super simple linear ramp in all cases–I am not invoking any kind of problematic, complex growth response (e.g. “divergence”), or any other complication. Thus, by definition, if we divide the two corresponding ring series for each tree, we get exactly the blue line, in all cases.
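Continuing the sketch (the setup is repeated so the block stands alone; the growth curve is still my own stand-in): the climate multiplier ramps linearly from 1 at year 0 to 3 at year 400, each red series is just the corresponding black series times that multiplier, and dividing red by black recovers the ramp exactly.

```r
# Setup repeated from the previous sketch (hypothetical growth curve and trees)
bio_curve <- function(age, peak_age = 80, amp = 1.5) {
  1 + amp * (age / peak_age) * exp(1 - age / peak_age)
}
origins <- seq(0, 100, by = 10)
years   <- 0:400
black   <- t(sapply(origins, function(o) {
  age <- years - o
  ifelse(age >= 1, bio_curve(age), NA)
}))

# Linear climate ramp: 1-fold effect at year 0, 3-fold at year 400 (blue line)
climate <- 1 + 2 * years / 400
red     <- sweep(black, 2, climate, `*`)   # the same trees under the trend

# Dividing each red series by its black counterpart returns the blue line
all.equal(as.numeric(red[1, -1] / black[1, -1]), climate[-1])   # TRUE
```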

In the third figure:

I add a green line–this is the estimated RCS curve, computed the standard way (by aligning each tree according to its biological age and then averaging the ring sizes over all trees). This RCS curve is thus the estimated non-climatic ring size variation, which we accordingly remove from each tree by dividing the red growth series by it. Finally, we average the resulting 11 index series, over each of the 400 years, giving the stated goal: the estimated climatic time series.
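Here is the whole toy pipeline as one self-contained R sketch (again my reconstruction, with the made-up growth curve from before): build the green RCS curve by averaging over biological age, divide each red series by it, and average the resulting indices by calendar year.

```r
# Self-contained toy example; the growth curve is a stand-in of my own
bio_curve <- function(age, peak_age = 80, amp = 1.5) {
  1 + amp * (age / peak_age) * exp(1 - age / peak_age)
}
origins <- seq(0, 100, by = 10)
years   <- 0:400
climate <- 1 + 2 * years / 400                 # the true climatic signal (blue)

black <- t(sapply(origins, function(o) {
  age <- years - o
  ifelse(age >= 1, bio_curve(age), NA)
}))
red <- sweep(black, 2, climate, `*`)           # the observed ring series (red)

# Green line: RCS curve, mean ring size by biological age across trees
max_age <- max(years) - min(origins)
rcs <- sapply(seq_len(max_age), function(a) {
  cols <- origins + a + 1                      # calendar-year column at age a
  ok   <- cols <= length(years)                # trees still within the record
  mean(red[cbind(which(ok), cols[ok])])
})

# Divide each red series by the RCS value at its biological age, then average
# the 11 index series by calendar year: the estimated "climate" (orange line)
index <- t(sapply(seq_along(origins), function(i) {
  age <- years - origins[i]
  out <- rep(NA_real_, length(years))
  out[age >= 1] <- red[i, age >= 1] / rcs[age[age >= 1]]
  out
}))
chronology <- colMeans(index, na.rm = TRUE)

# Apart from sample-size artifacts in the earliest decades, the recovered
# chronology is nearly flat: it captures essentially none of the 3-fold rise.
range(chronology[-1])
```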

It is at first glance entirely clear that the green RCS curve does not even come close to matching any of the black curves representing the true non-climatic variation…which it must. According to standard dendroclimatological practice we would now divide the 11 red curves by this green RCS curve–which is thereby guaranteed not to return the true climatic signal. So what will it return?

It returns the orange line shown above. No, that’s not a mistake: it will return an estimated climatic trend of zero.

And this is the entire point–the supposedly most advanced tree ring detrending method is fully incapable of returning the real climatic trend when one exists. Note that I’m keeping everything very simple here–this result does not depend on: (1) either the direction or magnitude of the true trend, or (2) the magnitude, or shape, of the non-climatic trend in the sampled trees (including the case of no such trend at all). That is, neither the type nor the magnitude of this result is specific to the situation I set up. The problem can be reduced, but never eliminated, by increasing the variance in tree ages in the sample. But since standard field sampling practice is to sample the oldest possible trees at a site, such variance is rare, a fact which the data of the International Tree-Ring Data Bank (ITRDB) show clearly–which is ironic given that Keith Briffa and Ed Cook mentioned the importance of exactly this issue in a white paper available at the ITRDB site.
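That last claim is easy to check in the toy setup: wrap the pipeline in a function of the origin dates and compare a tight age spread with a wide one. As before, this is my own sketch with a made-up growth curve; only the qualitative comparison matters.

```r
# Compare RCS trend recovery for a tight vs. a wide spread of origin dates
bio_curve <- function(age, peak_age = 80, amp = 1.5) {
  1 + amp * (age / peak_age) * exp(1 - age / peak_age)
}

run_rcs <- function(origins, years = 0:400) {
  climate <- 1 + 2 * years / max(years)        # true 3-fold ramp
  red <- t(sapply(origins, function(o) {
    age <- years - o
    ifelse(age >= 1, bio_curve(age) * climate, NA)
  }))
  max_age <- max(years) - min(origins)
  rcs <- sapply(seq_len(max_age), function(a) {
    cols <- origins + a + 1
    ok   <- cols <= length(years)
    mean(red[cbind(which(ok), cols[ok])])
  })
  index <- t(sapply(seq_along(origins), function(i) {
    age <- years - origins[i]
    out <- rep(NA_real_, length(years))
    out[age >= 1] <- red[i, age >= 1] / rcs[age[age >= 1]]
    out
  }))
  colMeans(index, na.rm = TRUE)
}

yrs   <- 0:400
tight <- run_rcs(seq(0, 100, by = 10))   # tree ages 300-400 yr, as above
wide  <- run_rcs(seq(0, 300, by = 30))   # much greater variance in tree age

# Linear fits to the recovered chronologies; the true signal rises by 2.0
# (from 1 to 3). The wide-spread sample recovers noticeably more of the
# trend than the tight-spread one, but both fall well short of the truth.
coef(lm(tight ~ yrs))[2] * 400
coef(lm(wide  ~ yrs))[2] * 400
```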

Lastly, suppose now that the last usable year for all ring series occurred a few decades ago. This will happen, for example, because many ITRDB field samples were collected decades ago, or because of perceived problems in the climate-to-ring response calibration function, which must be stable and dependable (notably, the “divergence” effect, in which linear relationships between climate and ring size break down, badly). What will be the result of eliminating, say, the last five decades of ring data and replacing them with instrumental data? Well, you will then get exactly this:

Look familiar? Does that look like anything remotely approaching success to you? Again, I have not even broached other possibly confounding problems, such as co-varying growth determinants (e.g. increasing CO2- or N-fertilization, changing soil moistures, or inter-tree competition), nor non-linear responses in the calibration function, nor any of the thorny issues in large-scale sampling strategies, reconstructions and their corresponding data analysis methods. Those things would all exacerbate the problem, not improve it. It’s a total analytical mess–beginning and end of story.
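For completeness, a rough sketch of that truncate-and-splice step. The post doesn’t specify how the instrumental segment is scaled onto the proxy series, so I simply offset it to meet the (essentially flat) chronology at the splice year; the qualitative shape is the point.

```r
# Flat proxy chronology (as the RCS example produces) up to the splice year,
# with the final 50 years replaced by the instrumental record (the true ramp)
years        <- 0:400
chronology   <- rep(1, length(years))     # what RCS actually returns, roughly
instrumental <- 1 + 2 * years / 400       # the true climatic ramp
splice_year  <- 350

spliced <- chronology
late    <- years > splice_year
spliced[late] <- instrumental[late] - instrumental[years == splice_year] + 1

# plot(years, spliced, type = "l")  # flat for 350 years, then an abrupt upturn
```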

I can’t make it any clearer than this. And yes I have the R code that generated these data if you want to see it.


How not to do it

This is a long post. It analyzes a paper that recently appeared in Nature. It’s not highly technical, but it does get into some important analytical subtleties. I often don’t know where to start (or stop) with critiques of science papers, or what good they will do anyway. But nobody ever really knows what good any given action will do, so here goes. The study topic involves climate change, but climate change is not the focus of either the study or this post. The issues are, rather, mainly ecological and statistical, set in a climate change situation. The study illustrates some serious, and diverse, problems.

Before I get to it, a few points:

  1. The job of scientists, and science publishers, is to advance knowledge in a field
  2. The highest profile journals cover the widest range of topics. This gives them the largest and most varied readerships, and accordingly, the greatest responsibilities for getting things right, and for publishing things of the highest importance
  3. I criticize things because of the enormous deficit of critical commentary from scientists on published material, and the failures of peer review. The degree to which the scientific enterprise as a whole just ignores this issue is a very serious indictment of it
  4. I do it here because I’ve already been down the road–twice in two high profile journals–of doing it through journals’ established procedures (i.e. the peer-reviewed “comment”); the investment of time and energy, given the returns, is just not worth it. I’m not wasting any more of my already limited time and energy playing by rules that don’t appear to me designed to actually resolve serious problems. Life, in the end, boils down to determining who you can and cannot trust and acting accordingly

For those without access to the paper, here are the basics. It’s a transplant study, in which perennial plants are transplanted into new environments to see how they’ll perform. Such studies have at least a 100-year history, dating to genetic studies by Bateson, the Carnegie Institution, and others. In this case, the authors focused on four forbs (broad-leaved, non-woody plants) occurring in mid-elevation mountain meadows in the Swiss Alps. They wanted to explore the effects of new plant community compositions and temperature (T) change, alone and together, on three fitness indicators: survival rate, biomass, and fraction flowering. They attempted to simulate having either (1) entire plant communities, or (2) just the four target species, experience sudden T increases, by moving them downslope 600 meters. [Of course, a real T increase in a montane environment would move responsive taxa upslope, not down.] More specifically, they wanted to know whether competition with new plant taxa–in a new community assemblage–would make any observed effects of T increases worse, relative to those experienced under competition with species they currently co-occur with.

Their Figure 1 illustrates the strategy:

Figure 1: Scenarios for the competition experienced by a focal alpine plant following climate warming. If the focal plant species (green) fails to migrate, it competes either with its current community (yellow) that also fails to migrate (scenario 1) or, at the other extreme, with a novel community (orange) that has migrated upwards from lower elevation (scenario 2). If the focal species migrates upwards to track climate, it competes either with its current community that has also migrated (scenario 3) or, at the other extreme, with a novel community (blue) that has persisted (scenario 4).



Another bad paper in Nature

“Increasing CO2 threatens human nutrition” boldly proclaims a new paper published last week in Nature by Samuel Myers of Harvard and 19 co-authors (press release here). I don’t have time for this, but something needs to be said, and if I don’t do it, I betcha nobody will.

Papers can be bad for various reasons, obviously. Logically enough, this is most often due to poor methodology. But it can also be because the science is more or less OK, as far as it goes, while the principal claim(s) of the paper do not really follow from the study’s actual findings. By “principal claim” I mean the one or two main points most emphasized, perhaps in the concluding paragraph, or the abstract, or even just the title itself, as in this case. This paper has both problems, but especially the latter.

Frankly, Nature publishes a lot of bad science, but I’m more surprised than usual at the audacity of this one. This thing shouldn’t have been published in Nature in the first place. I get the strong impression that Nature and other glamour journals are counting on people just reading the title and skimming the paper, without any real critical evaluation. Why? Because if you have any basic sense of the issues and read this paper, there’s no way you can accept that title as stated, based on the study performed, even if the paper’s methodology were entirely sound, which it is not. Who exactly do they think is reading these papers, and indeed, who is reading them? Not skimming, I mean really reading closely. Well, we have no idea, because critical, in-depth commentary on papers is rare indeed.
