Rate process estimation: Poisson vs gamma

This post is about estimating a rate process, particularly when certain required input data are missing or critical assumptions are unmet. Although it’s quite unrelated to recent posts (here and here), I’m actually headed somewhere definite with the collection of them, if you can bear with me. If you can’t bear with me, I can bear that.

As we know, estimating rate processes is a very common task in science, where “rate” is defined generally as a measured change in one variable per unit measure of another, usually expressed as a ratio or fraction. In the discussion that follows I’ll use object density–the number of objects of interest per unit area–to illustrate the concepts, but they are applicable to rates of any type.

Obtaining rate estimates, with rare exceptions, requires empirical sampling over the domain of interest, followed by some mathematical operation on the samples, often an averaging. Sampling, however, can take two opposing approaches, distinguished by whether the denominator or the numerator is fixed in advance. Using our density example, in the first approach we tally the objects occurring within samples of some a priori defined area (the denominator thus fixed), and then simply average the values over all samples. In the second approach, distance sampling (DS), we instead measure to the nth closest objects from random starting points, with the value of n chosen a priori, and thus with the numerator fixed and the denominator varying. Low values of n are typically chosen for convenience, and the measured distances are converted to density using a suitable density estimator.
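To make the contrast concrete, here’s a minimal simulation sketch (mine, not part of any formal method; the study-area size, density, plot size, sample size, and variable names are all arbitrary choices): it generates a random pattern and then samples it both ways, fixed-area plots that get averaged, and random points from which the distance to the nth closest object is recorded.

```python
# Sketch only: one random pattern, sampled by fixed-area plots and by distances.
import numpy as np

rng = np.random.default_rng(1)
L, density = 100.0, 0.05                      # 100 x 100 area, 0.05 objects per unit area (assumed)
n_objects = rng.poisson(density * L * L)      # homogeneous ("random") pattern
objects = rng.uniform(0, L, size=(n_objects, 2))

# Approach 1: fixed-area plots -- tally objects in each plot, then average.
plot_side = 10.0
counts = []
for x0 in np.arange(0, L, plot_side):
    for y0 in np.arange(0, L, plot_side):
        inside = ((objects[:, 0] >= x0) & (objects[:, 0] < x0 + plot_side) &
                  (objects[:, 1] >= y0) & (objects[:, 1] < y0 + plot_side))
        counts.append(inside.sum())
plot_estimate = np.mean(counts) / plot_side**2

# Approach 2 (DS): from random points, record the distance to the nth closest
# object; densities come later, from a suitable estimator.
n = 2
points = rng.uniform(0, L, size=(50, 2))
d = np.sqrt(((points[:, None, :] - objects[None, :, :])**2).sum(axis=2))
r_n = np.sort(d, axis=1)[:, n - 1]            # distance to the nth closest object

print(f"true density {density}, plot-based estimate {plot_estimate:.4f}")
print(f"mean distance to the {n}th closest object: {r_n.mean():.2f}")
```

The DS half deliberately stops at the raw distances; turning them into a density is exactly where the estimator, and the bias discussion below, comes in.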

This DS approach has definite advantages. For one, it is time-efficient, as no plot delineations are required, and decisions on which objects to tally, or which have already been tallied, are typically easier. A second advantage is that the total sample size is independent of object density, and a third is an often superior scatter of samples through the study area, leading to a better characterization of spatial pattern.

The data from the two approaches are modeled by two well-established statistical models, the Poisson and the gamma, respectively. With the first approach, the number of objects falling in each of a collection of plots follows a Poisson distribution, with its mean set by the overall density and the plot area. With the second, the distances to the nth closest objects follow a gamma distribution.
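For anyone who wants to see both claims in action, here’s a quick numerical check (a sketch with arbitrary parameter values). I work with the “search areas” π r², since that is where the gamma form is cleanest: for a random pattern of density λ, the area out to the nth closest object should be gamma with shape n and rate λ, i.e., mean n/λ and variance n/λ².

```python
# Sketch: plot counts vs. Poisson, and search areas pi*r_n^2 vs. gamma(n, lam).
import numpy as np

rng = np.random.default_rng(2)
L, lam, n = 200.0, 0.05, 3
objects = rng.uniform(0, L, size=(rng.poisson(lam * L * L), 2))

# Poisson side: counts in 10 x 10 plots tiling the square
side = 10.0
edges = np.arange(0, L + side, side)
counts, _, _ = np.histogram2d(objects[:, 0], objects[:, 1], bins=[edges, edges])
print("counts: mean", counts.mean(), "variance", counts.var(), "expected", lam * side**2)

# Gamma side: areas pi*r_n^2 from random points placed well inside the square,
# to sidestep edge effects
points = rng.uniform(20, L - 20, size=(2000, 2))
d = np.sqrt(((points[:, None, :] - objects[None, :, :])**2).sum(axis=2))
area_n = np.pi * np.sort(d, axis=1)[:, n - 1]**2
print("areas: mean", area_n.mean(), "expected", n / lam)
print("areas: variance", area_n.var(), "expected", n / lam**2)
```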

Either approach requires that objects be randomly located throughout the study area for unbiased estimates, but there is a second potential bias source with DS, and hence a downside. This bias has magnitude (n+1)/n, so e.g., measuring to the n = 1st closest object will bias the density estimate by a factor of 2x, to the n = 2nd closest by 3/2, etc. The bias is easily removed simply by multiplying by the inverse, n/(n+1); equivalently, measuring to the 2nd closest objects gives the area corresponding to unit density, that is, the area at which object density equals one. Which is kind of interesting, assuming you’ve made it this far and nothing more enthralling occupies your mind at the moment.
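As a trivial worked example of applying that correction (the factor is the one just described; the per-point form n/(π r²) and the distance values are my own illustrative assumptions, not from any real dataset):

```python
# Sketch: apply the n/(n+1) correction described above to hypothetical distances.
import numpy as np

n = 2
r_n = np.array([3.1, 4.7, 2.9, 6.0, 3.8])   # hypothetical distances to the 2nd closest object
naive = n / (np.pi * r_n**2)                # assumed per-point form: n over the searched area
corrected = naive * n / (n + 1)             # multiply by n/(n+1) to remove the (n+1)/n bias
print(naive.mean(), corrected.mean())
```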

The shape of the gamma distribution varies with n. When n = 1, the gamma assumes a negative exponential shape; when n > 1 it is unimodal, strongly skewed at low values of n but of decreasing skew (more nearly normal) at higher n. The upshot is that one can diagnose n if it is unknown, because these distributions will all differ, at least somewhat, especially in their higher moments, such as the variance and skew. However, discriminatory power decreases with increasing n, and the approach also assumes the objects occur spatially randomly, which they of course may in fact not.
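Here’s one way the “diagnose n” idea could look in practice, as a sketch only: simulate a random pattern, pretend n is unknown, and let a gamma fit recover the shape parameter. I fit the search areas π r² rather than the raw distances, since the gamma form is exact there, and all parameter values below are made up.

```python
# Sketch: recover n from the shape of the search-area distribution (random pattern assumed).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
L, lam, true_n = 200.0, 0.05, 3
objects = rng.uniform(0, L, size=(rng.poisson(lam * L * L), 2))
points = rng.uniform(20, L - 20, size=(1000, 2))
d = np.sqrt(((points[:, None, :] - objects[None, :, :])**2).sum(axis=2))
areas = np.pi * np.sort(d, axis=1)[:, true_n - 1]**2

shape, loc, scale = stats.gamma.fit(areas, floc=0)   # location pinned at zero
print(f"fitted gamma shape: {shape:.2f}  (true n = {true_n})")
print(f"fitted rate 1/scale: {1/scale:.3f}  (true density = {lam})")
print(f"sample skew: {stats.skew(areas):.2f}  (gamma skew 2/sqrt(n) = {2/np.sqrt(true_n):.2f})")
```

With a clustered rather than random pattern the fitted shape would no longer be a clean read on n, which is exactly the caveat in the last sentence above.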

If our rate denominator measurement space is defined on more than one dimension–as it is in our example (area, two dimensions)–we can subdivide it into equally sized sectors and measure distances to the nth closest object within each sector. Sectors here equate to angles emanating from the sample point, and this “angle-order” method increases the sample size at each sample point. Supposing four sectors, for a given value of n the four measurements at a point give (4 choose 2) = 6 possible ratios between the measured distances, each ratio defined as the farther distance over the closer. The distributions of these six ratios, over a collection of sample points, are then collectively discriminatory for n.
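A small sketch of the bookkeeping for the four-sector case (the distance values below are invented; the only point is the ratio computation itself):

```python
# Sketch: the (4 choose 2) = 6 farther-over-closer ratios per sample point.
import numpy as np
from itertools import combinations

# rows = sample points, columns = distance to the nth closest object in each
# of four 90-degree sectors (hypothetical measurements)
d = np.array([[3.2, 5.1, 2.8, 4.4],
              [6.0, 4.9, 7.3, 5.5],
              [2.1, 2.7, 3.9, 2.4]])

ratios = []
for i, j in combinations(range(4), 2):                  # the six sector pairs
    pair = d[:, [i, j]]
    ratios.append(pair.max(axis=1) / pair.min(axis=1))  # farther distance over closer
ratios = np.column_stack(ratios)                        # one row per point, six ratios
print(ratios.round(2))
```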

Usually both the rate and its homogeneity are unknown, and we need the latter to get the former. If a non-random pattern exists, the non-randomness can be quantified (in several ways) if we know n, and density estimates can then be made using suitable corrections. If we don’t know n but do know that the objects are in fact randomly arranged, we can still infer density, although not with nearly as much precision as when we do know n. Interestingly, the bias arising from a clustered non-random pattern is in the same direction as that from an underestimate of n, both leading to density underestimates. Similarly, a tendency toward spatial regularity gives a bias in the same direction as an overestimate of n.
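To illustrate the direction-of-bias point for n (a sketch under assumed values; the simple pooled estimator n/(π · mean r²) is just a stand-in, not a recommendation): distances actually measured to the 3rd closest object, but interpreted with too small an n, give density estimates that are too low.

```python
# Sketch: treating true 3rd-closest distances as if n were smaller under-estimates density.
import numpy as np

rng = np.random.default_rng(5)
L, lam, true_n = 200.0, 0.05, 3
objects = rng.uniform(0, L, size=(rng.poisson(lam * L * L), 2))
points = rng.uniform(20, L - 20, size=(3000, 2))
d = np.sqrt(((points[:, None, :] - objects[None, :, :])**2).sum(axis=2))
r = np.sort(d, axis=1)[:, true_n - 1]                 # true 3rd-closest distances

for assumed_n in (1, 2, 3):
    est = assumed_n / (np.pi * np.mean(r**2))         # simple pooled estimate
    print(f"assumed n = {assumed_n}: estimate {est:.4f}  (true density {lam})")
```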

These last facts can be useful, because our main interest is not n or the spatial pattern, but rather the rate (density), and the effects of uncertainties in them are not additive, and can even be compensating, depending on the situation. This serves to greatly constrain the total uncertainty in the rate estimate, even when these critical pieces of input information are lacking. I’m going to try to expound on that in the next post in this utterly enthralling series.

Support for this post has been provided by the Society for Public Education of Not Very Well Recognized or Cared About Issues. Nothing in the post should be construed, or mis-construed, as in any way necessarily reflecting the views, opinions, sentiments, thoughts, conceptual leanings, quasi-conscious daydreaming or water cooler banter of said Society, or really of anyone in particular other than me, and even that is open to debate. You may now return to your regularly scheduled life.

10 thoughts on “Rate process estimation: Poisson vs gamma”

  1. Had me at “serves to greatly constrain the total uncertainty” – because a half-hearted attempt at constraining total uncertainty is not worth much. Might as well cruise down Hwy 61 in a rainstorm with the windows down.

    We soybean guys tend to be denominator fixers. I know, boring. At least my dues for SfPENVWRoCAI are paid up.

    • Clem, I’m pretty sure you increased the readership of this post by an infinite percentage, which is to say, 1/0. So I’m happy, but I like the Hwy 61 suggestion nevertheless and may give it a whirl. And I’ll try to better explain what I was getting at in the next post, though.

      You soybean guys are just flat out nuts, fixing nitrogen when most of us didn’t even know it was broken.

  2. Yeah, fixing N that isn’t broken. Will have to keep that in mind. 🙂

    For what it’s worth there have been a handful of papers published recently looking at endophytes fixing N for grasses like rice or for non-leguminous dicots like canola. The rice paper I’m thinking of is still sitting on top of the desk (Crop Sci 55:1765-1772 (2015)). Kandel et al. … and how can one not like a paper whose title starts with ‘Diazotrophic’??

    • That kind of research is so important. And all of the work involved in making perennial versions of annual crops. I bet we see perennial, N-fixing corn in our lifetimes. And it’s C-4. The ultimate super-crop!

    • Wow, perennial N-fixing corn in our lifetimes… I like your optimism. But unless you’re a whole lot younger than me (or a magic cure for aging comes along) I seriously doubt these transitions will happen while I have a chance to witness them.

      Perennial versions of some annual crops may happen within the time I have left, but I’m skeptical whether they’ll compete with the annual versions commercially (but this is certainly NOT a reason to give up trying). As for N-fixing corn, this current research using endophytes does put the whole idea in a different light. If one had to build a corn system that imitates legumes then that future would be very far off. But with the endophytes there might be something out there within a lifetime.

    • Ha ha, “I like your optimism”. Methinks this is code for “Bouldin, you’re nuts”. Which of course is true.

      Then again, ten years ago we’d have scoffed at the sequencing of the Emmer wheat genome in just one month. But I agree, perenniality–well that’s no easy nut to crack, nor is creating N fixation. But I’m always amazed at what breeders and biotech folks accomplish. Also, forgot to mention that I’m planning on living to, say, 125 or so.

  3. Tangentially off topic (you’ve been warned) – but once I saw this I had to know (1) if you’d seen it yet, and (2) if you think it potentially as valuable as I do.

    Mapping tree density at a global scale, Nature 2015… the url is pretty messy, but simply Googling the title will get you there. [or you might try this: http://www.nature.com/articles/nature14967.epdf I pruned some of the detritus off the line in my browser and the pruned version works for me]…

    Anyway, the paper itself looks somewhat interesting. Of additional interest to me, this is the first time I’ve clicked on an NPG article and had their new “content sharing initiative” pop up. Still not open access, but it is an interesting move on their part. It’s a one-year trial, and I’m sure more intelligent folk than I will weigh in… but I can only hope they like what happens and move toward more open sharing of scientific content.

    3+ trillion trees on our humble little planet is an interesting count. Should be enough there to keep some folks busy for a while 🙂

    • Clem thanks for that link–I was unaware of that paper. Will grab it later and have a read. Definitely interested in what methods they used, especially in terms of making the estimate of the tree loss due to humans. I’m also not familiar with the content sharing thing you mention, but I’m always in favor of open access anything.

  4. Hey, this is my regularly scheduled life (student in a landscape ecology lab)! I’ve used angle-order estimates before (quick-and-dirty veg protocol for a bird project), and had never been aware of its sensitivity to spatial pattern. My current project is using fixed-radius plots to measure spatial patterning of trees (as a point process), and I’m glad of that choice (compared to some other sampling scheme) after having read this post!

    Do you know, has anyone looked at how the DS method is biased by spatial pattern depending on the severity and scale of that pattern (e.g., for different values of Ripley’s K(t))? I imagine that if, say, tree stems are aggregated at a spatial scale greater than your sampling (e.g., random at scales t < 10 m, and your nearest trees are always < 10 m away), you might get away with not accounting for spatial pattern using the DS method? I suppose, though, that you’ll eventually end up at random points located in a patch with trees > 10 m away, and see a large variance in your density estimates. Good food for thought!

    • Hi S, thanks for the interesting comment (the first comment by anyone always gets held for approval).

      Yes I don’t think there’s any question that knowing the exact locations of objects is the best way to go for precise info on spatial pattern. When I included, among the DS advantages, a better understanding of spatial pattern, I had in mind only that situation in which the time/resource costs of sampling cause one to place just a very few (but typically large-ish) plots in their area of interest, trading off extensive knowledge over the whole area for very intensive knowledge at just a few locations (sometimes just one). I know of a number of forest demography studies that follow the latter approach. However, DS is clearly not designed for spatial pattern analysis, generally speaking, but rather for density estimation, so the advantage I have in mind should not be a major consideration when choosing a spatial pattern analysis method.

      Yes there are indeed such studies as you mention. The best one to me (I really like it) is Engeman et al.’s 1994 study: A comparison of plotless density estimators using Monte Carlo simulation (free copy here). I don’t believe they used Ripley’s K to quantify aggregation intensity, but they definitely did test different types of clustering intensity and pattern, and also several non-angle-order distance sampling methods, like variable area transects and some others.

      You are exactly right about getting away with not accounting for spatial pattern if the scale of aggregation is greater than the scale of most point-to-object distances, but only if you use the right estimator, specifically that of Morisita (1957). This was a central point of mine in a paper from a few years ago, Some problems and solutions in density estimation from bearing tree data: a review and synthesis (free copy here). Note that Engeman et al. use Morisita’s estimator to compute density for each of the several angle-order methods they test–many studies have used various biased estimators in the past, especially that of Cottam and Curtis (1956), which uses the mean of the squared distances to trees over all sample points to compute density, an absolute no-no when objects are aggregated.

      And yes, the variance in distances definitely goes up as the intensity of the aggregation increases. You can imagine a highly aggregated situation where all trees occur in clearly definable clumps. Many of your random points are going to be a long ways from the closest tree in that situation, with the occasional very short distance when a point falls within a clump. Since density is an inverse function of squared distance, these long distances drive the density estimate well below the actual value.
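(Added sketch, not from any of the papers mentioned above: one quick way to see that last effect is to build two patterns with the same overall density, one random and one strongly clumped, and push both through a naive estimator based on the mean squared point-to-nearest-tree distance; the clumped case comes out well below truth. The clumping mechanism and all parameter values here are arbitrary.)

```python
# Sketch: same overall density, random vs. clumped, naive mean-squared-distance estimator.
import numpy as np

rng = np.random.default_rng(6)
L, lam = 200.0, 0.05
n_obj = rng.poisson(lam * L * L)

random_pat = rng.uniform(0, L, size=(n_obj, 2))
parents = rng.uniform(0, L, size=(40, 2))               # clumped: offspring scattered around 40 parents
clumped = (parents[rng.integers(0, 40, n_obj)] +
           rng.normal(0, 2.0, size=(n_obj, 2))) % L     # wrap to keep overall density equal

points = rng.uniform(20, L - 20, size=(2000, 2))
for name, pat in [("random", random_pat), ("clumped", clumped)]:
    d = np.sqrt(((points[:, None, :] - pat[None, :, :])**2).sum(axis=2)).min(axis=1)
    naive = 1.0 / (np.pi * np.mean(d**2))               # naive, mean-squared-distance based
    print(f"{name:8s} pattern: naive estimate {naive:.4f}  (true density {lam})")
```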
