Natural selection, genetic fitness and the role of math–part two

I’ve been thinking some more about this issue–the idea that selection should tend to favor those genotypes with the smallest temporal variations in fitness, for a given mean fitness value (above 1.00). It’s taken some time to work through this and get a grip on what’s going on, and some additional points have emerged.

The first point is that although I surely don’t know the entire history, the idea appears to be strictly mathematically derived from modeling–that is, theoretical. At least, that’s how it appears from the several descriptions that I’ve read, including Orr’s, and this one. These all discuss mathematics–geometric and arithmetic means, absolute and relative fitness, etc.–making no mention of any empirical origins.

The reason should be evident from Orr’s experimental description, in which he sets up ultra-simplified conditions in which the several other important factors that can alter genotype frequencies over generations are made unvarying. The point is that in a real-world experimental test you would also have to control for these things, either experimentally or statistically, and that would not be easy. It’s hard to see why anybody would go to such trouble if the theory weren’t there to suggest the possibility in the first place. There is much more to say on the issue of empirical evidence. Given that it’s an accepted idea, and that testing it as the generalization it claims to be is difficult, the theoretical foundation had better be very solid. Well, I can readily conceive of two strictly theoretical reasons why the idea might well be suspect. For time’s sake, I’ll focus on just one of those here.

The underlying basis of the argument is that, if a growth rate (interest rate, absolute fitness, whatever) is perfectly constant over time, the product of the series gives the total change at the final time point, but if it is made non-constant, by varying it around that rate, then the final value–and thus the geometric mean–will decline. The larger the variance around the point, the greater the decline. For example, suppose a 2% increase of quantity A(0) per unit time interval, that is, F = 1.020. Measuring time in generations here, after g = 35 generations, A(35)/A(0) = F^g = 1.020^35 ≈ 2.0; A doubles in 35 generations. The geometric (and arithmetic) mean over the 35 generations is 1.020, because all the per-generation rates are identical. Now cause F to instead vary around 1.02 by setting it as the mean of a normal distribution with some arbitrarily chosen standard deviation, say 0.2. The geometric mean of the series will then drop (on average, asymptotically) to just below 1.0 (~0.9993). Since the geometric mean is what matters, genotype A will then not increase at all–it will instead stay about the same.

# geometric mean fitness when per-generation fitness is normally distributed
# with mean 1.02 and sd 0.2, approximated over a fine grid of quantiles
pstep = 0.00001; probs = seq(pstep, 1-pstep, pstep)
q = qnorm(p=probs, mean=1.02, sd=0.2)
gm = exp(mean(log(q))); gm   # ~0.9993

This is a very informative result. Using and extending it, now imagine an idealized population with two genotypes, A and B, in a temporally unvarying selection environment, with equal starting frequencies, A = B = 0.50. Since the environment doesn’t vary, there is no selection on either; that is, F.A = F.B = 1.0, and they will thus maintain equal relative frequencies over time. Now impose a varying selection environment where sometimes conditions favor survival of A, other times B. We would then repeat the above exercise, except that now the mean of the distribution we construct is 1.000, not 1.020. The resulting geometric mean fitness of each genotype is now 0.9788 (just replace 1.02 with 1.00 in the above code).

So what’s going to happen? Extinction, that’s what. After 35 generations, each will be down to 0.9788^35 = 0.473 of its starting value, on average, and on the way to zero. The generalization is that any population having genotypes of ~ equal arithmetic mean (absolute) fitness, with normally distributed values around that mean, will have all genotypes driven to extinction, at a rate proportional to the magnitude of the variance. If instead one genotype has an arithmetic mean fitness above a threshold value determined by its mean and variance (i.e., high enough that its geometric mean exceeds 1.00), while all others are below it, then the former will be driven to fixation and the latter to extinction. These results are not tenable–this is decidedly not what we see in nature. We instead see lots of genetic variation, including vast amounts maintained over vast expanses of time. I grant that this is a fairly rough and crude test of the idea, but not an unreasonable one. Note that this also points up the potentially serious problem caused by using relative, instead of absolute, fitness, but I won’t get into that now.
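Here is a minimal sketch of the trajectory just described; the seed, the single run, and the replicate count are arbitrary choices of mine, purely for illustration.

set.seed(1)   # arbitrary seed, for reproducibility
g = 35
F.series = rnorm(n=g, mean=1.00, sd=0.2)   # symmetric variation around 1.00
prod(F.series)   # total growth after 35 generations; typically well below 1.0
median(replicate(10000, prod(rnorm(g, mean=1.00, sd=0.2))))   # ~0.47, matching 0.9788^35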

Extinction of course happens in nature all the time, but what we observe in nature is the result of successful selection–populations and species that survived. We know, without question, that environments vary–wildly, in any and all aspects, at all scales, often. And we also know without question that selection certainly can and does sort genotypes by fitness in those environments. Those processes are all operating, and yet we don’t observe a world in which alleles are either eliminated or fixed. The above examples therefore cannot be accurate mathematical descriptions of a surviving species’ variation in fitness over time–something’s wrong.

The “something wrong” is the designation of normally distributed variation, or more exactly, symmetrically distributed variation. To keep a geometric mean from departing from its no-variance value, one must skew the distribution around the mean, such that values above the mean multiplicatively offset those below it–for a mean of 1.00, a value x below the mean pairs with its inverse 1/x above it. That is the only way to create a stable geometric mean while varying the individual values. [EDIT: more accurately, the requirement is on the product of the full series, which must equal the no-variance product; but the values will be skewed around the mean in any case.] Mathematically, the way to do so is to work with the logarithms of the original values: the log of the geometric mean is set as the mean of normally distributed logarithms of the individual values, with whatever size variance one wants. Exponentiating the sum of the logarithms then recovers the product of the fitness series.
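One quick way to see the construction, sketched with the same quantile-grid approach as the code above: place the normal distribution on the log scale (making fitness itself lognormal), and the geometric mean stays pinned at its no-variance value regardless of the size of the variance.

pstep = 0.00001; probs = seq(pstep, 1-pstep, pstep)
logq = qnorm(p=probs, mean=log(1.00), sd=0.2)   # normal on the log scale
gm = exp(mean(logq)); gm   # exactly 1.0, for any choice of sd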

Hopefully, what I’m driving at is emerging. If the variance structure must obey this mathematical necessity to preserve a genotype’s geometric mean fitness at 1.00, while still allowing the individual series values to vary…then why should we not expect the same to hold when the geometric mean fitness is not equal to 1.00? I would argue that that’s exactly what we should expect, and that Gillespie’s original arguments–and Orr’s, and others’ summaries thereof–are not particularly defensible theoretical expectations of what is likely happening in nature. Specifically, the idea that the variance in fitness around an arithmetic mean should necessarily arise from symmetrically (normally) distributed values is questionable.

As alluded to above, there is (at least) a second theoretical argument as well, but I don’t have time to get into it now (nor for this one for that matter). Suffice it to say that it involves simultaneous temporal changes in total population size and selective environments. All this without even broaching the entire hornet’s nest of empirically testing the idea, a topic reviewed five years ago by Simons. For starters, it’s not clear to me just how conservative “bet hedging” could ever be distinguished from the effects of phenotypic plasticity.

References

Simons, A.M. (2011) Modes of response to environmental change and the elusive empirical evidence for bet hedging. Proceedings of the Royal Society B. doi:10.1098/rspb.2011.0176

Other references are linked to in the previous post.

On natural selection, genetic fitness and the role of math

I really am not quite sure what to make of this one.

Last week at the blog Dynamic Ecology it was argued that natural selection behaves like a “risk-averse” money investor. That is, assuming that fitness varies over time (due to, e.g., changing environmental variables or other selective factors), natural selection favors situations in which the mean fitness is maximized while the variance is minimized. The idea is explained in this short paper by Orr (2007), whose goal was to explain previous findings (Gillespie, 1973) intuitively. This presumes that knowledge of investor behavior is commonplace, but for my money, an examination of the math details and assumptions is what’s really needed.

This conclusion seems entirely problematic to me.


Outcome probabilities

Continuing from the previous post, where I discussed earth’s recent surface temperature increase hiatus/slowdown/backoff/vacation.

Well not really–I discussed instead the very closely related topic of enumerating the outcomes of a given probabilistic process. And actually not so much a discussion as a monologue. But couldn’t somebody please address that other issue, it’s just been badly neglected… 🙂

Anyway, enumerating the possible allocations of n objects into q groups is rarely an end in itself; the probability, p, of each outcome is usually what we want. This is a multinomial probability (MP) problem when q > 2, and a binomial (BP) problem when q = 2; in both we know a priori the per-trial p values and want to determine the probabilities of the various possible outcomes over some number, n, of such trials. In the given example, the per-trial probabilities of group membership are all equal (1/6) and we want to know the probability of each possible result from n = 15 trials.
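In R such probabilities come straight from dmultinom; the particular outcome below is an arbitrary choice of mine, just for illustration.

# probability of one specific allocation of n = 15 trials into q = 6
# equally probable groups, here the outcome [3,3,3,2,2,2]
dmultinom(x = c(3,3,3,2,2,2), prob = rep(1/6, 6))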

One has to be careful in defining exactly what “trials” and “sample sizes” constitute in these models though, because the number of trials can be nested. We could, for example, conduct n2 = 100 higher-level trials, in each of which the results from n1 = 2 lower-level trials are combined. This is best exemplified by Hardy-Weinberg analysis in population genetics; a lower-level trial consists of randomly choosing n1 = 2 alleles from the population and combining them into a genotype. This is repeated n2 times and the expected genotype frequencies, under certain assumptions of population behavior, are then compared to the observed, to test whether those assumptions are likely met or not. If only two possible outcomes of a single trial (alleles in this case) exist in the population, the model is binomial, and if more than two, multinomial.
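A quick sketch of that nested structure; the allele frequency and trial counts are arbitrary assumptions of mine.

p = 0.6    # assumed frequency of allele "A" in the population
n2 = 100   # higher-level trials (genotypes formed)
alleles = replicate(n2, sample(c("A","a"), size=2, replace=TRUE, prob=c(p, 1-p)))
genotypes = apply(alleles, 2, function(x) paste(sort(x), collapse=""))
table(genotypes) / n2   # observed frequencies, vs expected p^2, 2*p*(1-p), (1-p)^2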

There are two types of MP/BP models, corresponding to whether or not group identity matters. When it does, the BP/MP coefficients determine the expected probabilities of each specific outcome. For n objects, q groups and group sizes a through f, these equal the number of permutations, as given by n! / (a!b!c!d!e!f!), where “!” is the factorial operator and 0! = 1 by definition. This formula is extremely useful; without it we’d have to enumerate all permutations of a given BP/MP process, and that would choke us quickly, as such values become astronomical in a hurry: with n = 15 and q = 6, we already have q^n = 6^15 ≈ 470 billion possible permutations.
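As a sketch, the formula is a one-liner; the group sizes below are one arbitrary allocation of n = 15 objects into q = 6 groups.

n = 15
sizes = c(4,3,3,2,2,1)   # group sizes a through f; must sum to n
factorial(n) / prod(factorial(sizes))   # permutations giving this specific outcome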

When group identity doesn’t matter, only the numerical distribution matters; this decreases the total number of coefficients but increases the value of each of them. For example, in locating the n = 50 closest trees to a random sampling point, one may want to know only the expected numerical distribution across the q angle sectors around the point. In that case, the allocation [2,1,1,0] into groups [a,b,c,d] would be identical to [0,1,1,2] and to 10 others, and these thus have to be aggregated. The number of aggregations is given by the number of permutations of the observed group sizes, which in turn depends on their variation. When all sizes differ, e.g. [0,1,2,3,4,5] for n = 15, the number of equivalent outcomes is maximized, equaling q! (in this case, 720). When some but not all of the group sizes are identical it’s more complex, as determined by products of factorials and permutations. When the number of identical group sizes is maximized, the number of equivalent outcomes is minimized, always at either q or 1. In this case there are q = 6 variations of [15,0,0,0,0,0].
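A small sketch of that counting rule (the helper function is my own, nothing standard): the number of distinct orderings of a group-size vector equals q! divided by the product of the factorials of the multiplicities of any repeated sizes.

n.orderings = function(sizes) {
  reps = as.vector(table(sizes))   # multiplicity of each distinct group size
  factorial(length(sizes)) / prod(factorial(reps))
}
n.orderings(c(0,1,2,3,4,5))    # all sizes differ: 6! = 720
n.orderings(c(15,0,0,0,0,0))   # identical sizes maximized: q = 6
n.orderings(c(2,1,1,0))        # the [2,1,1,0] example: 12 equivalent outcomes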

To get the desired probability density function, the raw MP/BP coefficients, obtained by either method, are standardized to sum to 1.0.
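The binomial case makes for a compact sketch of that standardization (equal per-trial probabilities assumed, as in the examples above):

coefs = choose(15, 0:15)   # raw BP coefficients for n = 15 trials, q = 2 groups
coefs / sum(coefs)         # standardized to sum to 1.0; equals dbinom(0:15, 15, 0.5)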

Next time I’m going to discuss a general solution to the problem of estimating the otherwise unknown relationships between pairs of objects involved in a rate process, such as density (the number of objects per unit area or time). These can be deduced analytically using multinomial and gamma probability models in conjunction.

It will make you call your neighbors over for a party and group discussion. If it doesn’t, you get a full refund.

Traveling scientists, traveling birds, traveling trees

Here are links to some interesting-looking articles I heard about today. Maybe Twitter is useful after all.

1. Russell Garwood argues at Nature that uprooting researchers can drive them out of science.

2. Hung et al. have a new paper in PNAS arguing that the extinction of the passenger pigeon (Ectopistes migratorius) was due not only to extreme over-hunting but also, possibly, to population fluctuations inherent in the species, driven primarily by acorn supply. But as is so common now, the article title (“Drastic population fluctuations explain the rapid extinction of the passenger pigeon“) does not match the overall message of the article. The nosedive to extinction was surely a “drastic population fluctuation”, but market hunting was clearly an enormous factor in it, even if they have indeed found good evidence for an effect of natural population variation.

When I get the paper I’ll read their full argument, but I note here that it appears to be based on historic population fluctuations inferred from the genomic analysis of just four museum specimens, which is surely a red flag. I note also that although the PNAS article page says that the protein sequences analyzed have been deposited at NCBI, the link given returns a message saying “The requested page does not exist“. I find mainly mitochondrial nucleotide sequences for passenger pigeons there, one of which has been pulled by the original contributors. The supplemental material is available here.

GrrlScientist has an article on the paper at The Guardian, which is how I heard about it, and which includes a terrific watercolor of Ectopistes.

Update:
3. Dan Kahan, who I find to be a perceptive, non-extreme sort of fellow, has a three part series (starting here) at Cultural Cognition on just what a consensus in science is really all about, and how it relates to what he terms internal and external validity (which roughly correspond to verification and validation in modeling). I haven’t read it yet, but it looks like he’s put a lot of thought into the issue, more than +/- anyone, so I surely will.

4. Lastly, there’s this interesting-looking study regarding very long-distance dispersal of an Acacia species between the Hawaiian and Reunion Islands. But not by floating–the seeds won’t germinate after exposure to salt water–so it had to be via some other route, most likely avian.

Aristotle on natural selection

As the teeth, for example, grow by necessity, the front ones sharp, adapted for dividing, and the grinders flat, and serviceable for masticating the food; since they were not made for the sake of this, but it was the result of accident. And in like manner as to the other parts in which there appears to exist an adaptation to an end. Wheresoever, therefore, all things together (that is, all the parts of one whole) happened like as if they were made for the sake of something, these were preserved, having been appropriately constituted by an internal spontaneity; and whatever things were not thus constituted, perished, and still perish.

Aristotle, Physicae Auscultationes, as quoted in: An Historical Sketch of the Progress of Opinion on The Origin of Species; in: Darwin, C. (1909) The Origin of Species, Collier Press.

Dobzhansky to Mayr, 1935

Dear Dr. Mayr:
Many thanks for your kind letter of Nov. 7th, which is so highly flattering to me, and to which I certainly want to reply.

The need for a reconciliation of the views of taxonomists and geneticists I feel very keenly, but it seems to me that all what is to be reconciled are just the viewpoints, since I do not perceive any contradictions between the facts secured in the respective fields. Of course, this is a big β€œjust”. So far, geneticists appear to think that they need not pay any attention to what taxonomists are doing, and vice versa. To my mind this is the root of the trouble. Probably no less than 75% of geneticists still believe that there is nothing in particular to be gained from studies on the races of wild animals as compared with races in bottles. You and myself will probably have no disagreement as to the absurdity of this view.

Sincerely yours, Th. Dobzhansky.

From: Haffer, J. 2007. Ornithology, Evolution, and Philosophy; The Life and Science of Ernst Mayr, 1904–2005, (p. 187). Springer.

Ernst Mayr in 1960 at Harvard. Image from the book.

Hardy-Weinberg genetic equilibrium and species composition of the American pre-settlement forest landscape

This post is about how binomial probability models can, and cannot, be applied for inference in a couple of very unrelated biological contexts. The issue has (once again) made popular media headlines recently, been the focus of talk shows, etc., and so I thought it would be a good time to join in the discussions. We should, after all, always focus our attention wherever other large masses of people have focused theirs, particularly on the internet. No need to depart from the herd.

Binomial models give the coefficients/probabilities for all possible outcomes when repeated trials are performed of an event having two possible outcomes that occur with known probabilities. The classic example is flipping a coin: each flip has two possible outcomes, heads or tails, with h = t = 0.5, and if you flip it, say, twice (two trials), you get 1:2:1 as the binomial coefficients for the three possible outcomes of (1) hh = two heads, (2) ht = one head and one tail, or (3) tt = two tails, giving corresponding probabilities of {hh, ht, tt} = {0.25, 0.50, 0.25}. These probabilities are given by the three terms of (h + t)^2, where the exponent 2 is the number of trials. The number of possible outcomes after all trials is always one greater than the number of trials, the order of the outcomes being irrelevant. Simple and easy to understand. The direct extension of this concept is the multinomial model, in which more than two possible outcomes for each trial exist; the concept is identical, there are just more total probabilities to compute. Throwing a pair of dice would be a classic example.
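As a quick sanity check, base R reproduces these numbers (a sketch; any stats package would do):

dbinom(0:2, size = 2, prob = 0.5)   # P(0, 1, 2 heads) = 0.25, 0.50, 0.25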

The most well-known application of binomial probability in biology is probably Hardy-Weinberg equilibrium (HWeq) analysis in population genetics, due to the fact that chromosome segregation (in diploids) always gives a dichotomous result, each chromosome of each pair having an equal probability of occurrence in the haploid gametes. The binomial coefficients then apply to the expected gamete combinations (i.e. genotypes) in the diploid offspring, under conditions of random mating, no selection acting on the gene (and on closely linked genes), and no migration in or out of the defined population.
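A minimal sketch of those expectations, for a two-allele locus with an assumed allele frequency of p = 0.3 (any value illustrates the point):

p = 0.3; q = 1 - p                  # allele frequencies in the population
c(AA = p^2, Aa = 2*p*q, aa = q^2)   # expected genotype frequencies: the terms of (p+q)^2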


Phenotypic plasticity and climate adaptation; ecology vs natural history

For those interested in the potentially very important issue of biological adaptation to climate change, you will definitely want to check out the latest issue of Evolutionary Applications, a special issue addressing climate change, adaptation and phenotypic plasticity, all articles open. I’ve not yet been able to read any of the articles, but it looks really good at first glance and I’m certain I will learn a lot from it.

That second phrase–ecology vs natural history–is the topic of Jeremy Fox’s latest post at Dynamic Ecology, and he’s outdone even himself this time; go see, once again, how good blog articles and their discussions can be when the effort is made. I wish I had time to respond with anything more than the couple of sentences I stated there, but I do not–whatever extra time I have is devoted to just reading (including the comments) and thinking about it. And those discussions over there give you a lot to think about.

Open discussion

Matt Skaggs had the audacity to ask, and follow up on, a paleobotany/evolution question involving the California flora, which I really should know more about than I do, instead of talking about stupid stuff like football. You can add to that discussion of course (copied below), one that ranges taxonomically from cypresses to sequoias to serpentine-tolerant mustards, and conceptually from Darwin to Wright to Goldschmidt, but I figured I’d better make a post allowing for questions/discussions on miscellaneous topics of interest, for putting up links to interesting articles, and so forth. The abrupt transition from bluegrass song lyrics to the evolutionary origins of serpentine endemics in California is well known to mess with people’s senses of flow and thus probably should be avoided. [Editor’s note: the term “bluegrass”, as used above, does not refer to Poa pratensis except in a very historically indirect way–apologies for any taxonomic confusion.]

In other thrilling news, I messed around with some other WordPress “Theme Options” to see if I could get the content and comments to span the screen better, but had no luck and figured I’d better leave well enough alone before I broke something. I was able to force replies to comments to nest to a max of one indentation level to aid the cause however.

Below is copied the discussion by Matt and me, for your literary pleasure. Note that this will likely bring a barrage of comments, so try to get yours in early.


Reverend Bayes takes Sir Ronald to the mat! Wait, hold on…is that Father Mendel with the take-down?!

While trying to wrap my limited head around apparently unlimited Bayesian statistical practice, I was pointed by Steve Walker to this article by Andrew Gelman and David Weakliem. The authors critique a study (published in the Journal of Theoretical Biology*) which claimed that highly “attractive” (physically) people have a skewed gender ratio in their children, to the tune of somewhere between 1.05:1 and 1.26:1 girls:boys, depending on how you compute the ratio, based on a sample of about 3000 couples.

Well that’s eye-catching, given that we know that chromosomes in diploids, including the X and Y (gender) chromosomes in humans, typically segregate 1:1 during meiosis. We also know that if you take any large sample of humans, you will get very close to a 1:1 female:male ratio of offspring. The results were interesting enough for Psychology Today to publicize the study, for whatever reason. I mean after all, it’s in J. Theoretical Biology, so it must be valid, presumably with a solid “theoretical” basis, right?

Negative, as Gelman and Weakliem explain.
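For a rough sense of the baseline sampling noise involved (a sketch that takes the ~3000 figure above at face value; the study’s actual design differs), here is the range of girl:boy ratios that chance alone produces under true 1:1 segregation:

n = 3000
girls = qbinom(c(0.025, 0.975), size = n, prob = 0.5)   # central 95% of girl counts
girls / (n - girls)   # girl:boy ratios consistent with 1:1, roughly 0.93 to 1.07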