Rough and Ready

In the fall of 1849, the “Rough and Ready Company” of emigrants, under Captain Townsend, composed of some dozen men, from Shellsburg, Wisconsin, arrived by the Truckee route at a point on Deer Creek near the mouth of Slate Creek; they mined successfully there, a few weeks in the bed of the creek; one of their number went out to kill some game, deer and grizzly being plentiful, and in quenching his thirst at the clear stream of the ravine below Randolph Flat, discovered a piece of gold on the naked bed-rock. Consequent prospecting by the company satisfied them that the new found diggings were rich, and removing their camp, they prepared winter quarters by building two log cabins on the point of the hill east from and overlooking the present town of Rough and Ready. Two of their number struck out through the woods “on a bee line” for Sacramento, to procure provisions, and thus made the first wagon tracks on what afterward became the Telegraph road. From the name of this company, the settlement and town afterward derived its designation…


Behind the face of need

A man conceived a moment’s answers to the dream
Staying the flowers, daily sensing all the themes
As a foundation left to create the spiral aim
All movement regained and regarded both the same
All complete in the sight of seeds of life with you

Changed only for a sight, the sound, the space, agreed
Between the picture of time, behind the face of need
Coming quickly to terms of all expression laid
Emotion revealed is the ocean maid
All complete in the sight of seeds of life with you

 

[image: Close to the Edge inner artwork]

Sad preacher nailed upon the color-door of time
Insane teacher be there, reminded of the rhyme
There’ll be no mutant enemy we shall certify
Political ends, as sad remains, will die
Reach out as forward tastes begin to enter you

I listened hard but could not see
Life tempo change out- and inside me
The preacher trained in all to lose his name
The teacher travels, asking to be shown the same
In the end we’ll agree, we’ll accept, we’ll immortalize
The truth of the man maturing in his eyes
All complete in the sight of seeds of life with you

And you and I climb, crossing the shapes of the morning
And you and I reach over the sun for the river
And you and I climb clearer towards the movement
And you and I crawl over valleys of endless seas

And You And I, Jon Anderson, Yes

[image: Close to the Edge album cover]

When it happens to you

Well she was old enough, to know better
And she was strong enough, to be true
And she was hard enough, to know whether
He was smart enough, to know what to do

And you can’t resist it
When it happens to you
No you can’t resist it
When it happens to you

And you can tell your stories
And you can swear it’s true
But you can save your lies
For some other fool

And you can’t resist it
When it happens to you
No you can’t resist it
When it happens to you

You Can’t Resist It, Lyle Lovett (with Leo Kottke)

And if that doesn’t do it for you, this should:

Find it on your own

Say goodbye, you know it’s true
I know you’re leavin’ me–I’m leavin’ too
You won’t forget me, or the sound of my name
Please believe, I feel the same

It seems so empty now–you’ve closed the door
Ain’t it hard to believe you ever lived this way before?
All that nothin’… causes all that pain
Please believe, I feel the same

Broken soul, the heart it’s breakin’
Can’t make it whole ’til you know what’s been taken
All those pieces–find them on your own
All those pieces–find them on your own

I Feel The Same, Chris Smither

I am the ride

I awoke and someone spoke–they asked me in a whisper
If all my dreams and visions had been answered
I don’t know what to say–I never even pray
I just feel the pulse of universal dancers
They’ll waltz me till I die and never tell me why–
I’ve never stopped to ask them where we’re going
But the holy, the profane, they’re all helplessly insane
Wishful, hopeful, never really knowing

They asked if I believe, and do the angels really breathe?
Or is it all a comforting invention?
It’s just like gravity I said–it’s not a product of my head
It doesn’t speak but nonetheless commands attention
I don’t care what it means, or who decorates the scenes
The problem is more with my sense of pride
It keeps me thinking me, instead of what it means to be
But I’m not a passenger, I am the ride
I’m not a passenger, I am the ride

I Am The Ride, Chris Smither

Nobody knows

Nobody knows about what’s going on
With the wood and the steel, the flesh and the bone
The river keeps flowing and the grass still grows
And the spirit keeps going, but nobody knows

Poets they come and the poets they go
Politicians and preachers–they all claim to know
Words that are written and the melodies played
As the years turn their pages, they all start to fade

The ocean still moves with the moon in the sky
The grass still grows on the hillside
Got to believe in believin’
Got to believe in a dream
Freedom is ever deceiving
Never turning out to be what it seems

It’s amazing how fast our lives go by
Like a flash of lightning, like the blink of an eye
We all fall in love as we fall into life
We look for the truth on the edge of the night
Heavens turn ’round and the river still flows
How the spirit keeps going, nobody knows

Nobody Knows, Gregg Allman, Allman Brothers
(Chords here)

What’s his name again?

Yeah, it happens when the money comes:
The wild and poor get pushed aside
It happens when the money comes

Buyers come from out of state
They raise the rent and you can’t buy
Buyers come from out of state and raise the rent

“Buy low, sell high, you get rich!”
You still die
Money talks and people jump
Ask “How high?”
Low-life Donald…what’s-his-name?

And who cares?
I don’t want to know what his wife
Does or doesn’t wear
It’s a shame the people at work
Want to hear about this kind of jerk

I walk where the bottles break
And the blacktop comes on back for more
I walk where the bottles break
And the blacktop comes on back

I live where the neighbors yell
And their music comes up through the floor
I live where the neighbors yell
And their music wakes me up

Where the Bottles Break, John Gorka, 1991

SABR-toothed

Well they’ve been running around on the flat expanses of the early Holocene lake bed with impressively large machines, whacking down and gathering the soybeans and corn. This puts dirt clods on the roads that cause one on a road bike at dusk to weave and swear, but I digress. The Farmer’s Almanac says that it must therefore be about World Series time, which in turn is just about guaranteed to initiate various comments regarding the role of luck, good or bad, in deciding important baseball game outcomes.

There are several important things to be blurted out on this important topic and, with the Series at its climax and the leaves a-fallin’, now’s the time, the time is now.

It was Bill James, the baseball “sabermetric” grandpa and chief guru, who came up with the basic idea some time ago, though not, I think, with the questionable terminology applied to it, which I believe came later from certain disciples who knelt at his feet.

The basic idea starts off well enough but from there goes into a kind of low-key downhill slide, not unlike the truck that you didn’t bother setting the park brake for because you thought the street grade was flat but found out otherwise a few feet down the sidewalk. At which point you also discover that the bumper height of said truck does not necessarily match that of a Mercedes.

The concept applies not just to baseball but to anything involving integer scores. The basic idea is as follows (see here). Your team plays 162 baseball games, 25 soccer matches or whatever, and of course you keep score of each. You then compute the fraction S^x/(S^x + A^x), where, using the baseball case, S = runs scored, A = runs allowed and x = an exponent that varies depending on the data used (i.e. the teams and years used). You do this for each team in the league and also compute each team’s winning percentage (WP = W/G, where W = number of wins and G = games played in the season(s)). A nonlinear regression/optimization returns the optimal value of x, given the data. The resulting fraction is known as the “pythagorean expectation” of winning percentage, and it claims to tell us how many games a given team “should” have won and lost over that span, given its total runs scored and allowed.
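As a concrete illustration (mine, not from the original description), here is a minimal R sketch of that calculation, with invented season totals for three hypothetical teams; the exponent is fit by simple least squares:

pythag = function(x, S, A) S^x / (S^x + A^x)
S = c(800, 740, 689)          # hypothetical runs scored, three teams
A = c(700, 735, 750)          # hypothetical runs allowed
W = c(92, 81, 74); G = 162    # wins and games played
## find the exponent minimizing squared error against winning percentage
sse = function(x) sum((W/G - pythag(x, S, A))^2)
x.opt = optimize(sse, interval = c(1, 3))$minimum
x.opt                         # for data like these, lands somewhere near 2
pythag(x.opt, S, A)           # the "pythagorean expectations"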

Note first that the value of x depends on the data used: the relationship is entirely empirically derived, and exponents ranging from (at least) 1.8 to 2.0 have resulted. There is no statistical theory here whatsoever, and in no description of “the pythag” have I ever seen any mention of such. This is a shame because (1) there can and should be, and (2) it seems likely that most “sabermetricians” don’t have any idea as to how or why. Maybe not all, but I haven’t seen any discuss the matter. Specifically, this is a classic case for application of Poisson-derived expectations.

However, the lack of theory is one point, but not really the main one. More at issue are the highly questionable interpretations of the causes of observed deviations from pythag expectations–the spot where the rolling truck smashes out the grille and lights of the Mercedes.

You should base an analysis like this on the Poisson distribution for at least two very strong reasons. First, interpretations of the pythag always involve random chance. That is, the underlying view is that departures of a given team’s won-loss record from pythag expectation are always attributed to the action of randomness–random chance. Great: if you want to go down that road, that’s exactly what the Poisson distribution is designed to address. Second, it will give you additional information regarding the role of chance that you cannot get from “the pythag”.

Indeed, the Poisson gives the expected distribution of integer-valued data around a known mean, under the assumption that random deviations from that mean are solely the result of sampling error, which in turn results from complete randomness of the objects (Complete Spatial Randomness, or CSR, in the spatial context), relative to the mean value and the size of the sampling frame. In our context, the sampling frame is a single game and the objects of analysis are the runs scored, and allowed, in each game. The point is that the Poisson is inherently designed to test exactly what the SABR-toothers want to test. But they don’t use it–they instead opt for the fully ad hoc pythag estimator (or slight variations thereof). Always.

So, you’ve got a team’s total runs scored and allowed over its season. You divide each by the number of games played to give you the mean of each. That’s all you need–the Poisson is a single-parameter distribution, the variance being a function of the mean. Now you use that computer in front of you for what it’s really good at–doing a whole bunch of calculations really fast–to simply draw from the runs scored, and runs allowed, distributions, randomly, say 100,000 times or whatever, to estimate your team’s expected won-loss record under a fully random score-generating process. But you can also do more–you can test whether either the runs scored or allowed distribution fits the Poisson very well, using a chi-square goodness-of-fit test. And that’s important because it tells you, basically, whether or not they are homogeneous random processes–processes in which the data-generating process is unchanging through the season. In sports terms: it tells you the degree to which the team’s performance over the year, offensive and defensive, came from the same basic conditions (i.e. unchanging team performance quality/ability).
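Here’s a minimal R sketch of both steps. The per-game data are invented stand-ins for a real team’s game log; ties are split evenly, a common convention since baseball games can’t end tied:

set.seed(42)
rs = rpois(162, lambda = 4.6)    # hypothetical per-game runs scored
ra = rpois(162, lambda = 4.1)    # hypothetical per-game runs allowed
## simulate 100,000 games from Poissons with the observed means
n = 100000
sim.rs = rpois(n, mean(rs)); sim.ra = rpois(n, mean(ra))
p.win = mean(sim.rs > sim.ra) + 0.5 * mean(sim.rs == sim.ra)
p.win * 162                      # expected wins under pure randomness
## chi-square goodness of fit of runs scored to the Poisson
obs = table(factor(rs, levels = 0:max(rs)))
p.exp = dpois(0:max(rs), lambda = mean(rs))
p.exp[length(p.exp)] = 1 - sum(p.exp[-length(p.exp)])  # lump the upper tail
chisq.test(as.vector(obs), p = p.exp)  # small expected counts may warrant pooling bins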

The biggest issue remains, however–interpretation. I don’t know how it all got started, but somewhere, somebody decided that a positive departure from “the pythag” (more wins than expected) equated to “good luck” and negative departures to “bad luck”. Luck being the operative word here. Actually, I do know the origin–it’s a straightforward conclusion from attributing all deviations from expectation to “chance”. The problem is that many of these deviations are not in fact due to chance, and if you analyze the data using the Poisson as described above, you will have evidence of when that is, and is not, the case.

For example, a team that wins more close games than it “should”, games won by say just one or two runs, while getting badly smoked in a small subset of other games, will appear to benefit from “good luck”, according to the pythag approach. But using the Poisson approach, you can identify whether or not a team’s basic quality likely changed at various times during the season. Furthermore, you can also examine whether the joint distribution of events (runs scored, runs allowed) follows random expectation, given their individual distributions; a sketch of that check follows below. If it does not, then you know that some non-random process is going on. For example, that team that wins (or loses) more than its expected share of close games most likely has some ability to win (or lose) close games–something about the way the team plays explains it, not random chance. There are many particular explanations, in terms of team skill and strategy, that can explain such results, and more specific data on a team’s players’ performance can lend evidence to the various possibilities.
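Sketching that close-game check in R (again with invented data; a real analysis would substitute the team’s actual game log for rs and ra):

set.seed(42)
rs = rpois(162, lambda = 4.6); ra = rpois(162, lambda = 4.1)
n = 100000
sim.rs = rpois(n, mean(rs)); sim.ra = rpois(n, mean(ra))
## share of games won by exactly one or two runs: random expectation vs observed
p.close = mean((sim.rs - sim.ra) %in% 1:2)
obs.close = mean((rs - ra) %in% 1:2)
c(expected = p.close, observed = obs.close)
## (here rs and ra are Poisson by construction, so the two roughly agree;
## a large observed excess in real data would suggest skill, not luck)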

So, the whole “luck” explanation that certain elements of the sabermetric crowd are quite fond of, and have accepted as the Gospel of James, may be quite suspect at best, or outright wrong at worst. I should add however that if the Indians win the Series, it’s skill all the way, while if the Cubs win it’ll most likely be due to luck.

The same thing that I want today…

Well I’m sailing away my own true love
I’m sailing away in the morning
Is there somethin’ I can send you from across the sea
From the place that I’ll be landin’

No there’s nothin’ you can send me, my own true love
There’s nothin’ I’m wishing to be ownin’
Just carry yourself back to me unspoiled
From across that lonesome ocean

Oh but I just thought you might like somethin’ fine
Made of silver, or of golden
From the mountains of Madrid
Or from the coast of Barcelona

Well if I had the stars of the darkest night
And the diamonds from the deepest ocean
I’d forsake them all for your sweet kiss
For it’s all I’m wishin’ to be ownin’

And I might be gone a long, long time
And it’s only that I’m askin’
Is there somethin’ I can send you to remember me by
To make your time more easy a-passin’

How can, how can you ask me again?
It only a-brings me sorrow
The same thing that I want today
I’ll want again tomorrow

Oh and I got a letter on a lonesome day
It was from her ship a-sailin’
Sayin’ I don’t know when I’ll be coming back again–
It depends on how I’m feelin’

Well if you my love must think that a-way
I’m sure your mind is a-roamin’
I’m sure your thoughts are not ’bout me
But with the country where you’re goin’

So take heed, take heed of the western wind
Take heed of stormy weather
And yes, there’s something you can send back to me:
Spanish boots of Spanish leather

Boots of Spanish Leather, Bob Dylan

Natural selection, genetic fitness and the role of math–part two

I’ve been thinking some more about this issue–the idea that selection should tend to favor those genotypes with the smallest temporal variations in fitness, for a given mean fitness value (above 1.00). It’s taken some time to work through this and get a grip on what’s going on, and some additional points have emerged.

The first point is that although I surely don’t know the entire history, the idea appears to be strictly mathematically derived, from modeling: theoretical. At least, that’s how it appears from the several descriptions that I’ve read, including Orr’s, and this one. These all discuss mathematics–geometric and arithmetic means, absolute and relative fitness, etc., making no mention of any empirical origins.

The reason should be evident from Orr’s experimental description, in which he sets up ultra-simplified conditions in which the several other important factors that can alter genotype frequencies over generations are made unvarying. The point is that in a real-world experimental test you would also have to control for these things, either experimentally or statistically, and that would not be easy. It’s hard to see why anybody would go to such trouble if the theory weren’t there to suggest the possibility in the first place. There is much more to say on the issue of empirical evidence. Given that it’s an accepted idea, and that testing it as the generalization it claims to be is difficult, the theoretical foundation had better be very solid. Well, I can readily conceive of two strictly theoretical reasons why the idea might well be suspect. For time’s sake, I’ll focus on just one of those here.

The underlying basis of the argument is that if a growth rate (interest rate, absolute fitness, whatever) is perfectly constant over time, the product of the series gives the total change at the final time point, but if it is made non-constant by varying it around that rate, then the final value–and thus the geometric mean–will decline. The larger the variance around the rate, the greater the decline. For example, suppose a 2% increase of quantity A per generation, that is, F = 1.020. Measuring time in generations, after g = 35 generations, A(35)/A(0) = F^g = 1.020^35 = 2.0; A doubles in 35 generations. The geometric (and arithmetic) mean over the 35 generations is 1.020, because all the per-generation rates are identical. Now cause F to instead vary around 1.02 by setting it as the mean of a normal distribution with some arbitrarily chosen standard deviation, say 0.2. The geometric mean of the series will then drop (on average, asymptotically) to just below 1.0 (~0.9993), as the following computation shows. Since the geometric mean is what matters, genotype A will then not increase at all–it will instead stay about the same.

## geometric mean of normally distributed fitness values (mean 1.02, sd 0.2),
## approximated over a fine grid of evenly spaced quantiles
pstep = 0.00001; probs = seq(pstep, 1 - pstep, pstep)
q = qnorm(p = probs, mean = 1.02, sd = 0.2)  # the fitness values
gm = exp(mean(log(q))); gm                   # geometric mean, ~0.9993

This is a very informative result. Using and extending it, now imagine an idealized population with two genotypes, A and B, in a temporally unvarying selection environment, with equal starting frequencies, A = B = 0.50. Since the environment doesn’t vary, there is no selection on either, that is F.A = F.B = 1.0 and they will thus maintain equal relative frequencies over time. Now impose a varying selection environment where sometimes conditions favor survival of A, other times B. We would then repeat the above exercise, except that now the mean of the distribution we construct is 1.000, not 1.020. The resulting geometric mean fitness of each genotype is now 0.9788 (just replace 1.02 with 1.00 in the above code).

So what’s going to happen? Extinction, that’s what. After 35 generations, each will be down to 0.9788^35 = 0.473 of its starting value, on average, and on the way to zero. The generalization is that any population having genotypes of ~equal arithmetic mean (absolute) fitness, and normally distributed values around that mean, will have all genotypes driven to extinction, and at a rate proportional to the magnitude of the variance. If instead one genotype has an arithmetic mean fitness above a threshold value determined by its mean and variance, while all others are below it, then the former will be driven to fixation and the latter to extinction. These results are not tenable–this is decidedly not what we see in nature. We instead see lots of genetic variation, including vast amounts maintained over vast expanses of time. I grant that this is a fairly rough and crude test of the idea, but not an unreasonable one. Note that this also points up the potentially serious problem caused by using relative, instead of absolute, fitness, but I won’t get into that now.
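Spelling out the substitution mentioned above (the same quantile computation with mean fitness 1.00, plus the 35-generation decline):

pstep = 0.00001; probs = seq(pstep, 1 - pstep, pstep)
q = qnorm(p = probs, mean = 1.00, sd = 0.2)
gm = exp(mean(log(q))); gm  # ~0.9788
gm^35                       # ~0.473 of starting value after 35 generations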

Extinction of course happens in nature all the time, but what we observe in nature is the result of successful selection–populations and species that survived. We know, without question, that environments vary–wildly, any and all aspects thereof, at all scales, often. And we also know without question that selection certainly can and does filter out the most fit genotypes in those environments. Those processes are all operating but we don’t observe a world in which alleles are either eliminated or fixed. The above examples cannot be accurate mathematical descriptions of a surviving species’ variation in fitness over time–something’s wrong.

The “something wrong” is the designation of normally distributed variation, or more exactly, symmetrically distributed variation. To keep a geometric mean from departing from its no-variance value, one must skew the distribution around the mean value, such that values above it (x) are matched by values (mean/x) below it–that is the only way to create a stable geometric mean while varying the individual values. [EDIT: more accurately, the mean must equal the product of the values below the mean, multiplied by the mean divided by the product of the values above the mean, but the values will be skewed in any case.] Mathematically, the way to do so is to work with the logarithms of the original values–the log of the geometric mean is designated as the mean of normally distributed logarithms of the individual values, with whatever size variance one wants. Exponentiation of the sum of the logarithms will equal the product of the fitness series.
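A short sketch of that construction (mine, not from the original post): put the normal distribution on the log scale, and the geometric mean stays put while the fitness values themselves become skewed:

pstep = 0.00001; probs = seq(pstep, 1 - pstep, pstep)
logq = qnorm(p = probs, mean = log(1.02), sd = 0.2)  # normal on the log scale
q = exp(logq)        # skewed (lognormal) fitness values
exp(mean(log(q)))    # geometric mean: exactly 1.02, regardless of sd
mean(q)              # arithmetic mean exceeds 1.02, due to the skew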

Hopefully, what I’m driving at is emerging. If the variance structure must obey this mathematical necessity to preserve a genotype’s mean fitness at 1.00, while still allowing the individual series values to vary, then why should we not expect the same to hold true when the geometric mean fitness is not equal to 1.00? I would argue that that’s exactly what we should expect, and that Gillespie’s original arguments–and Orr’s, and others’ summaries thereof–are not particularly defensible theoretical expectations of what is likely to be happening in nature. Specifically, the idea that the variance in fitness around an arithmetic mean should necessarily arise from symmetrically (normally) distributed values is questionable.

As alluded to above, there is (at least) a second theoretical argument as well, but I don’t have time to get into it now (nor for this one for that matter). Suffice it to say that it involves simultaneous temporal changes in total population size and selective environments. All this without even broaching the entire hornet’s nest of empirically testing the idea, a topic reviewed five years ago by Simons. For starters, it’s not clear to me just how conservative “bet hedging” could ever be distinguished from the effects of phenotypic plasticity.

References

Simons, A.M. (2011) Modes of response to environmental change and the elusive empirical evidence for bet hedging. Proceedings of the Royal Society B. doi:10.1098/rspb.2011.0176

Other references are linked to in the previous post.

On natural selection, genetic fitness and the role of math

I really am not quite sure what to make of this one.

Last week at the blog Dynamic Ecology it was argued that natural selection behaves like a “risk-averse” money investor. That is, assuming that fitness varies over time (due to e.g. changing environmental variables or other selective factors), natural selection favors situations in which the mean fitness is maximized while the variance is minimized. The idea is explained in this short paper by Orr (2007), whose goal was to explain previous findings (Gillespie, 1973) intuitively. This presumes that knowledge of investor behavior is commonplace, but for my money, an examination of the math details and assumptions is what’s really needed.

This conclusion seems entirely problematic to me.


The memo from above

Late last week a useful memo came down from the powers that be here at The Institute that I thought might prove informative regarding the inner workings of a powerful think tank, which The Institute most certainly is, in spades.

To: Personnel engaged in primarily predictive and related prognosticatory research
From: The PTB
Date: September 30, 2016

We wish, as always, to express our appreciation for the excellent, ongoing work that continues to move The Institute steadily forward, at roughly the cutting edge of science, or at least at the cutting edge of rough science. Accordingly, we take this opportunity to remind everyone of the basic tenets that have guided our various predictive activities in the past:

(1) Future events and event trajectories, notwithstanding our best efforts, continue to display an aggravating uncertainty, and it is remarkable just how easily this fact avoids taking up residence in our conscious minds.

(2) The future occupies a fairly large, and apparently non-diminishing, portion of the temporal spectrum.

(3) Given the above, it is incumbent upon us all to keep in mind the following:
(a) Phrasing article titles with undue certainty, given the actual knowledge of system behavior, while understandable from a science culture perspective, may be counter-productive in a larger context. Fortunately, many non-scientists tend to seize upon such titles and, lacking proper restraint, make them even worse, often proclaiming future event x to be a virtual certainty. Without the ability to re-direct attention to these exaggerations, often originating from the press and various activist groups, undue attention to our own excesses, for which we have no readily available excuse, could become noticeably more uncomfortable. This possibility is not in the best interest of either science or The Institute.

(b) Science doesn’t actually “prove” anything, proof being a rather archaic and overly harsh concept–a “bar too high” if you like. Rather, science is in the business of “suggesting” that certain things “may” happen somewhere “down the road”. Science, when you boil it right down to nails, is really nothing but a massive pile of suggestions of what might happen. The pile is the thing really and our goal is to contribute to it. Popper is entitled to his opinion but frankly, The Institute is not so arrogant as to assume the right of making judgments on this, that or the other members of said scientific pile.

(c) It is hoped that the relation of points (a) and (b) above does not require elaboration.

Sincerely,
The PTB

This is an excellent reminder and I have, personally, tacked this memo to the wall in front of my workstation, with intent to glance at it every now and then before tacking something else over top of it.