Reverend Bayes takes Sir Ronald to the mat! Wait, hold on..is that Father Mendel with the take-down?!

As I was trying to wrap my limited head around apparently unlimited Bayesian statistical practice, Steve Walker pointed me to this article by Andrew Gelman and David Weakliem. The authors critique a study (published in the Journal of Theoretical Biology*) claiming that highly “attractive” (physically) people have a skewed gender ratio in their children, to the tune of somewhere between 1.05:1 and 1.26:1 girls:boys depending on how you compute the ratio, based on a sample of about 3,000 couples.

Well that’s eye-catching, given that we know that chromosomes in diploids, including the X and Y (gender) chromosomes in humans, typically segregate 1:1 during meiosis. We also know that if you take any large sample of humans, you will get very close to a 1:1 female:male ratio of offspring. The results were interesting enough for Psychology Today to publicize the study, for whatever reason. I mean, after all, it’s in J. Theoretical Biology, so it must be valid, presumably with a solid “theoretical” basis, right?

Negative, as Gelman and Weakliem explain.

However, the latter are only using the JTB study to illustrate a larger issue: certain weaknesses of “frequentist” statistics (i.e., “traditional” methods developed by Sir Ronald Fisher et al.) relative to Bayesian statistics. The one they concentrate on is that frequentist methods are not good at detecting small effects in small samples. OK, small sample sizes most certainly limit what effects you can detect, no question about it. They then argue that a Bayesian approach can avoid or minimize this problem, which they demonstrate by setting the mean of a “prior” distribution of gender ratios to 1.0 and then evaluating, given the observed 1.05:1 ratio, how likely it is that the true ratio really exceeds 1.0. They find that it isn’t very likely (only slightly more likely than not): the results of the JTB study are thus probably spurious, which is to say, not representative of some truly novel or important biological finding.
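For concreteness, here is a minimal sketch in Python of the kind of calculation being described. It is not Gelman and Weakliem’s actual analysis: the sample size, the observed ratio, and especially the prior spread are illustrative values I’ve assumed, and the normal approximations are mine.

```python
import math

# Illustrative numbers only (assumed, not taken from either paper):
# ~3000 children, observed girl:boy ratio of 1.05:1.
n = 3000
p_obs = 1.05 / 2.05            # implied proportion of girls, ~0.512

# Prior on the true proportion of girls: centered at 0.5 (a 1:1 ratio),
# with a small standard deviation reflecting how little sex ratios vary.
prior_mean = 0.5
prior_sd = 0.003               # assumed value, for illustration only

# Sampling standard error of the observed proportion (normal approximation).
se = math.sqrt(p_obs * (1 - p_obs) / n)

# Conjugate normal-normal update: the posterior mean is a precision-weighted
# average of the prior mean and the observed proportion.
w_prior, w_data = 1 / prior_sd**2, 1 / se**2
post_mean = (w_prior * prior_mean + w_data * p_obs) / (w_prior + w_data)
post_sd = math.sqrt(1 / (w_prior + w_data))

# Posterior probability that the true proportion of girls exceeds 0.5,
# i.e. that the girl:boy ratio really is above 1:1.
z = (0.5 - post_mean) / post_sd
p_above = 0.5 * math.erfc(z / math.sqrt(2))

print(f"posterior mean proportion of girls: {post_mean:.4f}")
print(f"P(true ratio > 1:1) = {p_above:.2f}")
```

With numbers in this ballpark the probability comes out around 0.6 to 0.7: more likely than not, but nothing close to decisive, which is the flavor of the conclusion described above.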

I can hear Marv Albert call it now: “And it’s Reverend Bayes for the win, yesssssss!” Hold on please, and calm down Marv, we may not quite be talking about Michael Jordan here.

To wit, what exactly is the basis for setting the mean of the prior distribution to 1.0? Why not 1.047 or 1.26? By Bayesian rules, a prior represents some type of outside information available on the issue at hand. In this case (setting aside the “subjective Bayesianism” that says you can use any value you “believe” in!), that information has to be either (1) observational evidence on chromosome segregation or (2) observed gender ratios in some much larger population of children. Fortunately, both of these are super-abundant and one causes the other, so pick either one.
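As a rough illustration of how tightly that outside information pins things down (my own made-up round numbers, not anything from Gelman and Weakliem), here is a sketch that builds an empirical Beta prior from a large reference population of births and updates it with a hypothetical 3,000-child sample at a 1.05:1 ratio:

```python
# Empirical Beta prior built from a large reference population of births
# (counts are made-up round numbers giving ~48.5% girls, for illustration).
girls_ref, boys_ref = 970_000, 1_030_000
prior_a, prior_b = girls_ref, boys_ref     # Beta(prior_a, prior_b) prior on P(girl)

# A hypothetical 3,000-child sample with a 1.05:1 girl:boy ratio.
girls_new = round(3000 * 1.05 / 2.05)      # ~1537 girls
boys_new = 3000 - girls_new

# Conjugate Beta-binomial update: just add the new counts to the prior counts.
post_a, post_b = prior_a + girls_new, prior_b + boys_new

prior_mean = prior_a / (prior_a + prior_b)
post_mean = post_a / (post_a + post_b)
print(f"prior mean P(girl) = {prior_mean:.4f}, posterior mean = {post_mean:.4f}")
# The 3,000 new observations barely move the estimate; the outside
# information dominates.
```

Whichever source you pick, any honest prior built from it is so tight that the small sample scarcely matters.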

The Reverend wasn’t the one who started us down the road of understanding inheritance patterns; it was instead Father Gregor Mendel, some 100 years later with his pea plants, followed closely by the work of Walther and Bateson, who provided explanations of Mendel’s results on a definite physical basis (i.e., the existence of chromosomes and meiosis, and further studies of inheritance patterns, respectively). You don’t need Bayesian analysis to tell you that chromosomes segregate 1:1 in humans, which is the key piece of information here. And you most certainly have no underlying reason whatsoever to expect that “attractive” people will deviate from the human norm to the tune of 1.05:1 to 1.26:1. The problems with the JTB study are simply sample-size issues, not really “frequentist vs Bayesian” issues at all. The Reverend can (maybe?) help you some by introducing some conditional probability, but only because he can make use of the really critical information provided ultimately by the Father and his intellectual descendants. Do I really need Bayes’ theorem to tell me that a 1.05:1 girl:boy ratio in a small sample is not likely to represent something scientifically important?
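To put a rough number on the sample-size point (a back-of-the-envelope check of my own, not a calculation from either paper), here is a quick sketch asking whether a 1.05:1 ratio in roughly 3,000 children is even distinguishable from plain 1:1 segregation:

```python
import math

# Is a 1.05:1 girl:boy ratio in ~3000 children distinguishable from 1:1?
# (Illustrative totals, not the study's actual subgroup counts.)
n = 3000
ratio = 1.05
p_obs = ratio / (ratio + 1)           # implied proportion of girls, ~0.512

p0 = 0.5                              # expectation from 1:1 chromosome segregation
se0 = math.sqrt(p0 * (1 - p0) / n)    # sampling SD of the proportion under 1:1

z = (p_obs - p0) / se0
p_two_sided = math.erfc(abs(z) / math.sqrt(2))   # normal approximation to the binomial

print(f"z = {z:.2f}, two-sided p = {p_two_sided:.2f}")
# With these numbers z is only about 1.3 (p ~ 0.18): a 1.05:1 ratio in a sample
# this size is entirely consistent with 1:1 segregation plus sampling noise.
```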

Disclaimer: Notwithstanding any of this, I’m in no way set against Bayesian approaches to statistics, and in fact I strongly favor conditional probability in general, which is what Bayesian approaches are built on. But I am set against using the wrong reasons to justify why it’s putatively more valid than Fisher’s approaches in certain situations.

* Kanazawa, S. (2007). Beautiful parents have more daughters: A further implication of the generalized Trivers-Willard hypothesis. Journal of Theoretical Biology 244:133–140.


6 thoughts on “Reverend Bayes takes Sir Ronald to the mat! Wait, hold on..is that Father Mendel with the take-down?!”

  1. Hi Jim. Thanks for engaging so much with my post. But I still disagree quite a bit with your take. Mostly I have a problem with the adversarial tone that pits Bayes against Fisher.

    “You don’t need Bayesian analysis to tell you that chromosomes segregate 1:1 in humans”

    Of course this statement is true. What is also true is that you can use this information about chromosomes when conducting a Bayesian analysis. Also true: Bayesian methods aren’t the only options for including prior information, as you point out. Use Bayes if you want, or not. But not needing Bayes doesn’t imply that Bayes either should or shouldn’t be used.

    “And it’s Reverend Bayes for the win, yesssssss!”

    Now this is exactly the kind of thing I hate. And I should point out that Gelman himself also hates this kind of thing (http://andrewgelman.com/2012/11/16808/). Gelman’s position seems to me to be that he has found Bayesian methods to be very useful in his own work, while frequently acknowledging that other approaches are also often very useful. He speaks of a methodological attribution problem, where people are too quick to attribute research successes to the methods used. I just want to discourage the idea that Gelman is looking for the ‘win’, because he has written explicitly that he is not (for a pointer to some of his writing on this subject, see: http://observationalepidemiology.blogspot.ca/2011/01/andrew-gelman-on-methodological.html).

    I read the disclaimer as implying that I was claiming that this one study proves Bayes is better than Fisher. I do not mean to imply this. This study is just one applied-statistics story amongst many. Here, this particular Bayesian analysis happened to make more sense than this particular Fisherian analysis. That’s all. There are many stories that go in the opposite direction, and many stories in which several different Bayesian and frequentist approaches are tried, as well as combinations of the two.

    To be as clear as possible, what I am pushing back against is the idea that a paradigm of Bayes competing against Fisher is still relevant. I do not believe that it is. Rather, I believe that both traditions have left us with a lot of useful tools and ideas, as well as some lemons (e.g. Bayesian updating in complex systems, p-values for small effects in small samples). Here’s another way of looking at it: http://stevencarlislewalker.wordpress.com/2012/10/23/basis-functions-for-locating-statistical-philosophies/

    • These are excellent points, Steve, and thanks for those links as well. As for the phrasing, I just felt like having a little fun with the whole Bayes/Fisher “controversy” that seems to exist in some camps. I myself don’t actually have a horse in this race, and much like you and Andrew Gelman, I see the strong merits of picking the best qualities of both approaches, as the analysis dictates. I’m certainly not any kind of expert in Bayesian methods, by several country miles, but I can still see the merits of the general approach if used carefully.

      To be clear, I’m definitely not intending to imply that you meant this Gelman et al. article proved Bayes to be a universally superior method. Nor do I think Andrew Gelman thinks that either; I know he was just using it as an example. I think he could have picked a better one, but I’m not all hung up on that either. Also, his article makes it pretty clear that if the JTB authors had done a little homework (and the reviewers had done their job), they’d have known that a ratio anywhere from 1.047 to 1.26 was highly suspicious. So he acknowledges that it’s not just a problem with those authors’ use of Fisher’s methods; it’s deeper than that.

    • Second thoughts. The more I mull this example by Gelman and Weakliem over, the less I like it, although I’m not sure whether my issues are with their example only or with Bayesian approaches generally, because I’m not sure how representative of Bayesian approaches their example really is. I hope to get some time to discuss it more, but that seems unlikely. I’ll just say for now that their example has not increased my confidence in the usefulness of Bayesianism for addressing this type of problem.

    • Actually I think I have a terrific example of the power of Bayesian analysis, w.r.t. my previous time series piece and detection of spurious trends. No time to write it up though.

  2. Pingback: There is no Theorem but Bayes’ and Laplace is His Prophet | Alea Deum
