
June 28, 2006

Comments

Jeff:

It's not that I doubt the 7.8% answer -- it's that I reject it. What I think is happening here is that you're applying Bayes' Theorem to a circumstance in which it does not apply, something Kevin pointed out in your last thread.

The critical design flaw is that you're conflating the at-large "all women over 40" group with the "women who have had mammograms" group; basically, you're assuming that all women over 40 have mammograms (and that a very large percentage of them had positive mammograms). While I'd like to believe the former is true, I doubt it is. Further, in order to fill in the gaps and apply Bayes' Theorem, we need to know what percentage of women who have mammograms get a positive result, irrespective of whether or not they actually have breast cancer -- I suspect we could probably derive that number from the other two.

Instead, we get a highly inaccurate result from Bayes' Theorem because we haven't given it enough information to yield an accurate one. And, as it turns out, we don't even need Bayes, since the 9.6% figure tells us everything we need to know.

If your 9.6% "false positive" statistic is correct, then 90.4% of women who have positive mammograms are likely to have breast cancer -- but 90.4% != 7.8%. If the 7.8% number is correct, then 92.2% of positive mammograms are false -- but 9.6% != 92.2%. See the problem?

OK, upon further review, I need to review things further. Muddah, Faddah, kindly disregard this lettah! :)

I see where I went astray; namely, I misread the 9.6% stat. It's not that 9.6% of positive mammographies are false; it's that 9.6% of women without breast cancer who have mammograms will get a positive result. That's a big (and hugely important) difference. So clearly, my 90.4% number is the result of my faulty logic.

What we're left with, if Bayes truly applies and if the statistics are correct, is that mammograms are horribly unreliable. Which sucks. Roughly 10.3% of women who have mammograms will get a positive result, even though roughly 9.5 points of that total come from women who don't have breast cancer. The percentage of positives which are false isn't 9.6% -- it's about 92.2%!
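For anyone who wants to check this arithmetic, here is a minimal sketch of the calculation in Python. The 9.6% false-positive rate and the 7.8% posterior appear in the comments above; the 1% prevalence and 80% sensitivity are assumed values standing in for the inputs given in the original post.

```python
# Bayes' theorem for the mammogram example.
# Assumed inputs (not stated in this comment thread): 1% prevalence,
# 80% sensitivity. The 9.6% false-positive rate is from the discussion.
prevalence = 0.01        # P(cancer)
sensitivity = 0.80       # P(positive | cancer)
false_positive = 0.096   # P(positive | no cancer)

# Total probability of a positive mammogram (true + false positives).
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)

# Posterior: probability of cancer given a positive result.
p_cancer_given_positive = sensitivity * prevalence / p_positive

print(f"P(positive)             = {p_positive:.1%}")              # ~10.3%
print(f"P(cancer | positive)    = {p_cancer_given_positive:.1%}") # ~7.8%
print(f"P(no cancer | positive) = {1 - p_cancer_given_positive:.1%}")  # ~92.2%
```

Under these assumptions the share of positives that are false is about 92.2%, which is exactly the complement of the 7.8% posterior.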

Now that I've figured out where I went astray, I'm ready to accept the 7.8% figure.

Mea culpa. :)

Thanks for participating, Tom.

I honestly expected no one to try ... but you stepped right up to the plate.

I should never have doubted you ... oops, there is that word again ;)

"why all the interest in Bayes all of a sudden?"
I could quibble with "all of a sudden," but I won't bother. Indeed, I wouldn't bother commenting at all, except that my experience might provide part of an answer.

I think my "Participatory Deliberation" project (http://bentrem.sycks.net/gnodal/ is a set of files I just salvaged) was inspired and energized by my having read John Willinsky on bringing publicly funded research more into the public domain. Without contradicting that, I have another source of inspiration: the idea that "public knowledge" can be in-formed by public opinion reliably and credibly if we apply solid web-tech and something like Bayesian filtering / weighting.

If most everybody is wrong about something, i.e. holding false beliefs, then I wanna know how and why ... when the manure piles up it's reasonable to suspect that there might be a pony around. Or something.
