A bunch of granite rocks strewn over an Amazon hilltop in northern Brazil. Random rock formation, or ancient observatory?
The so-called "Tropical Stonehenge" provides a nice case study for William Dembski's explanatory filter.
Statistician, philosopher, author, blogger, and ID apologist Dr. William Dembski has developed a useful heuristic for inferring intention (i.e., design) in an object. Dembski's write-up is at the leaderu.com web site.
The graphic on the left, which I took from the IDEA web site, lays out the process. It is quite straightforward.
"This explanatory filter recognizes that there are three causes for things: chance, law and design. The premise behind the filter is the positive prediction of design that designers tend to build complex things with low probability that correspond to a specified pattern. In biology, this could be an irreducibly complex structure which fulfills some biological function. This filter helps ensure that we detect design only when it is warranted. If something is high probability, we may ascribe it to a law. If something is intermediate probability, we may ascribe it to chance. But if it is specified and low probability, then this is the tell-tale sign that we are dealing with something that is designed."
In the case of Tropical Stonehenge, law cannot explain why these rocks would end up in a circle on a hill ... with each rock being placed at regular intervals. There is no known law of physics or geology which would cause that.
The next question is chance. What are the odds that these rocks (some nine feet tall) would be spaced at regular intervals? What are the odds that they would form a circular shape on their own?
The probability is moving from intermediate toward low. But it could still be chance.
Do the rocks form any meaningful pattern?
That is the key question in inferring design. No pattern, hard to infer design.
As it turns out, we do find a specified pattern.
'On the shortest day of the year — Dec. 21 — the shadow of one of the blocks, which is set at an angle, disappears.
"It is this block's alignment with the winter solstice that leads us to believe the site was once an astronomical observatory," said Mariana Petry Cabral, an archaeologist at the Amapa State Scientific and Technical Research Institute. "We may be also looking at the remnants of a sophisticated culture."'
The third decision point has been met. Ka-boom. Archaeologists infer design.
This ties in with my Bayesian series as well. How so? Because we learn this from the article:
Anthropologists have long known that local indigenous populations were acute observers of the stars and sun.
In fact, a similar observatory was discovered in Lima, Peru, last month. It is being called the oldest observatory in the Western Hemisphere.
So, given that these ancient South American cultures studied the position of the sun and stars, and given that we have found similar structures on the same continent, what is the likelihood that this structure in Brazil was built by an ancient civilization? Recognize the conditional probability in that statement? Bayes' Theorem is a ratio of conditional probabilities. In theory, we could set up a Bayesian formula that gives us a posterior probability for the theory that these rocks are not a random act of nature ... but were put there for a purpose.
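For illustration only, here is a sketch of what such a calculation might look like in Python. Every number below is invented, since the article gives us none:

```python
# A hypothetical Bayesian calculation for the "built by ancient astronomers"
# hypothesis. All numbers are made up for illustration; the article supplies
# no actual probabilities.

prior_designed = 0.5                 # P(designed) before seeing the evidence
prior_natural = 1 - prior_designed   # P(natural forces)

# Likelihood of the evidence (regular spacing, solstice alignment) under each hypothesis
p_evidence_given_designed = 0.9
p_evidence_given_natural = 0.001

# Bayes' Theorem: P(designed | evidence) = P(evidence | designed) * P(designed) / P(evidence)
p_evidence = (p_evidence_given_designed * prior_designed +
              p_evidence_given_natural * prior_natural)
posterior_designed = p_evidence_given_designed * prior_designed / p_evidence

print(round(posterior_designed, 4))  # about 0.9989 with these made-up inputs
```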
The problem is, the Explanatory Filter is pretty much useless. Unless you can know without doubt what is and is not designed, it's useless to you. And of course, if you did know that, you wouldn't need it. I like the way EvoWiki puts it: It's a great example of the Sherlock Holmes Fallacy.
Posted by: tgirsch | July 06, 2006 at 00:39
Not another cute fallacy courtesy of EvoWiki. :)
What the chaps at EvoWiki don't understand is specification. Chance can explain complexity -- it can't explain specification combined with complexity.
The EvoWiki post omits any mention of specification, of course -- thereby committing a real fallacy -- the straw man argument.
In reality, the EF is quite handy ... especially when used in conjunction with Bayes. It offers evidence that modifies prior belief and yields an adjusted belief.
We all use the explanatory filter in a Bayesian way subconsciously -- even the EvoWiki folks use it. So to say it isn't useful is ridiculous.
Posted by: Mr. Dawntreader | July 06, 2006 at 13:14
Jeff:
The problem is that you're now tossing around vaguely defined concepts, rather than established scientific principles. And that's a no-no, from a strictly scientific perspective. What exactly is "specification" anyway? How can we positively identify this? EvoWiki doesn't mention this because until Dembski gives a non-circular definition of what it is, it's not worth mentioning.
The proof is in the pudding, in that Dembski himself has provided neither a scientifically rigorous definition of these terms, nor a scientifically-vetted example of the explanatory filter in action. Which is to say, it's not science, but pseudoscience -- non-science wrapped in scientific-sounding terminology.
Not to mention the fact that you're discrediting the fallacy based on the source (EvoWiki) rather than on the merits. The Sherlock Holmes fallacy is a legitimate critique, in my estimation.
Sure we use the explanatory filter in our intuition, but intuition is not science.
Sorry, but the EF in its current form is useless. It requires you to know the answer to the question you're trying to answer. Or, at best, it's an argument from ignorance, where everything that you can't definitively explain is assumed to be "design."
Posted by: tgirsch | July 07, 2006 at 15:07
It is an argument from ignorance to those who are ignorant of the real argument (sorry, couldn't resist the play on words ... no offense intended)
Using the Bayesian/EF is an argument against a less probable hypothesis in favor of a more probable hypothesis.
In the Tropical Stonehenge case, it is a rejection of the "natural forces put the stones in a circle" hypothesis in favor of the "the stones were assembled that way by ancient astronomers" hypothesis.
One hypothesis is clearly more probable -- yes, a tornado, earthquake or comet collision could have done it, but it is unlikely. We ought to reject natural forces in favor of a more logical explanation ... for now, anyway.
Clean and elegant.
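To show the shape of that comparison, here is a quick Python sketch of the two hypotheses expressed as odds. The likelihoods are placeholders I am inventing purely for illustration:

```python
# Comparing the two hypotheses as posterior odds.
# The likelihoods below are invented placeholders.

prior_odds = 1.0          # start indifferent between design and natural forces

# How strongly does each hypothesis predict regularly spaced, solstice-aligned stones?
likelihood_design = 0.9
likelihood_natural = 0.001

bayes_factor = likelihood_design / likelihood_natural    # 900.0
posterior_odds = prior_odds * bayes_factor

print(posterior_odds)     # 900 to 1 in favor of design, given these inputs
```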
Posted by: Mr. Dawntreader | July 07, 2006 at 17:55
Jeff:
Using the Bayesian/EF is an argument against a less probable hypothesis in favor of a more probable hypothesis.
Except that's not what Dembski does at all. And it's not what you're doing either (at least, not with respect to design in the larger context). You're not assigning rigorous probabilities to X and Y and then saying "because X is more probable than Y, we should prefer X." What you're doing instead is simply arguing that because X is (or seems) highly improbable, the answer must lie elsewhere.
But aside from the fact that you simply cannot meaningfully assign a probability to the proposition that "there's a designer for all of creation," there's another problem here: even if you take as given that X is less probable than Y, this doesn't prove that X is false and Y is true. It leads you to suspect Y, but it proves nothing.
That's the problem with the statistical approach to ID, as I see it: it's based on a horribly flawed conception of what statistics can (and, more importantly, cannot) tell us. A mathematician like Dembski ought to know better (and, privately, I suspect he does).
The case to be made for "tropical Stonehenge" being designed isn't that "it's highly improbable that it wasn't designed." It's that it bears many of the hallmarks of things that we know to be human-designed and human-built. (Notice that probabilities aren't even mentioned by any of the principles.) We have things which we know are human-designed, and we have things which we know are not. So we have bases for comparison, pro or con, and can make meaningful judgments based on this.
In the case of capital-D Design, however, we have no such basis for comparison. We have no lifeform that we know has been designed, versus one we know hasn't been, to use as bases for comparison.
This becomes even more problematic, because the point that most ID proponents want to get to is that absolutely everything is designed -- it's all part of God's plan. What that means, essentially, is that there's no such thing as "undesigned."
Bottom line is, you need to abandon the idea that probabilities can tell us anything meaningful about the past. All they can do is suggest places to look, but they cannot prove (or disprove) anything. I really can't grasp why you're so in love with probabilities with respect to this.
Posted by: tgirsch | July 07, 2006 at 18:14
I am going to have to disagree with you on just about every point.
Dembski is not using Bayes -- I am. That part we agree on.
Re: proof
Unless I am mistaken, all inductive arguments are probabilistic arguments. None of them are proven true with 100 percent certainty. By your standard of 100 percent certain proof, most of science would be swept away.
Short of a syllogism utilizing deductive logic, you can't be certain anything is absolutely true -- unless you are omniscient or you are Sherlock Holmes.
I can state some things that are absolutely true -- if logic is true -- but they are not the sorts of things that we are really interested in.
By the way, we know certain things are human-designed because they bear an important property -- called specification -- and usually a second property, called complexity.
Back to Dembski again.
Your critique of the lack of rigorous math is well founded. Dembski shares it. The idea is to take the soft judgment and empiricize it using rigorous math. Right now, in archaeology anyway, all we have is people making hunches. It would be nice to move away from hunches to models -- which I suspect, the next generation will begin to do.
The interesting thing about Tropical Stonehenge is that it demonstrates the EF approach. The part about it also demonstrating a Bayesian approach was my observation -- Dembski actually does not like Bayes.
By the way, we don't need to identify a designer in order to know something was designed. I don't know who designed this laptop though I can clearly see it was designed. It doesn't matter if a human or an elf did it for me to make that call. How do I know? Because it has the property of specificity and complexity.
I know from experience that random forces of nature never produce things like laptops.
Posted by: Mr. Dawntreader | July 07, 2006 at 18:54
Jeff:
Unless I am mistaken, all inductive arguments are probabilistic arguments. None of them are proven true with 100 percent certainty.
Well, no. You're confusing the general term probability with the much more specific concept of confidence. Our confidence in our predictions and/or explanations, particularly of past events, is not based on the probability that those events occurred, which is what you seem to be arguing. I'm 95% sure of this. ;)
In any case, ID models don't deal in confidence at all -- in fact, they're not willing to entertain any possibility that they're wrong about Design.
By the way, we know certain things are human-designed because they bear an important property -- called specification -- and usually a second property, called complexity.
Maybe so, but I don't know. And until someone comes up with a scientifically rigorous definition of "specification" (or of "complexity"), we can't know. And again, Dembski's work is deeply flawed. Don't just take my word for it, though:

'The concept of specified complexity and related work is widely regarded as unsound.[1][2][3] One study states "Dembski's work is riddled with inconsistencies, equivocation, flawed use of mathematics, poor scholarship, and misrepresentation of others' results."[4] Another objection concerns Dembski's calculation of probabilities. According to Martin Nowak, a Harvard professor of mathematics and evolutionary biology, "We cannot calculate the probability that an eye came about. We don't have the information to make the calculation."[5]

When Dembski's mathematical claims on specified complexity are interpreted to make them meaningful and conform to minimal standards of mathematical usage, they usually turn out to be false. Dembski often sidesteps these criticisms by responding that he is not "in the business of offering a strict mathematical proof for the inability of material mechanisms to generate specified complexity".[19] Yet on page 150 of No Free Lunch he claims he can prove his thesis mathematically: "In this section I will present an in-principle mathematical argument for why natural causes are incapable of generating complex specified information." Others have pointed out that a crucial calculation on page 297 of No Free Lunch is off by a factor of approximately 10^65.[20]'

[References in the link.]
Notice that these aren't generic critiques of Dembski's work; they point out specific flaws, many of them critical.
But once you peel back the layers of oft-debunked mathematical mumbo-jumbo, Dembski's definition of "specified complexity" boils down to "something that can't have occurred naturally." Which, if you knew that, you wouldn't need it as a tool to determine whether or not it was designed. What's "designed?" Something that has "specified complexity." What's "specified complexity?" It's "design."
I don't know who designed this laptop though I can clearly see it was designed. It doesn't matter if a human or an elf did it for me to make that call. How do I know? Because it has the property of specificity and complexity.
You say this, but it isn't really true. You recognize your laptop as designed (and by humans, no less), because it's similar to other things (like TVs and typewriters) that you know to be designed by humans, and dissimilar to other things (like, for example, trees and dogs), that you know are not so designed. Again, you have a basis for comparison. And you know that if you wanted to, you could go to a factory and watch laptops being built. And you could probably even take yours apart and put it back together again, something you'd have a somewhat harder time doing with a tree or a dog.
I know from experience that random forces of nature never produce things like laptops.
Well, never say never. :) And in a way, nature did -- it just took the extra step of producing humans, who then produced the laptops. :)
Posted by: tgirsch | July 08, 2006 at 00:50
Oh, and the above "factor of 1065" should be "factor of 10^65" -- apparently your blog doesn't recognize the "sup" tag.
Posted by: tgirsch | July 08, 2006 at 00:58
Jeff
What is interesting about the "test" Dembski lays out is how it completely ignores the physical realm. Something looks designed to Dembski, so it must be designed (and without a rigours definition fo specification, that is what his test boils down to). There is no room there for the possibility that physical forces generated the result.
Take a cloud that looks like a duck. According to Dembski's test, it MUST be designed, becasue it looks desgined. But the physics of cloud formation are pretty well understood and its known that those rules allow for such creations, especially when we add in the tendency of humans to seek out patterns, even when knonw exist (something else that Dembski's definition appears to ignore). The rock formatiosn in the badlands are nother good example - -they look like sculptures, but scientists can see the tell tale signs of natural erosion and known of the signs of tools being used on them. But, again, using Dembski's test, those rocks should be labled as designed.
By ignoring the physical world, Dembski immediately makes his test useless, nothing more than a game of probabalitites and "with my little eye, I spy ..."
Posted by: kevin | July 08, 2006 at 10:19
What's the chance that evolutionary theory is right?
Posted by: PDM | July 11, 2006 at 22:37
Kevin,
None of your examples would have passed step two in the EF. All of them are easily explained via chance or natural laws. None of your examples contained any degree of specification.
T,
"Our confidence in our predictions and/or explanations, particularly of past events, is not based on the probability that those events occurred, which is what you seem to be arguing."
Inductive arguments cannot be 100 percent certain ... by definition. Science is an inductive enterprise. It consists of models, predictions, observations, and reasoning ... which results in explanations. All of that uses strict inductive logic.
Posted by: Mr. Dawntreader | July 14, 2006 at 01:00
Let's resume the discussion on Dembski's work once we have completed the Kuhnian shift toward the mathematics of design detection.
Posted by: Mr. Dawntreader | July 14, 2006 at 01:23
Jeff:
None of your examples would have passed step two in the EF.
So, just to get you on record, a cloud that looks like a duck is clearly not designed, correct?
But this still illustrates a key flaw in the EF. Applying the EF now, knowing what we know about cloud formation, rules out design in that case. But go back several hundred years, before we understood clouds, and apply the EF. There was no natural process we understood that generated the clouds, and thus, according to the EF, they must have been designed. People of times past were missing information on natural processes that was necessary to get the right result.
Fast-forward back to today. What information about natural processes are we currently missing? What would the EF mistake for design today that we might be able to explain tomorrow?
That's what's wrong with the EF: it relegates anything that we cannot currently explain into the realm of "designed." It conflates "unexplained" with "inexplicable."
Inductive arguments cannot be 100 percent certain ... by definition.
I don't believe I've ever disputed this. But you seem to assume that "inductive argument" = "probabilistic argument," which is not true.
Let's resume the discussion on Dembski's work once we have completed the Kuhnian shift toward the mathematics of design detection.
This assumes a great deal, don't you think? :)
Posted by: tgirsch | July 14, 2006 at 17:38
"But you seem to assume that "inductive argument" = "probabilistic argument," which is not true."
We are getting stuck in semantics.
a) Certain = 100 percent.
b) Not certain means < 100 percent.
Science specializes only in type b. It offers explanations that are probably true -- but not certainly true.
Probably and probability come from the same root because they are related. Whether or not it is stated explicitly, you are assigning a probability to something being true when you say it is probably true.
Posted by: Mr. Dawntreader | July 16, 2006 at 15:55
"People of times past were missing information on natural processes that were necessary to get the right result.
Fast-forward back to today. What information about natural processes are we currently missing?"
An inference combines reason (inductive logic) with knowledge. We act on what we know ... which seems quite plausible and reasonable to most rational people.
You are quite correct in stating that it may be that there is some unknown process that generates specified complex information. But to base our conclusions and inferences on things we don't know is absurd. That is truly arguing from ignorance -- something which you disdain.
Based on what we know, intelligent agents generate specified complex information. In that sense, we are using analogies. Human agents generate things like laptops.
So ... if we see other specified complex things, it is reasonable to infer design ... because, as far as we know, intelligent agents are the only sorts of things which generate specified complex information.
It is an inference.
You are quite correct. As soon as someone demonstrates a random process generating specified complex information, there is no longer any reason to infer design.
So far, no one has done that.
Therefore, based on what I know, rather than what I don't know, I am rationally justified in inferring design.
Posted by: Mr. Dawntreader | July 16, 2006 at 16:07
Arguing that present day people know more than people who lived in the past gets you nowhere -- because the same argument can be used to invalidate all of the knowledge you currently hold to.
Perhaps evolution will be completely overturned next century. Perhaps you shouldn't believe in it because future scientists will know more than today's scientists and it may be that evolution is a complete farce and tomorrow's theory will make today's theory look like duck clouds.
You simply can't use your argument to get anywhere. The rational thing to do is to use what we know today, make our observations, and make inferences.
Posted by: Mr. Dawntreader | July 16, 2006 at 16:15
Jeff:
Probably and probability come from the same root because they are related.
Uhh, yeah, they're related, but they're still different.
But to base our conclusions and inferences on things we don't know is absurd.
Which is precisely what's wrong with the explanatory filter! Because we don't know what natural process could have caused X, according to the EF, we conclude that it's designed -- based on our ignorance. Rather than simply stating "we don't know what caused X." It is the IDist who bases his conclusions on ignorance.
And once again, you continue to insist on "specified complex information" being the key to it all, when Dembski himself has failed utterly to provide a scientifically rigorous definition of what, precisely, that is. Other than, of course, "something that had to have been designed by an intelligent agent."
As soon as someone demonstrates a random process generating specified complex information, there is no longer any reason to infer design.
It's mind boggling that you don't see the problem here. You're essentially arguing that everything must be presumed to be designed until we can conclusively prove that it hasn't been. But why should anyone make that assumption?
You claim that you're avoiding making a conclusion based on your ignorance, when in fact you're doing precisely the opposite.
Perhaps evolution will be completely overturned next century.
Perhaps so. But as of right now, today, it's hands-down the best explanation we have. So much so, that nothing else comes even remotely close.
See, you're trying to falsely paint my position as something along the lines of "we should reject what we think is true, because someone in the future might prove it wrong," when that's not at all what I'm saying. I'm saying that we should go with the best explanation we currently have, and stick with it until a better one comes along (if ever). If we don't like it, tough -- we can work to prove it wrong -- but if we care about truth, we have to go where the evidence leads, not where we want it to go.
And you still haven't answered me about the cloud that looks like a duck. :)
Posted by: tgirsch | July 16, 2006 at 19:50
Read this.
After you have completely read it from start to finish, then we can resume.
Posted by: Mr. Dawntreader | July 16, 2006 at 23:06
41 pages, eh? :) Let's note that length != scientific rigorousness, at least not necessarily. I'll give him credit for trying though, and I'll try to wade through it. I'm not sure, frankly, that my Math Skillz(tm) are up to the task. (Which is to say, I suck at statistics.)
Note, too, that this paper is admission by Dembski that his prior definitions were insufficient.
What I'm wondering is if Dembski gives any succinct examples of things that look for all the world like they were designed but weren't, and how we can know this with reasonable certainty. In other words, the holy grail of ID: how can we differentiate apparent design from actual design? What would be truly compelling would be him giving examples of things that he would have bet the farm were designed, but after shoving them through the EF, he concluded that they were not.
And you still haven't answered me about the duck cloud. :)
Posted by: tgirsch | July 17, 2006 at 23:23
Read Dembski. He goes to great lengths to set the bar of false positives impossibly high (i.e. rendering something designed that is not designed).
So, just to get you on record, a cloud that looks like a duck is clearly not designed, correct?
Correct. After you read Dembski, you'll better understand how we know that.
Posted by: Mr. Dawntreader | July 18, 2006 at 06:40
That's an interesting concession. So there do exist things that were not designed by God. This gives us somewhere to go, but it's an unusual position for an evangelical Christian to take.
Posted by: tgirsch | July 18, 2006 at 13:22
"So there do exist things that were not designed by God."
Sure. Laptops. *ducks*
Let me know when you make it through the Dembski article. I am not buying this malarkey about you being bad at math either ;)
Posted by: Mr. Dawntreader | July 18, 2006 at 18:18
Let me rephrase: there do exist things in nature that were not designed by God. :) That concession surprises me, because it's more consistent with a clockwork-God of Deism than it is with a personal-God of Christianity.
I am not buying this malarkey about you being bad at math either
That's because:
A) You haven't seen my grades, especially in statistics, which I barely passed.
B) You haven't seen me try to make change.
Posted by: tgirsch | July 19, 2006 at 15:30
You also grossly overrate my attention span. :)
Posted by: tgirsch | July 19, 2006 at 15:30
"That concession surprises me, because it's more consistent with a clockwork-God of Deism than it is with a personal-God of Christianity."
So I throw a rock into a still pond and see the ripples form, and I attribute those ripples to physical laws, and I am more consistent with Deism?
That is a stretch.
Are you sure you are not confusing Christianity with pantheism?
Posted by: Mr. Dawntreader | July 19, 2006 at 17:20