The Wedge Document

The Wedge document is more than 15 years old and was written by a man who has long since retired from the ID community. One man's motives are irrelevant to the science of ID Theory. Phillip Johnson is the one who came up with the Wedge document, and it is nothing other than a summary of personal motives, which have nothing directly to do with science. Johnson is now 71 years old, and his views do not reflect the younger generation of Intelligent Design (ID) Theory advocates who are partial to approaching biology from a design perspective.

Phillip Johnson

Phillip Johnson is the original author of the Wedge Document

Some might raise the Wedge document as evidence that there has been an ulterior motive. The Discovery Institute has a response to this as well:

The motives of Phillip Johnson are not shared by me or by other ID advocates, and they do not reflect the views or position of the ID community or the Discovery Institute. This would be similar to criticizing evolutionary theory on the grounds that Richard Dawkins has a biased approach to science because he is an atheist and political activist.

I. THE WEDGE AND POLITICAL VIEWS OF THE DISCOVERY INSTITUTE ARE A SOCIAL AND IDEOLOGICAL ARGUMENT IRRELEVANT TO THE SCIENTIFIC METHOD.

Some critics would contend the following:

“With regards to how this is relevant, one part of the Discovery Institute’s strategy is the slogan ‘teach the controversy.’  This slogan deliberately tries to make opponents look like they are against teaching ‘all’ of science to students.”

How can such an appeal be objectionable? This is a meaningless point of contention. I don’t know whether the slogan, “teach the controversy” does indeed “deliberately” try “to make opponents look like they are against teaching ‘all’ of science to students.” That should not be the issue.

My position is this:

1. The slogan is harmless, and should be the motto of any individual or group interested in education and advancement of science. This should be a universally accepted ideal.

2. I fully believe and am entirely convinced that the mainstream scientific community does indeed practice censorship, presenting a one-sided and therefore distorted portrayal of the facts and empirical data.

The fact remains that Intelligent Design is a scientifically fit theory that is about information, not designers. ID is largely based upon the work of William Dembski, who introduced the concept of Complex Specified Information in 1998. In 1996, biochemist Michael Behe championed the ID-inspired hypothesis of irreducible complexity. It has been 17 years since Behe made the predictions of irreducible complexity in his book, "Darwin's Black Box," and to this day the four systems proposed to be irreducibly complex have not been falsified after thorough examination by molecular biologists. Those four biochemical systems are the blood-clotting cascade, the bacterial flagellum, the immune system, and the cilium.


II. EXCERPTED QUOTATIONS OF THE WEDGE ARE ALSO IRRELEVANT BECAUSE THE DISCOVERY INSTITUTE HAS ALREADY PROVIDED ITS UPDATED REVISION OF THE DOCUMENT.

Please keep in mind that my initial concern about complaints regarding the Wedge document is primarily one of relevance. The Discovery Institute repealed and amended the Wedge, and it added commentary to clarify its present position. Interestingly, when I am presented links to the Wedge document, it is often the updated, revised draft. That being so, it is questionable why critics continue quoting from the outdated and obsolete version. It is a comical, obsolete argument that undermines the complainant's credibility. In fact, it is an exercise of the same intellectual dishonesty that ID antagonists accuse the Discovery Institute of.

If one desires to criticize the views of the Discovery Institute, then such a person must use the materials that the Discovery Institute claims represent the actual present position of the Institute and of ID proponents. I would further add:

1. ID proponents repudiate the Wedge, and distance themselves from it.

2. Mr. Johnson, who authored the Wedge, is retired, and the document is obsolete.

Much of Intelligent Design theory has nothing to do with ideology or religion. When ID is demonstrated as an applied science, "Intelligent Design" is simply another word for bio-design. Aside from biomimicry and biomimetics, other areas of science overlap with the scope of ID Theory, such as Natural Genetic Engineering, quantum biology, bioinformatics, bio-inspired nanotechnology, selective breeding, biotechnology, genetic engineering, synthetic biology, bionics, and prosthetic implants, to name a few.

The Wedge
III. THOSE WHO RELY UPON THE WEDGE AND MOVIE EXPELLED ARGUMENTS AGAINST THE MOTIVES OF THE DISCOVERY INSTITUTE FAIL TO MEET THE RELEVANCE REQUIREMENT.

ID antagonists claim:

“The very conception of ‘Intelligent Design’ entails just how ‘secular’ and ‘scientific’ the group tried to make their ‘theory’ sound.  It was created with Christian intentions in mind.”

This is circular reasoning, which is a logic fallacy.  The idea just restates the opening thesis argument as the conclusion, and does nothing to support the conclusion.  It also does not overcome the relevance issue as to the views maintained by the Discovery Institute and ID advocates today.

There is no evidence offered by those who raise the Wedge complaint to connect a religious or ideological motive to ID advocacy. ID Theory must be provided the same opportunity to make predictions and to test a repeatable and falsifiable design-inspired hypothesis. If anyone has a problem with this, then they own the burden of proof to show why ID scientists are disqualified from performing the scientific method. In other words, to reject ID on the sole basis of the Wedge document is essentially unjustifiable discrimination based upon a difference of ideological views. At the end of the day, the only way to falsify a falsifiable scientific hypothesis is to run the experiment and use the empirical data to confirm or refute the claim.

Intelligent Design can be expressed as a scientific theory. Valid scientific predictions can be premised upon an ID-inspired conjecture. The issue is whether or not ID actually conforms to the scientific method. If it does, then the objection by ID opponents is without merit and irrelevant. If ID fails in scientific reasoning, then critics simply need to demonstrate that, and they will be vindicated. Otherwise, ID Theory remains a perfectly valid testable and falsifiable proposition regardless of its social issues.

So far, ID critics have not made any attempt to offer a single scientific argument or to employ scientific reasoning concerning the basis of ID Theory.


DOES EVOLUTION ALONE INCREASE INFORMATION IN A GENOME?

This is in response to the video entitled, “Evolution CAN Increase Information (Classroom Edition).”

I agree with the basic presentation of Shannon's work in the video, along with its evaluation of Information Theory, the Information Theory definition of "information," bits, noise, and redundancy. I also accept the fact that new genes evolve, as described in the video. So far, so good. I do, however, have some objections to the video, including its underlying premise, which I consider to be a strawman.

To illustrate quantifying information into bits, Shannon referenced an attempt to receive a one-way radio/telephone transmission signal.

Before I outline my dissent, here is what I think the problem is. The video is likely the result of creationists hijacking work done by ID scientists, in this case William Dembski, and arguing against evolution using flawed reasoning that misrepresents those scientists. I have no doubt that there are creationists who could benefit from watching this video and learning how they were mistaken in raising the argument that the video's narrative refutes. But that flawed argument misinterprets Dembski's writings.

ID Theory is grounded upon Dembski’s development in the field of informatics, based upon Shannon’s work. Dembski took Shannon Information further, and applied mathematical theorems to develop a special and unique concept of information called COMPLEX SPECIFIED INFORMATION (CSI), aka “Specified Information.” I have written about CSI in several blog articles, but this one is my most thorough discussion on CSI.

I am often guilty myself of describing the weakness of evolutionary theory as its inability to increase information. In fact, my exact line, which I have probably said a hundred times over the last few years, goes like this:

“Unlike evolution, which explains diversity and inheritance, ID Theory best explains complexity, and how information increases in the genome of a population leading to greater specified complexity.”

I agree with the author of this video script that my general statement is so overly broad that it is vague, and easily refuted because of specific instances when new genes evolve. Of course, of those examples, Nylonase is certainly an impressive adaptation to say the least.

But, I don't stop at my general comment to rest my case. I am ready to continue by clarifying what I mean when I talk about "information" in the context of ID Theory. The kind of "information" we are interested in is CSI, which is both complex and specified. Now, there are many instances where biological complexity is specified, but Dembski was not ready to label these "design" until the improbability reaches the Universal Probability Bound of 1 x 10^-150. Such an event is unlikely to occur by chance. This is all in Dembski's book, "The Design Inference" (1998).

According to ID scientists, CSI occurs early, in that it is in the very molecular machinery required to comprise the first reproducing cell, already in existence when life originated. The first cell already had its own genome, its own genes, and enough bits of information up front as a given for frameshift, deletion, insertion, and duplication types of mutations to occur. The information, noise, and redundancy required to make mutations possible are part of the initial setup.

Dembski has long argued, and this is essentially the crux of the No Free Lunch theorems, that neither evolution nor genetic algorithms produce CSI. Evolution only smuggles CSI forward. Evolution is the mechanism that includes the very mutations and processes that increase the information as demonstrated in the video. But, according to ID scientists, the DNA, genes, start-up information, reproduction system, RNA replication, transcription, and protein-folding equipment were there from the very start, and the bits and materials required in order for the mutations to occur were front-loaded in advance. Evolution only carries it forward into fruition in the phenotype. I discuss Dembski's No Free Lunch more fully here.

DNA binary

Dembski wrote:

“Consider a spy who needs to determine the intentions of an enemy—whether that enemy intends to go to war or preserve the peace. The spy agrees with headquarters about what signal will indicate war and what signal will indicate peace. Let’s imagine that the spy will send headquarters a radio transmission and that each transmission takes the form of a bit string (i.e., a sequence of 0s and 1s ). The spy and headquarters might therefore agree that 0 means war and 1 means peace. But because noise along the communication channel might flip a 0 to a 1 and vice versa, it might be good to have some redundancy in the transmission. Thus the spy and headquarter s might agree that 000 represents war and 111 peace and that anything else will be regarded as a garbled transmission. Or perhaps they will agree to let 0 represent a dot and 1 a dash and let the spy communicate via Morse code in plain English whether the enemy plans to go to war or maintain peace.

“This example illustrates how information, in the sense of meaning, can remain constant whereas the vehicle for representing and transmitting this information can vary. In ordinary life we are concerned with meaning. If we are at headquarters, we want to know whether we’re going to war or staying at peace. Yet from the vantage of mathematical information theory, the only thing that’s important here is the mathematical properties of the linguistic expressions we use to represent the meaning. If we represent war with 000 as opposed to 0, we require three times as many bits to represent war, and so from the vantage of mathematical information theory we are utilizing three times as much information. The information content of 000 is three bits whereas that of 0 is just one bit.” [Source: Information-Theoretic Design Argument]
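
As a small illustration of the bit-counting in the quotation above, here is a minimal Python sketch (my own, not Dembski's or Shannon's code) that computes the information content of equally likely messages as log2 of the number of alternatives. It simply reproduces the arithmetic of the spy example: a one-symbol signal carries 1 bit, while the three-symbol redundant encoding carries 3 bits.

```python
import math

def bits(num_equally_likely_messages: int) -> float:
    """Information content, in bits, of selecting one message out of
    num_equally_likely_messages equally likely alternatives."""
    return math.log2(num_equally_likely_messages)

# One binary symbol (0 = war, 1 = peace): two possible strings -> 1 bit.
print(bits(2))       # 1.0

# Three binary symbols (000 = war, 111 = peace, anything else garbled):
# the string itself carries log2(8) = 3 bits, even though it conveys only
# the one-bit war/peace decision; the extra bits are redundancy against noise.
print(bits(2 ** 3))  # 3.0
```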

My main objection to the script comes toward the end, where the narrator, Shane Killian, states that if anyone has a different understanding of the definition of information and prefers to challenge the strict definition of "information" as a reduction in uncertainty, their rebuttal should be outright dismissed. I personally agree with Shannon, so I don't have a problem with that definition, but computer science, bioinformatics, electrical engineering, and a host of other academic disciplines have their own definitions of information that emphasize different dynamics than Shannon did.

Shannon made huge contributions to these fields, but his one-way radio/telephone transmission analogy is not the only way to understand the concept of information. Shannon discusses these concepts in his 1948 paper on Information Theory. Moreover, even though Shannon's work was the basis of Dembski's work, ID Theory relates to the complexity and specificity of information, not just the quantification of "information" per se.

Claude Shannon is credited as the father and discoverer of Information Theory.


MICHAEL BEHE ON THE WITNESS STAND

As most people are aware, Michael Behe championed the design-inspired ID Theory hypothesis of Irreducible Complexity.  Michael Behe testified as an expert witness in Kitzmiller v. Dover (2005). 


Transcripts of all the testimony and proceedings of the Dover trial are available here. While under oath, he testified that his argument was:

“[T]hat the [scientific] literature has no detailed rigorous explanations for how complex biochemical systems could arise by a random mutation or natural selection.”

Behe was specifically referencing the origin of life and molecular and cellular machinery. The cases in point were the bacterial flagellum, cilia, the blood-clotting cascade, and the immune system, because those are what Behe wrote about in his book, "Darwin's Black Box" (1996).

The attorneys piled up a stack of publications regarding the evolution of the immune system just in front of Behe on the witness stand while he was under oath. Behe is criticized by anti-ID antagonists for dismissing the books.

Michael Behe testifies as an expert witness in Kitzmiller v. Dover. Illustration is by Steve Brodner, published in The New Yorker on Dec. 5, 2005.

The books were essentially about how the immune system developed in vertebrates. But that isn't what Intelligent Design theory is based upon. ID Theory is based upon the complexity appearing at the outset when life first arose, and the complexity that appears during the Cambrian Explosion.

The biochemical structures Behe predicted to be irreducibly complex (the bacterial flagellum, cilium, blood-clotting cascade, and immune system) arose during the development of the first cell. These biochemical systems occur at the molecular level in unicellular eukaryotic organisms, as evidenced by the fact that retroviruses are in the DNA of these most primitive life forms. They are complex, highly conserved, and irreducibly complex. You can stack a mountain of books and scientific literature on top of this regarding how these biochemical systems changed from that juncture forward in time, but that has nothing to do with the irreducible complexity of the original molecular machinery.

The issue regarding irreducible complexity is the source of the original information that produced the irreducibly complex system in the first place. The scientific literature on the immune system only addresses changes in the immune system after the system already existed and was in place. For example, the Type III Secretion System injector (T3SS) is often used to refute the irreducible complexity of the bacterial flagellum. But the T3SS is not an evolutionary precursor of the bacterial flagellum; it was derived subsequently and is evidence of a decrease in information.

The examining attorney, Eric Rothschild, stacked up those books one on top the other for courtroom theatrics.

Behe testified:

“These articles are excellent articles I assume. However, they do not address the question that I am posing. So it’s not that they aren’t good enough. It’s simply that they are addressed to a different subject.”

Those who reject ID Theory and dislike Michael Behe emphasize that since Behe is the one claiming that the immune system is irreducibly complex, Behe owns the burden of maintaining a level of knowledge of what other scientists write on the subject. It should be noted that there has indeed been a wealth of research on the immune system, and the collective whole of the published papers gives us a picture of how the immune system evolved. But the point Behe made was that there is very little knowledge available, if any, as to how the immune system first arose.

The burden was on the ACLU attorneys representing Kitzmiller to cure the defects of foundation and relevance. But they never did. Somehow, anti-ID antagonists spin this around to make it look like Behe was in the wrong here, which is entirely unfounded. Michael Behe responded to the Dover opinion written by Judge John E. Jones III here. One comment in particular that Behe made is this:

“I said in my testimony that the studies may have been fine as far as they went, but that they certainly did not present detailed, rigorous explanations for the evolution of the immune system by random mutation and natural selection — if they had, that knowledge would be reflected in more recent studies that I had had a chance to read.

In a live PowerPoint presentation, Behe had additional comments to make about how the opinion of judge John E. Jones III was not authored by the judge at all, but by an ACLU attorney.  You can see that lecture here.

Immunology

Piling up a stack of books in front of a witness, without notice or a chance to review the literature before offering an educated comment, has no value other than courtroom theatrics.

It was clear that the issue was biological complexity appearing suddenly at the dawn of life. Behe had no burden to go on a fishing expedition through that material. It was up to the examining attorney to direct Behe's attention to the specific topic and ask direct questions. But the attorney never did that.

One of the members of the opposition in Kitzmiller was Nicholas Matzke, who was employed by the NCSE. The NCSE was called upon early by the Kitzmiller plaintiffs, who were later represented by the ACLU. Nick Matzke had been handling the evolution curriculum conflict at Dover as early as the summer of 2004. Matzke tells the story of how he worked with Barbara Forrest on the history of ID, and with Kenneth Miller, their anti-Behe expert. Matzke writes:

“Eric Rothschild and I knew that defense expert Michael Behe was the scientific centerpoint of the whole case — if Behe was found to be credible, then the defense had at least a chance of prevailing. But if we could debunk Behe and the “irreducible complexity” argument — the best argument that ID had — then the defense’s positive case would be sunk.”

Matzke offered guidance on the deposition questions for Michael Behe and Scott Minnich, and was present when Behe and Minnich were deposed.  When Eric Rothschild, the attorney who cross-examined Behe in the trial, flew out to Berkeley for Kevin Padian’s deposition, the NCSE discussed with Rothschild how to deal with Behe.  Matzke describes their strategy:

“One key result was convincing Rothschild that Behe’s biggest weakness was the evolution of the immune system. This developed into the “immune system episode” of the Behe cross-examination at trial, where we stacked up books and articles on the evolution of the immune system on Behe’s witness stand, and he dismissed them all with a wave of his hand.”

It should be noted that, as detailed and involved as the topic of the evolution of the vertebrate immune system is, the fact remains that to this day Michael Behe's 1996 prediction that the immune system is irreducibly complex has not been falsified, even though it is very much falsifiable. I had the opportunity to personally debate Nick Matzke on this very issue. The Facebook thread in which this discussion took place is here, in the ID group called Intelligent Design – Official Page.

Again, to repeat the point I made above regarding the courtroom theatrics of stacking a pile of books in front of Behe: the burden was not on Behe to sift through the material to find evidence that would support Kitzmiller. It was up to the ACLU attorneys to direct Behe's attention to where, in those books and publications, complex biochemical life and the immune system first arose, and then to ask questions specific to that topic. But since Behe was correct that the material was not responsive to the issue in the examination, there was nothing left for the attorneys to do except engage in theatrics.

There is also a related Facebook discussion thread regarding this topic.


Response to Claim That ID Theory Is An Argument from Incredulity

The Contention That Intelligent Design Theory Succumbs To A Logic Fallacy:

It is argued by those who object to the validity of ID Theory that the proposition of design in nature is an argument from ignorance. There is no validity to this unfounded claim because design in nature is well established by the work of William Dembski. For example, here is a database of writings of Dembski: http://designinference.com/dembski-on-intelligent-design/dembski-writings/. Not only are the writings of Dembski peer-reviewed and published, but so are rebuttals that were written in response to his work. Dembski is the person who coined the phrase Complex Specified Information and who argued that it is convincing evidence for design in nature.

Informal Fallacy

The Alleged Gap Argument Problem With Irreducible Complexity:

The argument from ignorance allegation against ID Theory is based upon the design-inspired hypothesis championed by Michael Behe, known as Irreducible Complexity. It is erroneous to claim ID is based upon an argument from incredulity* because ID Theory makes no appeals to the unobservable, supernatural, paranormal, or anything that is metaphysical or outside the scope of science. However, the assertion that the Irreducible Complexity hypothesis is a gap argument is an objection that does need a closer look to determine whether the criticism is valid.

An irreducibly complex system is one in which (a) the removal of a protein renders the molecular machine inoperable, and (b) the biochemical structure has no stepwise evolutionary pathway.

Here's how one would set up an examination using gene knockout, reverse engineering, the study of homology, and genome sequencing:

I. To CONFIRM Irreducible Complexity:

Show:

1. The molecular machine fails to operate upon the removal of a protein.

AND,

2. The biochemical structure has no evolutionary precursor.

II. To FALSIFY Irreducible Complexity:

Show:

1. The molecular machine still functions upon loss of a protein.

OR,

2. The biochemical structure DOES have an evolutionary pathway.

The two qualifiers make falsification easier and confirmation more difficult, as the sketch below illustrates.
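
As a minimal illustration of the logic above (a sketch of my own, not anything from Behe's or Dembski's writings), the following Python snippet encodes the two qualifiers as a predicate. The function name and inputs are hypothetical placeholders; the point is only that confirmation requires both conditions, while either one going the other way is enough to falsify.

```python
def irreducible_complexity_status(fails_without_any_part: bool,
                                  has_stepwise_pathway: bool) -> str:
    """Evaluate the two qualifiers described above.

    fails_without_any_part -- True if knocking out any single protein
                              destroys the system's function.
    has_stepwise_pathway   -- True if a stepwise evolutionary precursor
                              or pathway has been demonstrated.
    """
    if fails_without_any_part and not has_stepwise_pathway:
        return "confirmed (both qualifiers met)"
    # Either qualifier failing is enough to falsify the hypothesis.
    return "falsified (at least one qualifier not met)"


# Example: a system that keeps functioning after a knockout counts as
# falsified, regardless of whether any pathway is known.
print(irreducible_complexity_status(fails_without_any_part=False,
                                    has_stepwise_pathway=False))
```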

Those who object to irreducible complexity often raise the argument that the irreducible complexity hypothesis is based upon there being gaps or negative evidence.   Such critics claim that irreducible complexity is not based upon affirmative evidence, but on a lack of evidence, and as such, irreducible complexity is a gap argument, also known as an argument from ignorance.  However, this assertion that irreducible complexity is nothing other than a gap argument is false.

According to the definition of irreducible complexity, the hypothesis can be falsified either way: by (a) demonstrating that the biochemical system still performs its original function upon the removal of any gene that makes up its parts, or (b) showing that there is in fact a stepwise evolutionary pathway or precursor. Irreducible complexity can still be falsified even if no evolutionary precursor is found, because of the functionality qualifier. In other words, the mere fact that there is no stepwise evolutionary pathway does not automatically mean that the system is irreducibly complex. To confirm irreducible complexity, BOTH qualifiers must be satisfied, but it only takes one of the qualifiers to falsify it. As such, the claim that irreducible complexity is fatally tied to a gap argument is without merit.

It is true that there is a legitimate logic fallacy involved in proving a negative, that is, in claiming to prove nonexistence. But while it is impossible to prove a negative or provide negative proof in the absolute sense, it is logically valid to limit a search for a target to a reasonable search space and obtain a quantity of zero as a scientifically valid answer.


Solving a logic problem might be a challenge, but there is a methodical procedure that will lead to success. The cure to a logic fallacy is simply to correct the error and solve the problem.

The reason the irreducible complexity hypothesis is logically valid is that the prediction that certain biochemical molecular machines are irreducibly complex is not based upon an absence of evidence. If it were, the critics would be correct. But this is not the case. Instead, the irreducible complexity hypothesis requires research, using such procedures in molecular biology as (a) gene knockout, (b) reverse engineering, (c) examining homologous systems, and (d) sequencing the genome of the biochemical structure. The gene knockout procedure was used by Scott Minnich in 2004-2005 to show that the removal of any of the proteins of a bacterial flagellum renders the bacterium incapable of motility (it can no longer swim). Michael Behe also mentions (e) yet another way in which testing irreducible complexity using the gene knockout procedure might falsify the hypothesis here.
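
As a rough illustration of the knockout protocol described above (a toy sketch, not Minnich's actual laboratory procedure), the snippet below loops over a short list of flagellar genes, removes each one in turn, and records whether a simulated motility assay still passes. The gene names are real flagellar genes used purely as labels, and the assay function is a stand-in that simply assumes every listed gene is required, which is the prediction under test.

```python
# Toy sketch of a gene-knockout screen with a hypothetical motility assay.
FLAGELLAR_GENES = ["fliC", "flgK", "motA", "motB"]  # illustrative subset

def motility_assay(present_genes):
    """Stand-in for a wet-lab swim-plate assay: assumes every listed gene
    is required for motility, which is the prediction being tested."""
    return set(FLAGELLAR_GENES).issubset(present_genes)

def knockout_screen(genes):
    results = {}
    for gene in genes:
        remaining = [g for g in genes if g != gene]   # knock out one gene
        results[gene] = motility_assay(remaining)     # True = still motile
    return results

if __name__ == "__main__":
    for gene, still_motile in knockout_screen(FLAGELLAR_GENES).items():
        verdict = ("still motile -> falsifies the prediction" if still_motile
                   else "non-motile -> consistent with the prediction")
        print(f"knockout {gene}: {verdict}")
```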

When the hypothesis of irreducible complexity is tested in the lab using any of the procedures noted above, a thorough investigation is conducted that demonstrates evidence of absence. There is a huge difference between absence of evidence and evidence of absence. One is a logic fallacy, while the other is an empirically generated result, a scientifically valid quantity concluded upon thorough examination. So, depending upon the analysis, you can prove a negative.

Evidence of Absence

Here’s an excellent example as to why irreducible complexity is logically valid, and not an argument from ignorance.  If I were to ask you if you had change for a dollar, you could say, “Sorry, I don’t have any change.” If you make a diligent search in your pockets to discover there are indeed no coins anywhere to be found on your person, then you have affirmatively proven a negative that your pockets were empty of any loose change. Confirming that you had no change in your pockets was not an argument from ignorance because you conducted a thorough examination and found it to be an affirmatively true statement.

The term "irreducible complexity" was coined by Michael Behe in his book, "Darwin's Black Box" (1996). In that book, Behe predicted that certain biochemical systems would be found to be irreducibly complex. Those specific systems were (a) the bacterial flagellum, (b) the cilium, (c) the blood-clotting cascade, and (d) the immune system. It is now 2013 at the time of writing this essay. For 17 years the research has been conducted, and the flagellum has been shown to be irreducibly complex. It has been thoroughly researched, reverse engineered, and its genome sequenced. It is a scientific fact that the flagellum has no precursor. That is not a guess. It is not stated out of ignorance or from taking some wild, uneducated guess. It is not a matter of tossing one's hands up in the air and saying, "I give up." It is a scientific conclusion based upon thorough examination.

Logic Fallacies

Logic fallacies, such as circular reasoning, argument from ignorance, red herring, strawman argument, special pleading, and others, are based upon philosophy and rhetoric. While they might bear on the merit of a scientific conclusion, it is up to the peer-review process to determine the validity of a scientific hypothesis.

Again, suppose you were asked how much change you have in your pockets. You can put your hand in your pocket and look to see how many coins are there. If there is no loose change, it is NOT an argument from ignorance to state, "Sorry, I don't have any spare change." You didn't guess. You stuck your hands in your pockets, looked, and deduced the quantity to be zero. The same is true with irreducible complexity. After the search has taken place, the prediction that the biochemical system is irreducibly complex is upheld and verified. Hence, there is no argument from ignorance.

The accusation that irreducible complexity is an argument from ignorance essentially suggests a surrender and abandonment of ever attempting to empirically determine whether the prediction is scientifically correct. It is absurd for anyone to suggest that ID scientists are not interested in finding Darwinian mechanisms responsible for the evolution of an irreducibly complex biochemical structure. If you lost money from your wallet, it would be ridiculous for someone to accuse you of rejecting any interest in recovering your money. That is essentially what is being claimed when someone raises the argument from ignorance accusation. The fact is you know you did look (you might have turned your house upside down looking), and you know for a fact that the money is missing. That doesn't mean you might not still find it (the premise is still falsifiable). But a thorough examination took place, and you determined the money is gone.

Consider Mysterious Roving Rocks:

On a sun-scorched plateau known as Racetrack Playa in Death Valley, California, rocks of all sizes glide across the desert floor. Some of the rocks accompany each other in pairs, creating parallel trails, even when turning corners, so that the tracks left behind resemble those of an automobile. Other rocks travel solo for hundreds of meters back and forth along the same track. Sometimes these paths lead to their stone vehicle, while other trails lead nowhere, as the marking instrument has vanished.

Roving Rocks

Some of these rocks weigh several hundred pounds. That makes the question: “How do they move?” a very challenging one.  The truth is no one knows just exactly how these rocks move.   No one has ever seen them in motion.  So, how is this phenomenon explained?

A few people have reported seeing Racetrack Playa covered by a thin layer of ice. One idea is that water freezes around the rocks and then wind, blowing across the top of the ice, drags the ice sheet with its embedded rocks across the surface of the playa. Some researchers have found highly congruent trails on multiple rocks that strongly support this movement theory. Others suggest wind alone to be the energy source behind the movement of the roving rocks.

The point is that anyone's guess, prediction, or speculation is as good as anyone else's. All these predictions are testable and falsifiable by simply setting up instrumentation to monitor the movements of the rocks. Are any of these predictions an argument from ignorance? No. As long as the inquisitive examiner makes an effort to determine the answer, this is a perfectly valid scientific endeavor.

The argument from ignorance would only apply when someone gives up, and just draws a conclusion without any further attempt to gain empirical data.  It is not a logic fallacy in and of itself on the sole basis that there is a gap of knowledge as to how the rocks moved from Point A to Point B.  The only logic fallacy would be to draw a conclusion while resisting further examination.  Such is not the case with irreducible complexity.  The hypothesis has endured 17 years of laboratory research by molecular biologists, and the research continues to this very day.

The Logic Fallacy Has No Bearing On Falsifiability:

Here's yet another example of why irreducible complexity is scientifically falsifiable, and therefore not an argument from ignorance. If someone were correct in asserting the argument from incredulity fallacy here, then they would have eliminated all science. By that standard, Newton's law of gravity was an argument from ignorance because Newton knew nothing more than what he had discovered, and it was later falsified by Einstein. So, according to this flawed logic, Einstein's theory of relativity is an argument from ignorance because someone in the future might falsify it with a Theory of Everything.

Whether or not a hypothesis passes the argument from ignorance criterion is an entirely philosophical matter, much like the way a mathematical argument might be asserted. If the argument from ignorance were applied in peer review to all science papers submitted for publication, the science journals would be nearly empty of documents to reference. Science is not based upon philosophical objections and arguments. Science is based upon the definition of science: observation, a falsifiable hypothesis, experimentation, results, and conclusion. It is the fact that these methodical elements are in place that makes science what it is supposed to be, and that is empiricism.

Scientific Method

Whether a scientific hypothesis is falsifiable is not affected by philosophical arguments based upon logic fallacies. Irreducible Complexity is very much falsifiable based upon its definition. The argument from ignorance only attacks the significance of the results and conclusions of research on irreducible complexity; it does not prevent irreducible complexity from being falsifiable. In fact, the argument from ignorance objection actually emphasizes just the opposite: that irreducible complexity might be falsified tomorrow, because it inherently argues the optimism that it is just a matter of time before an evolutionary pathway is discovered in future research. This is not a bad thing; the fact that irreducible complexity is falsifiable is a good thing. That testability and attainable goalpost are what you want in a scientific hypothesis.

ID Theory Is Much More Than Just The One Hypothesis of Irreducible Complexity:

ID Theory is also an applied science; click here for examples in biomimicry. Intelligent Design is applied in areas such as bioengineering, nanotechnology, selective breeding, and bioinformatics, to name a few. ID Theory is a study of information and design in nature, and there are design-inspired conjectures as to where the source of the information originates, such as the rapidly growing field of quantum biology, Natural Genetic Engineering, and front-loading via panspermia.

In conclusion, the prediction that certain biochemical systems exist which are irreducibly complex is not a gap argument. The definition of irreducible complexity is stated above, and it is very much a testable, repeatable, and falsifiable hypothesis. It is a prediction that certain molecular machinery will not operate upon the removal of a part and has no stepwise evolutionary precursor. This was predicted by Behe 17 years ago and still stands, as evidenced by the bacterial flagellum, for example.

*  Even though these two are technically distinguishable logic fallacies, the argument from incredulity is so similar to the argument from ignorance that for purposes of discussion I treat the terms as synonymous.


RESPONSE TO THE MARK PERAKH CRITIQUE, “THERE IS A FREE LUNCH AFTER ALL: WILLIAM DEMBSKI’S WRONG ANSWERS TO IRRELEVANT QUESTIONS”

I. INTRODUCTION

This essay is a reply to chapter 11, authored by Mark Perakh, of the book Why Intelligent Design Fails: A Scientific Critique of the New Creationism (2004). The chapter can be reviewed here. Chapter 11, "There is a Free Lunch After All: William Dembski's Wrong Answers to Irrelevant Questions," is a rebuttal to the book written by William Dembski entitled No Free Lunch (2002). Mark Perakh also authored another anti-ID book, "Unintelligent Design." The Discovery Institute replied to Perakh's work here.

The book by William Dembski, No Free Lunch (2002), is a sequel to his classic, The Design Inference (1998). The Design Inference used mathematical theorems to define design in terms of chance and statistical improbability. In The Design Inference, Dembski explains complexity and argues that when complex information is specified, it indicates design. Simply put, Complex Specified Information (CSI) = design. CSI is the technical term that mathematicians, information theorists, and ID scientists can work with to determine whether some phenomenon or complex pattern is designed.

One of the most important contributors to ID Theory is American mathematician Claude Shannon, who is considered to be the father of Information Theory. Essentially, ID Theory is a sub-theory of Information Theory in the field of Bioinformatics. This is one of Dembski’s areas of expertise.

Claude Shannon is seen here with Theseus, his magnetic mouse. The mouse was designed to search through the corridors until it found the target.

Claude Shannon pioneered the foundations of modern Information Theory. The quantifiable units of information he identified, which are applied in fields such as computer science, are still called Shannon information to this day.

Shannon invented a mouse that was programmed to navigate through a maze to search for a target, concepts that are integral to Dembski's mathematical theorems, which are based upon Information Theory. Once the mouse solved the maze, it could be placed anywhere it had been before and use its prior experience to go directly to the target. If placed in unfamiliar territory, the mouse would continue the search until it reached a known location and then proceed to the target. The device's ability to add new knowledge to its memory is believed to be the first occurrence of artificial learning.

In 1950 Shannon published a paper on computer chess entitled Programming a Computer for Playing Chess. It describes how a machine or computer could be made to play a reasonable game of chess. His process for having the computer decide on which move to make is a minimax procedure, based on an evaluation function of a given chess position. Shannon gave a rough example of an evaluation function in which the value of the black position was subtracted from that of the white position, with material counted according to the usual relative values of the chess pieces. (http://en.wikipedia.org/wiki/Claude_Shannon)
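
To make the minimax idea concrete, here is a minimal, generic sketch (not Shannon's actual program): it searches a small, hand-built game tree and treats the numbers at the leaves as the output of an evaluation function, analogous to white's material minus black's. All values are illustrative.

```python
# Minimal minimax over a toy game tree. Leaf values stand in for Shannon's
# evaluation function (e.g., white material minus black material).
def minimax(node, maximizing):
    if isinstance(node, (int, float)):        # a leaf: the position's evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

if __name__ == "__main__":
    # Hypothetical two-ply tree: each inner list holds the opponent's replies.
    game_tree = [[3, 5], [2, 9], [0, 7]]
    # White (the maximizer) picks the move whose worst-case reply is best.
    print("minimax value of the position:", minimax(game_tree, maximizing=True))  # 3
```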

Shannon's work involved having the computer program scan the possibilities for any given configuration on the chess board to determine the optimum move to make. As you will see, this application of a search through a given phase space for a target, guided by one fitness function among many, is exactly what the debate over Dembski's No Free Lunch (NFL) theorems is about.

When Robert Deyes wrote a review on Stephen Meyer’s “Signature In The Cell,” he noted “When talking about ‘information’ and its relevance to biological design, Intelligent Design theorists have a particular definition in mind.”  Stephen Meyer explained in “Signature In The Cell” that information is: “the attribute inherent in and communicated by alternative sequences or arrangements of something that produce specific effects” (p.86).

When Shannon unveiled his theory for quantifying information, it included several axioms, one of which is that information amounts to a reduction in uncertainty. Similarly, design can be contrasted with chance.

II. COMPLEX SPECIFIED INFORMATION (CSI):

CSI is based upon the theorem:

sp(E) and SP(E) → D(E)

When a small probability (SP) event (E) is complex, and

SP(E) = [P(E|I) < the Universal Probability Bound]. Or, in English, we know an event E is a small probability event when the probability of event E given I is less than the Universal Probability Bound. I = all relevant side information and all stochastic hypotheses. This is all in Dembski's book, The Design Inference.

An event E is specified by a pattern independent of E, or expressed mathematically: sp(E). Upper case letters SP(E) are the small probability event we are attempting to determine is CSI, or designed. Lower case letters sp(E) are a prediction that we will discover the SP(E). Therefore, if sp(E) and SP(E) then  D(E). D(E) means the event E is not only small probability, but we can conclude it is designed.
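
As a minimal sketch of the inference just described (my own illustration, not Dembski's formalism), the snippet below takes the probability of the event E given the side information I in log10 form, to avoid numerical underflow, and combines it with a flag for whether E matches an independently given pattern. The bound of 0.5 x 10^-150 is the one cited in the next paragraph; the example inputs are hypothetical.

```python
import math

# Universal Probability Bound as cited in this essay: 0.5 x 10^-150.
LOG10_UPB = math.log10(0.5) - 150   # work in log10 to avoid underflow

def design_inference(log10_prob_event_given_info: float, specified: bool) -> bool:
    """Sketch of sp(E) and SP(E) -> D(E).

    log10_prob_event_given_info -- log10 of P(E|I) for the event E.
    specified -- True if E matches a pattern given independently of E, i.e. sp(E).
    Returns True when both qualifiers hold, i.e. D(E) is inferred.
    """
    small_probability = log10_prob_event_given_info < LOG10_UPB   # SP(E)
    return specified and small_probability

# Hypothetical examples:
print(design_inference(-200, specified=True))   # True: specified and below the bound
print(design_inference(-10, specified=True))    # False: not improbable enough
```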

Dembski's Universal Probability Bound = 0.5 x 10^-150, or 0.5 times 10 to the negative 150th power. This is the magic number at which one can be scientifically justified in invoking design. It has been said that the improbability Dembski requires in order to ascribe design is comparable to announcing in advance of the deal that you are going to be dealt 24 Royal Flushes in a row, and then having the event play out exactly as forecast. In other words, just as intelligence might be entirely illusory, so likewise CSI is nothing other than a mathematical ratio that might not have anything in the world to do with actual design.

The odds against dealing a Royal Flush, given all combinations of 5 cards randomly drawn from a full deck of 52 without replacement, are 649,739 to 1. According to Dembski, if someone were dealt a Royal Flush 24 times in a row after announcing in advance that such a thing would happen, his contention would be that the event was so improbable that someone cheated, or that "design" had to be involved.

The odds against being dealt a Royal Flush, given all combinations of 5 cards randomly drawn from a full deck of 52 without replacement, are 649,739 to 1.
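
The quoted odds are straightforward to check; the short computation below is nothing more than the arithmetic of counting 5-card hands.

```python
from math import comb

# Verify the odds quoted above: 4 royal flushes among all 5-card hands.
total_hands = comb(52, 5)       # 2,598,960 possible 5-card hands
royal_flushes = 4               # one per suit
odds_against = total_hands // royal_flushes - 1
print(f"{total_hands:,} hands; odds against a royal flush: {odds_against:,} to 1")
# -> 2,598,960 hands; odds against a royal flush: 649,739 to 1
```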

I am oversimplifying CSI just for the sake of making the point that we have already imposed upon the term "design" a technical definition that requires no intelligence or design as we understand the everyday use of those words. What is important is that the improbability of being dealt a Royal Flush illustrates the level of difficulty natural selection is up against in producing what appears to be designed in nature. And when CSI is observed in nature, which occurs occasionally, that not only confirms ID predictions and defies Darwinian gradualism, but also gives a scientist a clue that it might be evidence of additional ID-related mechanisms at work.

It is true that William Dembski's theorems are based upon an assumption that we can quantify everything in the universe; no argument there. But he only used that logic to derive his Universal Probability Bound, which is a nearly infinitesimally small number: 0.5 x 10^-150, or 0.5 times 10 to the negative 150th power. Do you not think that when a probability is this low, it is a safe bet to invoke intervention in natural processes by an intelligent agency? The number is a useful one.

I wrote two essays on CSI to provide a better understanding of the specified complexity introduced in Dembski's book, The Design Inference. In this book, Dembski introduces and expands on the meaning of CSI, and then proceeds to present reasoning as to why CSI implies design. The first essay I wrote on CSI, here, is an elementary introduction to the overall concept. I wrote a second essay, here, that provides a more advanced discussion of CSI.

CSI does show up in nature. The whole point of the No Free Lunch principle is that there is no way evolution can take credit for the occasions when CSI shows up in nature.

III. NO FREE LUNCH

Basically, the book "No Free Lunch" is a sequel to the earlier work, The Design Inference. While we get more calculations that confirm and verify Dembski's earlier work, we also get new assertions from Dembski. It is very important to note that ID Theory is based upon the CSI established in The Design Inference. The main benefit of the second book, "No Free Lunch," is that it further validates and verifies CSI; the importance of this fact cannot be overemphasized. Additionally, "No Free Lunch" further confirms the validity of the assertion that design is inseparable from intelligence.

Before "No Free Lunch," there was little effort to demonstrate that CSI is connected to intelligence. That is a problem because CSI = design. So, if CSI = design, it should be demonstrable that CSI correlates with, and is directly proportional to, intelligence. This is the thesis that the book "No Free Lunch" sets out to establish. If "No Free Lunch" fails to successfully support the thesis that CSI correlates with intelligence, that would not necessarily impair ID Theory, but if Dembski succeeds, then it would all the more lend credibility to ID Theory and certainly to all of Dembski's work as well.

IV. PERAKH’S ARGUMENT

The outline of Perakh’s critique of Dembski’s No Free Lunch theorems is as follows:

1.    Methinks It Is like a Weasel—Again
2.    Is Specified Complexity Smuggled into Evolutionary Algorithms?
3.    Targetless Evolutionary Algorithms
4.    The No Free Lunch Theorems
5.    The NFL Theorems—Still with No Mathematics
6.    The No Free Lunch Theorems—A Little Mathematics
7.    The Displacement Problem
8.    The Irrelevance of the NFL Theorems
9.    The Displacement “Problem”

1.  METHINKS IT IS LIKE A WEASEL – AGAIN

One common demonstration to help people understand how CSI works is to take a letter sequence. This can be done with anything, but the common example is this pattern:

METHINKS•IT•IS•LIKE•A•WEASEL

This letter arrangement is used most often to describe CSI because the math has already been worked out. The bullets represent spaces. There are 27 possibilities at each location in a symbol string 28 characters in length. If the search were entirely random, it would take on the order of 27^28 ≈ 1 x 10^40 attempts (that is, 10 to the 40th power, or a 1 followed by 40 zeroes) to hit the target. It is a small probability. However, natural selection (NS) is smarter than that, and Richard Dawkins has shown how cumulative selection reaches the target in an impressive 43 attempts, as Dembski notes here.

In this example, the improbability was only about 1 in 10^40. CSI requires an even greater improbability than that. If you take a pattern or model such as METHINKS•IT•IS•LIKE•A•WEASEL and keep adding information, you soon reach improbabilities that are within the domain of CSI.

Dembski’s explanation to the target sequence of METHINKS•IT•IS•LIKE•A•WEASEL is as follows:

"Thus, in place of 10^40 tries on average for pure chance to produce the target sequence, by employing the Darwinian mechanism it now takes on average less than 100 tries to produce it. In short, a search effectively impossible for pure chance becomes eminently feasible for the Darwinian mechanism.

“So does Dawkins’s evolutionary algorithm demonstrate the power of the Darwinian mechanism to create biological information? No. Clearly, the algorithm was stacked to produce the outcome Dawkins was after. Indeed, because the algorithm was constantly gauging the degree of difference between the current sequence from the target sequence, the very thing that the algorithm was supposed to create (i.e., the target sequence METHINKS•IT•IS•LIKE•A•WEASEL) was in fact smuggled into the algorithm from the start. The Darwinian mechanism, if it is to possess the power to create biological information, cannot merely veil and then unveil existing information. Rather, it must create novel information from scratch. Clearly, Dawkins’s algorithm does nothing of the sort.

“Ironically, though Dawkins uses a targeted search to illustrate the power of the Darwinian mechanism, he denies that this mechanism, as it operates in biological evolution (and thus outside a computer simulation), constitutes a targeted search. Thus, after giving his METHINKS•IT•IS•LIKE•A•WEASEL illustration, he immediately adds: “Life isn’t like that.  Evolution has no long-term goal. There is no long-distant target, no final perfection to serve as a criterion for selection.” [Footnote] Dawkins here fails to distinguish two equally valid and relevant ways of understanding targets: (i) targets as humanly constructed patterns that we arbitrarily impose on things in light of our needs and interests and (ii) targets as patterns that exist independently of us and therefore regardless of our needs and interests. In other words, targets can be extrinsic (i.e., imposed on things from outside) or intrinsic (i.e., inherent in things as such).

“In the field of evolutionary computing (to which Dawkins’s METHINKS•IT•IS•LIKE•A•WEASEL example belongs), targets are given extrinsically by programmers who attempt to solve problems of their choice and preference. Yet in biology, living forms have come about without our choice or preference. No human has imposed biological targets on nature. But the fact that things can be alive and functional in only certain ways and not in others indicates that nature sets her own targets. The targets of biology, we might say, are “natural kinds” (to borrow a term from philosophy). There are only so many ways that matter can be configured to be alive and, once alive, only so many ways it can be configured to serve different biological functions. Most of the ways open to evolution (chemical as well as biological evolution) are dead ends. Evolution may therefore be characterized as the search for alternative “live ends.” In other words, viability and functionality, by facilitating survival and reproduction, set the targets of evolutionary biology. Evolution, despite Dawkins’s denials, is therefore a targeted search after all.” (http://evoinfo.org/papers/ConsInfo_NoN.pdf).

Weasel Graph

This graph was presented by a blogger who ran a single run of the weasel algorithm, plotting the fitness of the best match for n = 100 and u = 0.2.
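
For readers who want to see the kind of algorithm under discussion, here is a minimal sketch of a Dawkins-style weasel search; it is not Dawkins's original code, and the population size and per-character mutation rate are assumptions (the u = 0.2 in the caption above would be unusually high, so a smaller rate is used here). Note that the target string is handed to the fitness function up front, which is precisely the point at issue in this section.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "   # 27 symbols, as noted above
POP_SIZE = 100        # assumed offspring per generation
MUTATION_RATE = 0.05  # assumed per-character mutation probability

def fitness(candidate):
    # Count the characters that already match the target sequence.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(parent):
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                   for c in parent)

def weasel():
    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    generation = 0
    while parent != TARGET:
        generation += 1
        offspring = [mutate(parent) for _ in range(POP_SIZE)]
        parent = max(offspring + [parent], key=fitness)  # cumulative selection
    return generation

if __name__ == "__main__":
    print("target reached in", weasel(), "generations")
```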

Perakh doesn't make any argument here, but introduces the METHINKS IT IS LIKE A WEASEL configuration as the initial focus of what is to follow. The only derogatory comment he makes about Dembski is the charge that Dembski is "inconsistent." But there is no basis to accuse Dembski of any contradiction. Perakh states himself, "Evolutionary algorithms may be both targeted and targetless" (page 2). He also admits that Dembski was correct in that "Searching for a target IS teleological" (page 2). Yet Perakh faults Dembski for simply noting the teleological inference, and falsely accuses Dembski of contradicting himself on this issue when there is no contradiction. There is no excuse for Perakh to accuse Dembski of being inconsistent here when all he did was acknowledge that teleology should be noted and taken into account when discussing the subject.

Perakh also states on page 3 that Dembski lamented over the observation made by Dawkins.  This is unfounded rhetoric and ad hominem that does nothing to support Perakh’s claims.  There is no basis to assert or benefit to gain by suggesting that Dembski was emotionally dismayed because of the observations made by Dawkins.  The issue is a talking point for discussion.

Perakh correctly represents the fact, “While the meaningful sequence METHINKSITISLIKEAWEASEL is both complex and specified, a sequence NDEIRUABFDMOJHRINKE of the same length, which is gibberish, is complex but not specified” (page 4).  And, then he correctly reasons the following,

“If, though, the target sequence is meaningless, then, according to the above quotation from Behe, it possesses no SC. If the target phrase possesses no SC, then obviously no SC had to be “smuggled” into the algorithm.” Hence, if we follow Dembski’s ideas consistently, we have to conclude that the same algorithm “smuggles” SC if the target is meaningful but does not smuggle it if the target is gibberish.” (Emphasis in original, page 4)

Perakh then arrives at the illogical conclusion that such reasoning is “preposterous because algorithms are indifferent to the distinction between meaningful and gibberish targets.”  Perakh is correct that algorithms are indifferent to teleology and making distinctions.  But, he has no basis to criticize Dembski on this point.

Completed Jigsaw Puzzle

This 40-piece jigsaw puzzle is more complex than the Weasel problem, which consists of only the letters M, E, T, H, I, N, K, S, L, A, W, plus a space.

In the Weasel problem submitted by Richard Dawkins, the solution (target) was provided to the computer up front. The solution to the puzzle was embedded in the letters provided to the computer to arrange into an intelligible sentence. The same analogy applies to a jigsaw puzzle. There is only one end-result picture the puzzle pieces can be assembled to achieve. The information of the picture is embedded in the pieces and is not lost merely by cutting the image into pieces. One can still solve the puzzle even if blinded up front from seeing what the target looks like. There is only one solution to the Weasel problem, so it is a matter of deduction, and not a blind search as Perakh maintains. The task the Weasel algorithm had to perform was to unscramble the letters and rearrange them in the correct sequence.

The METHINKS•IT•IS•LIKE•A•WEASEL target was given up front to the fitness function, and it was intentionally designed CSI to begin with. It is a matter of the definition of specified complexity (SC): if information is both complex and specified, then it is CSI by definition, and CSI = SC; they are two ways to express the same concept. Perakh is correct that the algorithm has nothing in and of itself to do with the specified complexity of the target phrase. The reason a target phrase is specified complexity is that the complex pattern was specified up front to be the target in the first place, independently of the algorithm. So far, then, Perakh has not made an actual point of argument.

Dembski makes subsequent comments about the weasel math here and here.

2.  IS SPECIFIED COMPLEXITY SMUGGLED INTO EVOLUTIONARY ALGORITHMS?

Perakh asserts on page 4 that "Dembski's modified algorithm is as teleological as Dawkins's original algorithm." So what? This is a pointless red herring that Perakh continues to press, and it supports no argument against Dembski. It is essentially a non-argument. All sides (Dembski, Dawkins, and Perakh himself) have conceded up front that discussion of this topic is difficult without stumbling over anthropomorphism. Dembski noted it up front, which is commendable, but somehow Perakh wrongly tags this as some fallacy that Dembski is committing.

Personifying the algorithms as having teleological behavior was a fallacy noted up front. So there is no basis for Perakh to allege that Dembski is somehow misapplying any logic in his discussion. The point was acknowledged by all participants in the discussion from the very beginning. Perakh is not adding anything new here; he is merely restating a point that was already noted. Also, Perakh has yet to raise any actual argument.

Dembski wrote in No Free Lunch (194–196) that evolutionary algorithms do not generate CSI, but can only “smuggle” it from a “higher order phase space.”  CSI is also called specified complexity (SC).  Perakh makes the ridiculous claim on page 4 that this point is irrelevant to biological evolution, but offers little reasoning as to why.  To support his challenge against Dembski, Perakh states, “since biological evolution has no long-term target, it requires no injection of SC.”

The question is whether we can determine if a biological algorithm caused the existence of the CSI.  Dembski says yes, and the theorems he established in The Design Inference are enough to settle the question.  Perakh, however, argues here that the genetic algorithm itself is capable of generating the CSI.  Perakh states that natural selection is unaware of its result (page 4), which is true.  He then says Dembski must “offer evidence that extraneous information must be injected into the natural selection algorithm apart from that supplied by the fitness functions that arise naturally in the biosphere.”  Dembski shows this in “Life’s Conservation Law – Why Darwinian Evolution Cannot Create Biological Information.”

3.  TARGETLESS EVOLUTIONARY ALGORITHMS

Biomorphs

Next, Perakh raises the example made by Richard Dawkins in “The Blind Watchmaker,” in which Dawkins uses what he calls “biomorphs” as an argument against artificial selection.  While Dawkins delivers an imaginative jab to ridicule ID Theory, Perakh’s raising the subject again is pointless.  Dawkins used the illustration of biomorphs to contrast natural selection with the artificial selection upon which ID Theory is based.  It’s an excellent example.  I commend Dawkins on coming up with these biomorph algorithms.  They are unique and original.  You can see color examples of them here.

The biomorphs created by Dawkins are actually sets of intersecting lines of varying degrees of complexity, and they resemble the Rorschach figures often used by psychologists and psychiatrists.  Biomorphs depict inanimate objects like a cradle and a lamp as well as biological forms such as a scorpion, a spider, and a bat.  The exercise is an entire departure from evolution, as it is impossible to make any logical connection as to how a fox would evolve into a lunar lander, or how a tree frog would morph into a precision balance scale.  Since the idea departs from evolutionary logic of any kind, and no rationale connecting any of the forms is provided, it would seem impossible to devise an algorithm that fits biomorphs.

Essentially, Dawkins used these biomorphs to propose a metaphysical conjecture.  His intent is to suggest that ID Theory is a metaphysical contemplation while natural selection is entirely logical reality.  Dawkins explains that the point of raising the idea of biomorphs is:

“… when we are prevented from making a journey in reality, the imagination is not a bad substitute. For those, like me, who are not mathematicians, the computer can be a powerful friend to the imagination. Like mathematics, it doesn’t only stretch the imagination. It also disciplines and controls it.”

Biomorphs submitted by Richard Dawkins from The Blind Watchmaker, figure 5 p. 61

This is an excellent point and well taken.  Dawkins’s idea to reference biomorphs in the discussion was brilliant.  Biomorphs are an excellent means of helping someone distinguish natural selection from artificial selection.  This is exactly the same point design theorists make when protesting the personification of natural selection to achieve reality-defying accomplishments.  What we can conclude is that scientists, regardless of whether they accept or reject ID Theory, dislike the invention of fiction to fill in unknown gaps in phenomena.

In the case of ID Theory, yes, the theory of intelligent design is based upon artificial selection, just as Dawkins notes with his biomorphs.  But, unlike biomorphs and Dawkins’s claim, ID Theory is still based upon fully natural scientific conjectures.

4.  THE NO FREE LUNCH THEOREMS

In this section of the argument, Perakh doesn’t provide an argument.  He’s more interested in talking about his hobby, which is mountain climbing.

The premise offered by Dembski that Perakh seeks to refute is the statement in No Free Lunch which reads, “The No Free Lunch theorems show that for evolutionary algorithms to output CSI they had first to receive a prior input of CSI” (No Free Lunch, page 223).  Somehow, Perakh believes he can prove Dembski’s theorems false.  To accomplish that task, one would have to analyze Dembski’s theorems.  First of all, Dembski’s theorems take into account all the possible factors and variables that might apply, not the algorithms alone.  Perakh makes nothing close to such an evaluation.  Instead, he does nothing but use the mountain climbing analogy to demonstrate that we cannot know exactly which algorithms natural selection will promote and which it will overlook.  This fact is a given up front and not in dispute.  As such, Perakh presents a non-argument here that does nothing to challenge Dembski’s theorems in the slightest.  He doesn’t even discuss the theorems, let alone refute them.

The whole idea of the No Free Lunch theorems is to demonstrate how CSI is smuggled across many generations and then shows up visibly in a phenotype of a life form countless generations later.  Many factors must be contemplated in this process, including evolutionary algorithms.  Dembski’s book, No Free Lunch, is about demonstrating how CSI is smuggled through, which is the whole point from which the book’s name is derived.  If CSI is not manufactured by evolutionary processes, including genetic algorithms, then it must have been displaced from the point at which it was initially front-loaded.  Hence, there’s no free lunch.

Front-Loading could be achieved several ways, one of which is via panspermia.

But, Perakh makes no attempt to discuss the theorems in this section, much less refute Dembski’s work.  I’ll discuss front-loading in the Conclusion.

5.  THE NO FREE LUNCH THEOREMS—STILL WITH NO MATHEMATICS

Perakh finally makes a valid point here.  He highlights a weakness in Dembski’s book: the calculations provided do little to account for the average performance of multiple algorithms operating at the same time.

Referencing his mountain climbing analogy from the previous section, Perakh explains the fitness function is the height of peaks in a specific mountainous region.  In his example he designates the target of the search to be a specific peak P of height 6,000 meters above sea level.

“In this case the number n of iterations required to reach the predefined height of 6,000 meters may be chosen as the performance measure.  Then algorithm a1 performs better than algorithm a2 if a1 converges on the target in fewer steps than a2. If two algorithms generated the same sample after m iterations, then they would have found the target—peak P—after the same number n of iterations. The first NFL theorem tells us that the average probabilities of reaching peak P in m steps are the same for any two algorithms” (Emphasis in the original, page 10).

Since any two algorithms have equal average performance when all possible fitness landscapes are included, the average number n of iterations required to locate the target is the same for any two algorithms, provided the averaging is done over all possible mountainous landscapes.

Therefore, Perakh concludes that Dembski’s no free lunch theorems say nothing about the relative performance of algorithms a1 and a2 on a specific landscape.  On a specific landscape, either a1 or a2 may happen to be much better than its competitor.  Perakh goes on to apply the same logic in a targetless context as well.

These points Perakh raises are well taken.  Subsequent to the writing of Perakh’s book in 2004, Dembski ultimately provided the supplemental math to cure these issues in his paper entitled, “Searching Large Spaces: Displacement and the No Free Lunch Regress” (March 2005), which is available for review here.  It should also be noted that Perakh concludes this section of chapter 11 by admitting that the No Free Lunch theorems “are certainly valid for evolutionary algorithms.”  If that is so, then there is no dispute.

6.  THE NO FREE LUNCH THEOREMS—A LITTLE MATHEMATICS

It is noted that Dembski’s first no free lunch theorem is correct.  It considers any given algorithm performed m times.  The result is a time-ordered sample d comprising m measured values of f within the range Y.  Let P be the conditional probability of having obtained a given sample after m iterations, for given f, Y, and m.

Then, the first equation is

Σf P(d | f, m, a1) = Σf P(d | f, m, a2)

where the summation runs over all possible fitness functions f, and a1 and a2 are two different algorithms.

Perakh emphasizes that this summation is performed over “all possible fitness functions.”  In other words, Dembski’s first theorem proves that when algorithms are averaged over all possible fitness landscapes, the results of a given search are the same for any pair of algorithms.  This is the most basic of Dembski’s theorems, but also the most limited for application purposes.
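The first theorem can be checked empirically on a toy problem.  The following sketch (my own illustration, not code from Dembski, Perakh, or Wolpert and Macready) enumerates every possible fitness function on a four-point search space and shows that a fixed-order search and an adaptive search need the same number of queries, on average, to find a point of value 1:

from itertools import product

# Toy check of the first NFL theorem: averaged over ALL fitness functions on a
# tiny search space, two different non-repeating search algorithms perform
# identically.
X = range(4)                                  # search space of 4 points
FUNCTIONS = list(product([0, 1], repeat=4))   # all 16 fitness functions f: X -> {0, 1}

def fixed_order(history, unvisited):
    # Algorithm a1: always query the lowest-indexed unvisited point.
    return min(unvisited)

def adaptive(history, unvisited):
    # Algorithm a2: query high indices after seeing a 0, low after a 1.
    if history and history[-1][1] == 0:
        return max(unvisited)
    return min(unvisited)

def queries_to_hit(f, algorithm):
    # Number of queries until a point with value 1 is observed (4 if none exists).
    history, unvisited = [], set(X)
    while unvisited:
        x = algorithm(history, unvisited)
        unvisited.remove(x)
        history.append((x, f[x]))
        if f[x] == 1:
            return len(history)
    return len(X)

for name, alg in [("fixed order", fixed_order), ("adaptive", adaptive)]:
    avg = sum(queries_to_hit(f, alg) for f in FUNCTIONS) / len(FUNCTIONS)
    print(name, "average queries over all fitness functions:", avg)

On any single fitness function the two algorithms can differ sharply; it is only the average over all functions that the theorem equalizes, which is the limitation Perakh goes on to press.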

The second equation extends the first to time-dependent landscapes.  Perakh notes several difficulties with the no free lunch theorems, including the fact that evolution is a “coevolutionary” process.  In other words, Dembski’s theorems apply to ecosystems in which a set of genomes all search the same fixed fitness function.  But Perakh argues that in the real biological world the search space changes with each new generation.  The genome of any given population evolves slightly from one generation to the next.  Hence, the search space the genomes are searching is modified with each new generation.

Chess

The game of chess is played one procedural (evolutionary) step at a time. With each successive move (mutation) on the chessboard, the chess-playing algorithm must search a different, new board configuration to determine which next move the computer program (natural selection) should select.

The no free lunch models discussed here are comparable to the computer chess game mentioned above.  With each slight modification (Darwinian gradualism) in the step-by-step process of the game, the pieces end up in different locations on the chessboard, so the search process starts all over again with a new and different search for a different target than the preceding one.

There is one optimum move that is better than the others, which might be considered the preferred target.  Any other reasonable move on the chessboard is rated by the fitness function.  But the problem in evolution is not as clear.  Natural selection is not only blind, and therefore conducts a blind search, but it does not know what the target should be either.

Where Perakh is leading with this foundation is that he is going to suggest in the next section that, given a target up front, as the chess-solving algorithm has, there might be enough information in the description of the target itself to assist the algorithm in at least locating a fitness function.  Whether Perakh is correct can be tested by applying the math.

As mentioned above, subsequent to the publication of Perakh’s book, Dembski provided the supplemental math to address these issues in his paper entitled “Searching Large Spaces: Displacement and the No Free Lunch Regress” (March 2005), which is available for review here.  It should also be noted that Perakh concludes this section of the chapter by admitting that the No Free Lunch theorems “are certainly valid for evolutionary algorithms.”

7.  THE DISPLACEMENT PROBLEM

As already mentioned, the no free lunch theorems show that for evolutionary algorithms to output CSI they must first receive a prior input of CSI.  There’s a term to describe this: displacement.  Dembski wrote in a paper entitled “Evolution’s Logic of Credulity: An Unfettered Response to Allen Orr” (2002) that the key point of writing No Free Lunch concerns displacement; the “NFL theorems merely exemplify one instance not the general case.”

Dembski continues to explain displacement,

“The basic idea behind displacement is this: Suppose you need to search a space of possibilities. The space is so large and the possibilities individually so improbable that an exhaustive search is not feasible and a random search is highly unlikely to conclude the search successfully. As a consequence, you need some constraints on the search – some information to help guide the search to a solution (think of an Easter egg hunt where you either have to go it cold or where someone guides you by saying ‘warm’ and ‘warmer’). All such information that assists your search, however, resides in a search space of its own – an informational space. So the search of the original space gets displaced to a search of an informational space in which the crucial information that constrains the search of the original space resides” (Emphasis in the original, http://tinyurl.com/b3vhkt4).

8.  THE IRRELEVANCE OF THE NFL THEOREMS

In the conclusion of his paper, Searching Large Spaces: Displacement and the No Free Lunch Regress (2005), Dembski writes:

“To appreciate the significance of the No Free Lunch Regress in this latter sense, consider the case of evolutionary biology. Evolutionary biology holds that various (stochastic) evolutionary mechanisms operating in nature facilitate the formation of biological structures and functions. These include preeminently the Darwinian mechanism of natural selection and random variation, but also others (e.g., genetic drift, lateral gene transfer, and symbiogenesis). There is a growing debate whether the mechanisms currently proposed by evolutionary biology are adequate to account for biological structures and functions (see, for example, Depew and Weber 1995, Behe 1996, and Dembski and Ruse 2004). Suppose they are. Suppose the evolutionary searches taking place in the biological world are highly effective assisted searches qua stochastic mechanisms that successfully locate biological structures and functions. Regardless, that success says nothing about whether stochastic mechanisms are in turn responsible for bringing about those assisted searches.” (http://www.designinference.com/documents/2005.03.Searching_Large_Spaces.pdf).

Up to this juncture, Perakh admits, “Within the scope of their legitimate interpretation—when the conditions assumed for their derivation hold—the NFL theorems certainly apply” to evolutionary algorithms.  The only objection he has raised so far is that the NFL theorems do not hold in the case of coevolution.  However, subsequent to this critique, Dembski resolved those issues.

Here, Perakh reasons that even if the NFL theorems were valid for coevolution, he still rejects Dembski’s work because they are irrelevant.  According to Perakh, if evolutionary algorithms can outperform random sampling, aka a “blind search,” then the NFL theorems are meaningless.  Perakh bases this assertion on the statement by Dembski on page 212 of No Free Lunch, which provides, “The No Free Lunch theorems show that evolutionary algorithms, apart from careful fine-tuning by a programmer, are no better than blind search and thus no better than pure chance.”

Therefore, for Perakh, if evolutionary algorithms refute this comment by Dembski by outperforming a blind search, then this is evidence the algorithms are capable of generating CSI.  If evolutionary algorithms generate CSI, then Dembski’s NFL theorems have been soundly falsified, along with ID Theory as well.  If such were the case, then Perakh would be correct, the NFL theorems would indeed be irrelevant.

Perakh rejects the intelligent design “careful fine-tuning by a programmer” terminology in favor of what he considers an equally reasonable premise:

“If, though, a programmer can design an evolutionary algorithm which is fine-tuned to ascend certain fitness landscapes, what can prohibit a naturally arising evolutionary algorithm to fit in with the kinds of landscape it faces?” (Page 19)

Perakh explains how his thesis can be illustrated:

“Naturally arising fitness landscapes will frequently have a central peak topping relatively smooth slopes. If a certain property of an organism, such as its size, affects the organism’s survivability, then there must be a single value of the size most favorable to the organism’s fitness. If the organism is either too small or too large, its survival is at risk. If there is an optimal size that ensures the highest fitness, then the relevant fitness landscape must contain a single peak of the highest fitness surrounded by relatively smooth slopes” (Page 20).

The graphs in Fig. 11.1 schematically illustrate Perakh’s thesis:

Fitness Function

This is Figure 11.1 in Perakh’s book – Fitness as a function of some characteristic, in this case the size of an animal. Solid curve – the schematic presentation of a naturally arising fitness function, wherein the maximum fitness is achieved for a certain single-valued optimal animal’s size. Dashed curve – an imaginary rugged fitness function, which hardly can be encountered in the existing biosphere.

Subsequent to Perakh’s book, published in 2004, Dembski did indeed resolve the issue raised here in his paper, “Conservation of Information in Search: Measuring the Cost of Success” (Sept. 2009), http://evoinfo.org/papers/2009_ConservationOfInformationInSearch.pdf. Dembski’s “Conservation of Information” paper starts with the foundation that laws of information have already been discovered, and that ideas such as Perakh’s thesis were falsified back in 1956 by Leon Brillouin, a pioneer in information theory.  Brillouin wrote, “The [computing] machine does not create any new information, but it performs a very valuable transformation of known information” (L. Brillouin, Science and Information Theory. New York: Academic, 1956).

In his paper, “Conservation of Information,” Dembski and his coauthor, Robert Marks, go on to demonstrate how laws of conservation of information render evolutionary algorithms incapable of generating CSI as Perakh had hoped for.  Throughout this chapter, Perakh continually cited the various works of information theorists, Wolpert and Macready.  On page 1051 in “Conservation of Information” (2009), Dembski and Marks also quote Wolpert and Macready:

“The no free lunch theorem (NFLT) likewise establishes the need for specific information about the search target to improve the chances of a successful search.  ‘[U]nless you can make prior assumptions about the . . . [problems] you are working on, then no search strategy, no matter how sophisticated, can be expected to perform better than any other.’  Search can be improved only by “incorporating problem-specific knowledge into the behavior of the [optimization or search] algorithm” (D. Wolpert and W. G. Macready, ‘No free lunch theorems for optimization,’ IEEE Trans. Evol. Comput., vol. 1, no. 1, pp. 67–82, Apr. 1997).”

In “Conservation of information” (2009), Dembski and Marks resoundingly demonstrate how conservation of information theorems indicate that even a moderately sized search requires problem-specific information to be successful.  The paper proves that any search algorithm performs, on average, as well as random search without replacement unless it takes advantage of problem-specific information about the search target or the search-space structure.

Throughout “Conservation of information” (2009), the paper discusses evolutionary algorithms at length:

“Christensen and Oppacher note the ‘sometimes outrageous claims that had been made of specific optimization algorithms.’ Their concern is well founded. In computer simulations of evolutionary search, researchers often construct a complicated computational software environment and then evolve a group of agents in that environment. When subjected to rounds of selection and variation, the agents can demonstrate remarkable success at resolving the problem in question.  Often, the claim is made, or implied, that the search algorithm deserves full credit for this remarkable success. Such claims, however, are often made as follows: 1) without numerically or analytically assessing the endogenous information that gauges the difficulty of the problem to be solved and 2) without acknowledging, much less estimating, the active information that is folded into the simulation for the search to reach a solution.” (Conservation of information, page 1058).

Dembski and Marks remind us that Perakh’s suggestion that evolutionary algorithms can outperform a blind search is the same scenario as the analogy of the proverbial monkeys typing on keyboards.

The monkeys at typewriters is a classic analogy to describe the chances of evolution being successful to achieve specified complexity.

A monkey at a typewriter is a good illustration of the viability of random evolutionary search.  Dembski and Marks run the calcs for good measure using an alphabet of 27 characters (26 letters plus a space) and a 28-character message.  The answer is 1.59 × 10^42, which is more than the mass of 800 million suns in grams.

In their Conclusion, Dembski and Marks state:

 “Endogenous information represents the inherent difficulty of a search problem in relation to a random-search baseline. If any search algorithm is to perform better than random search, active information must be resident. If the active information is inaccurate (negative), the search can perform worse than random. Computers, despite their speed in performing queries, are thus, in the absence of active information, inadequate for resolving even moderately sized search problems. Accordingly, attempts to characterize evolutionary algorithms as creators of novel information are inappropriate.” (Conservation of information, page 1059).
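As a rough guide to the vocabulary in that conclusion, here is a small sketch (my own, following the definitions of endogenous, exogenous, and active information given in the Dembski and Marks paper; the assisted-search success probability q below is just an assumed illustrative value) applied to the 28-character Weasel target:

from math import log2

p = 27.0 ** -28      # probability that a single blind query hits the target phrase
q = 0.5              # assumed success probability of some assisted search (illustrative)

endogenous = -log2(p)             # difficulty of the problem relative to blind search
exogenous = -log2(q)              # difficulty remaining once the assisted search is used
active = endogenous - exogenous   # information the assisted search itself must supply

print("endogenous information: about", round(endogenous, 1), "bits")
print("active information:     about", round(active, 1), "bits")

The point of the conservation-of-information argument is that this active information has to come from somewhere; it is not produced by the search itself.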

9.  THE DISPLACEMENT “PROBLEM”

This argument is based upon the claim by Dembski on page 202 of his book, “No Free Lunch,” in which he states, “The significance of the NFL theorems is that an information-resource space J does not, and indeed cannot, privilege a target T.”  However, Perakh highlights a problem with Dembski’s statement because the NFL theorems say nothing about any ‘information-resource space.’  If Dembski wanted to introduce this concept within the framework of the NFL theorems, then, Perakh argues, he should have at least shown what the role of an “information-resource space” is in view of the “black-box” nature of the algorithms in question.

On page 203 of No Free Lunch, Dembski introduces the displacement problem:

“… the problem of finding a given target has been displaced to the new problem of finding the information j capable of locating that target. Our original problem was finding a certain target within phase space. Our new problem is finding a certain j within the information-resource space J.”

Perakh adds that the NFL theorems are indifferent to the presence or absence of a target in a search, which leaves the “displacement problem,” with its constant references to targets, hanging in the air.

Dembski’s response is as follows:

“What is the significance of the Displacement Theorem? It is this. Blind search for small targets in large spaces is highly unlikely to succeed. For a search to succeed, it therefore needs to be an assisted search. Such a search, however, resides in a target of its own. And a blind search for this new target is even less likely to succeed than a blind search for the original target (the Displacement Theorem puts precise numbers to this). Of course, this new target can be successfully searched by replacing blind search with a new assisted search. But this new assisted search for this new target resides in a still higher-order search space, which is then subject to another blind search, more difficult than all those that preceded it, and in need of being replaced by still another assisted search.  And so on. This regress, which I call the No Free Lunch Regress, is the upshot of this paper. It shows that stochastic mechanisms cannot explain the success of assisted searches.

“This last statement contains an intentional ambiguity. In one sense, stochastic mechanisms fully explain the success of assisted searches because these searches themselves constitute stochastic mechanisms that, with high probability, locate small targets in large search spaces. Yet, in another sense, for stochastic mechanisms to explain the success of assisted searches means that such mechanisms have to explain how those assisted searches, which are so effective at locating small targets in large spaces, themselves arose with high probability.  It’s in this latter sense that the No Free Lunch Regress asserts that stochastic mechanisms cannot explain the success of assisted searches.” [Searching Large Spaces: Displacement and the No Free Lunch Regress (2005)].

Perakh makes some valid claims.  About six years after the publication of Perakh’s book, Dembski provided updated calcs for the NFL theorems and his application of the math to the displacement problem.  This is available for review in his paper, “The Search for a Search: Measuring the Information Cost of Higher Level Search” (2010).

Perakh discusses the comments made by Dembski to support the assertion that CSI must necessarily be “smuggled” or “front-loaded” into evolutionary algorithms.  Perakh outright rejects Dembski’s claims and proceeds to dismiss Dembski’s work on very weak grounds: what appears to be a hand-wave that begs the question of how the CSI was generated in the first place, and overall circular reasoning.

Remember that the basis of the NFL theorems is to show that when CSI shows up in nature, it is only because it originated earlier in the evolutionary history of that population and was smuggled into the genome of the population by regular evolution.  The CSI might have been front-loaded millions of years earlier in the biological ancestry, possibly in higher taxa.  Regardless of where the CSI originated, Dembski’s claim is that the CSI appears now because it was inserted earlier, since evolutionary processes do not generate CSI.

The smuggling forward of CSI in the genome is called displacement.  Displacement arises because, when Information Theory is applied to identify CSI, the target of the search theorems is the CSI itself.  Dembski explains,

“So the search of the original space gets displaced to a search of an informational space in which the crucial information that constrains the search of the original space resides. I then argue that this higher-order informational space (‘higher’ with respect to the original search space) is always at least as big and hard to search as the original space.” (Evolution’s Logic of Credulity: An Unfettered Response to Allen Orr, 2002.)

It is important to understand what Dembski means by displacement here because Perakh distorts displacement to mean something different in this section.  Perakh asserts:

“An algorithm needs no information about the fitness function. That is how the ‘black-box’ algorithms start a search. To continue the search, an algorithm needs information from the fitness function. However, no search of the space of all possible fitness function is needed. In the course of a search, the algorithm extracts the necessary information from the landscape it is exploring. The fitness landscape is always given, and automatically supplies sufficient information to continue and to complete the search.” (Page 24)

To support these contentions, Perakh references Dawkins’s weasel algorithm for comparison.  The weasel algorithm, says Perakh, “explores the available phrases and selects from them using the comparison of the intermediate phrases with the target.”  Perakh then argues that, in the weasel example, the fitness function has the built-in information necessary to perform the comparison.  He concludes,

“This fitness function is given to the search algorithm; to provide this information to the algorithm, no search of a space of all possible fitness functions is needed and therefore is not performed.” (Emphasis in original, Page 24)

If Perakh is right, then the same is true for natural evolutionary algorithms.  Having bought his own circular reasoning, he then declares that his argument renders Dembski’s “displacement problem” “a phantom.” (Page 24)

One of the problems with this argument is that Perakh admits that there is CSI, yet offers no explanation as to how it originates and increases in the genome of a population in a way that results in greater complexity.  Perakh is begging the question.  He offers no math, no algorithm, no calcs, no example.  He merely imposes his own properties on displacement, which is a strawman, and then shoots that version of displacement down.  There is no attempt to derive how the algorithm ever finds the target in the first place, which is disappointing given that Dembski provides the math to support his own claims.

Perakh appears to be convinced that evolutionary algorithmic searches taking place in the biological world are highly effective assisted searches that successfully locate target biological structures and functions.  And, as such, he is satisfied that these evolutionary algorithms can generate CSI. What Perakh needs to remember is that a genuine evolutionary algorithm is still a stochastic mechanism. The hypothetical success of the evolutionary algorithm says nothing about whether stochastic mechanisms are in turn responsible for bringing about those assisted searches.  Dembski explains,

“Evolving biological systems invariably reside in larger environments that subsume the search space in which those systems evolve. Moreover, these larger environments are capable of dramatically changing the probabilities associated with evolution as occurring in those search spaces. Take an evolving protein or an evolving strand of DNA. The search spaces for these are quite simple, comprising sequences that at each position select respectively from either twenty amino acids or four nucleotide bases. But these search spaces embed in incredibly complex cellular contexts. And the cells that supply these contexts themselves reside in still higher-level environments.” [Searching Large Spaces: Displacement and the No Free Lunch Regress (2005), pp. 31-32]

Dembski argues that the uniform probability on the search space almost never characterizes the system’s evolution; instead, it is a nonuniform probability that brings the search to a successful conclusion, and it is the larger environment that induces this nonuniform probability.  Dembski notes that Richard Dawkins made the same point as Perakh in Climbing Mount Improbable (1996).  In that book, Dawkins argued that biological structures that at first appearance seem impossible with respect to uniform probability, blind search, pure randomness, and so on become probable when the probabilities are reset by evolutionary mechanisms.

Propagation

This diagram shows propagation of active information through two levels of the probability hierarchy.

The kind of search Perakh presents is also addressed in “The Search for a Search: Measuring the Information Cost of Higher Level Search” (2010).  The blind search Perakh complains of is that of uniform probability.  In this kind of problem, given any probability measure on Ω, Dembski’s calcs indicate that the active entropy for any partition with respect to a uniform probability baseline will be nonpositive (The Search for a Search, page 477).  We have no information available about the search in Perakh’s example.  All Perakh gives us is that the fitness function is providing the evolutionary algorithm clues so that the search is narrowed.  But we don’t know what that information is.  Perakh is just speculating that, given enough attempts, the evolutionary algorithm will get lucky and outperform the blind search.  Again, this describes uniform probability.

According to Dembski’s much intensified mathematical analysis, if no information about a search exists so that the underlying measure is uniform, which matches Perakh’s example, “then, on average, any other assumed measure will result in negative active information, thereby rendering the search performance worse than random search.” (The Search for a Search, page 477).

Dembski expands on the scenario:

“Presumably this nonuniform probability, which is defined over the search space in question, splinters off from richer probabilistic structures defined over the larger environment. We can, for instance, imagine the search space being embedded in the larger environment, and such richer probabilistic structures inducing a nonuniform probability (qua assisted search) on this search space, perhaps by conditioning on a subspace or by factorizing a product space. But, if the larger environment is capable of inducing such probabilities, what exactly are the structures of the larger environment that endow it with this capacity? Are any canonical probabilities defined over this larger environment (e.g., a uniform probability)? Do any of these higher level probabilities induce the nonuniform probability that characterizes effective search of the original search space? What stochastic mechanisms might induce such higher-level probabilities?  For any interesting instances of biological evolution, we don’t know the answer to these questions. But suppose we could answer these questions. As soon as we could, the No Free Lunch Regress would kick in, applying to the larger environment once its probabilistic structure becomes evident.” [Searching Large Spaces: Displacement and the No Free Lunch Regress (2005), pp. 32]

The probabilistic structure would itself require explanation in terms of stochastic mechanisms.  And the No Free Lunch Regress blocks any ability to account for assisted searches in terms of stochastic mechanisms (“Searching Large Spaces: Displacement and the No Free Lunch Regress,” 2005).

Today, Dembski has updated his theorems by supplying additional math and considerations.  The NFL theorems are now analyzed both vertically and horizontally, in three-dimensional space.

3-D Geometry

3-D Geometric Application of NFL Theorems

This diagram shows a three-dimensional simplex in {ω1, ω2, ω3}.  The numerical values of a1, a2 and a3 are one.  The 3-D box in the figure presents two congruent triangles in a geometric approach to a proof of the Strict Vertical No Free Lunch Theorem.  In “The Search for a Search: Measuring the Information Cost of Higher Level Search” (2010), the NFL theorems are analyzed both horizontally and vertically.  The Horizontal NFL Theorem shows that the average relative performance of searches never exceeds that of unassisted, or blind, searches.  The Vertical NFL Theorem shows that the difficulty of searching for a successful search increases exponentially with respect to the minimum allowable active information being sought.

This leads to the displacement principle, which holds that “the search for a good search is at least as difficult as a given search.”  Perakh might have raised a good point, but Dembski has since done the math and confirmed his theorems are correct: the math works out, the proofs are provided, and the work is shown.  Perakh, on the other hand, merely offered an argument that was nothing but unverified speculation, with no calcs to validate his point.

V.  CONCLUSION

In the final section of this chapter, Perakh reiterates the main points throughout his article for review. He begins by saying,

“Dembski’s critique of Dawkins’s ‘targeted’ evolutionary algorithm fails to repudiate the illustrative value of Dawkins’s example, which demonstrates how supplementing random changes with a suitable law increases the rate of evolution by many orders of magnitude.” (Page 25)

No, this is a strawman.  Perakh submitted nothing to establish such a conclusion.  Neither Dembski nor the Discovery Institute has any dispute with Darwinian mechanisms of evolution.  The issue is whether ONLY such mechanisms are responsible for specified complexity (CSI).  Intelligent Design proponents do not challenge that “supplementing random changes with a suitable law increases the rate of evolution by many orders of magnitude.”

Next, Perakh claims, “Dembski ignores Dawkins’s ‘targetless’ evolutionary algorithm, which successfully illustrates spontaneous increase of complexity in an evolutionary process.” (Page 25).

No, this isn’t true.  First, Dembski did not ignore Dawkins’s weasel algorithm.  Second, the weasel algorithm isn’t targetless; we’re given the target up front and know exactly what it is.  Third, the weasel algorithm did not show any increase in specified complexity: all the letters in the sequence already existed.  When evolution operates in the real biological world, the genome of the population is reshuffled from one generation to the next.  No new information is added that leads to greater complexity; the morphology results from the same information being rearranged.

In the case of the Weasel example, the target was already embedded in the original problem, just like one and only one full picture is possible to assemble from pieces of a jigsaw puzzle.  When the puzzle is completed, not one piece should be missing, unless one was lost, and there should not be one extra piece too many.  The CSI was the original picture that was cut up into pieces to be reassembled.  The Weasel example is actually a better illustration for front-loading.  All the algorithm had to do was figure out how to arrange the letters back into the proper intelligible sequence.

The CSI was specified up front in the target, or fitness function, to begin with.  The point of the NFL theorems is that if the weasel algorithm were a real-life evolutionary example, then that complex specified information (CSI) would have been input into the genome of that population in advance.  But the analogy quickly breaks down for many reasons.

Perakh then asserts, “Contrary to Dembski’s assertions, evolutionary algorithms routinely outperform a random search” (Page 25).  This is false.  Perakh speculated that this was a possibility, and Dembski not only refuted it but demonstrated that, without problem-specific information, evolutionary algorithms on average perform no better than a random search.

Perakh next maintains:

“Contrary to Dembski assertion, the NFL theorems do not make Darwinian evolution impossible. Dembski’s attempt to invoke the NFL theorems to prove otherwise ignores the fact that these theorems assert the equal performance of all algorithms only if averaged over all fitness functions.” (Page 25).

No, there is no such assertion by Dembski.  This is nonsense.  Intelligent Design proponents do not assert any false dichotomy.  ID Theory supplements evolution, providing the conjecture necessary to explain specified complexity.  Darwinian evolution still occurs, but it only explains inheritance and diversity; it is ID Theory that explains complexity.  As for the NFL theorems asserting the equal performance of all algorithms on blind searches, this is ridiculous and was never established by Perakh.

Perakh also claims:

“Dembski’s constant references to targets when he discusses optimization searches are based on his misinterpretation of the NFL theorems, which entail no concept of a target. Moreover, his discourse is irrelevant to Darwinian evolution, which is a targetless process.” (Page 25).

No, Dembski did not misinterpret the NFL theorems upon which he built his own work.  The person who misunderstands and misrepresents them is Perakh.  It is statements like this that cause one to doubt whether Perakh understands what CSI is, either.  If you notice the trend in his writing, when Perakh looked for support for an argument, he referenced those who have authored rebuttals in opposition to Dembski’s work; but when he looked for an authority to explain the meaning of Dembski’s work, he nearly always cited Dembski himself.  Perakh never performs any math to support his own challenges.  Finally, Perakh never established anywhere that Dembski misunderstood or misapplied any of the principles of Information Theory.

Finally, Perakh ends the chapter with this gem:

“The arguments showing that the anthropic coincidences do not require the hypothesis of a supernatural intelligence also answer the questions about the compatibility of fitness functions and evolutionary algorithms.” (Page 25).

This is a strawman.  ID Theory has nothing to do with the supernatural.  If it did, then it would not be a scientific theory by the definition of science, which is based upon empiricism.  As is obvious in this debate, Intelligent Design theory is more closely aligned with Information Theory than most sciences.  ID Theory is not about teleology; it is more about front-loading.

William Dembski’s work is based upon pitting “design” against chance.  In his book The Design Inference, he used mathematical theorems and formulas to devise a definition for design based upon mathematical probability.  It is an empirical way to work with improbable complex information patterns and sequences.  It is called specified complexity, also known as complex specified information (CSI).  There is no contemplation as to the source of the information other than it being front-loaded.  ID Theory only involves a study of the information (CSI) itself.  Design = CSI.  We can study CSI because it is observable.

There is absolutely no speculation of any kind to suggest that the source of the information is extraterrestrial beings or any other kind of designer, natural or non-natural.  The study is of the information (CSI) itself, nothing else.  There are several non-Darwinian conjectures as to how the information can develop without the need for designers, among them panspermia, natural genetic engineering, and what’s called “front-loading.”

In ID, “design” does not require designers.  It can be equated with being derived from “intelligence,” as per William Dembski’s book, “No Free Lunch,” but he uses mathematics to support his work, not metaphysics.  The intelligence could be illusory.  All the theorems detect is extreme improbability, because that is all the math can do.  And it is called “Complex Specified Information.”  It is the information that ID Theory is about.  There is no speculation into the nature of the intelligent source, assuming that Dembski was right in determining the source is intelligent in the first place.  All it really takes is a transporter of the information, which could be an asteroid that collides with Earth carrying complex DNA in the genome of some unicellular organism.  You don’t need a designer to validate ID Theory; ID has nothing to do with designers except for engineers and intelligent agents that are actually observable.


COMPLEX SPECIFIED INFORMATION (CSI) – An Explanation of Specified Complexity

This entry is a sequel to my original blog essay on CSI, which was a more elementary discussion that can be reviewed here.  I also highly recommend watching this YouTube video, which does an excellent job of describing CSI.

Complex Specified Information (CSI) is also called specified complexity.  CSI is a concept that is not original to Dr. William Dembski.  Specified Complexity was first noted in 1973 by Origin of Life researcher, Leslie Orgel:

Living organisms are distinguished by their specified complexity. Crystals fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity. [ L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189. Emphases added.]

Before beginning this discussion on CSI, it should be understood first as to why it is important.  The scientific theory of Intelligent Design (ID) is based upon important concepts, such as design, information, and complexity.  Design in the context of ID Theory is discussed in terms of CSI.  The following is the definition of ID Theory:

Intelligent Design Theory in Biology is the scientific theory that artificial intervention is a universally necessary condition of the first initiation of life, development of the first cell, and increasing information in the genome of a population leading to greater complexity evidenced by the generation of original biochemical structures.

Authorities:

* Official Discovery Institute definition: http://www.intelligentdesign.org/whatisid.php
* Stephen Meyer’s definition: http://www.discovery.org/v/1971
* Casey Luskin’s Discussion: http://www.evolutionnews.org/2009/11/misrepresenting_the_definition028051.html
* William Dembski’s definition: http://www.uncommondescent.com/id-defined

Please observe that this expression of ID Theory does not appeal to any intelligence or designer. Richard Dawkins was correct when he said that what is thought to be design is illusory. Design is defined by William Dembski as Complex Specified Information (CSI).

“Intelligent Design” is an extremely anthropomorphic concept in itself.  The Discovery Institute does not work much with the term “intelligence.” The key to ID Theory is not in the term “intelligence,” but in William Dembski’s work in defining design. And, that is “Complex Specified Information” (CSI). It’s CSI that is the technical term that ID scientists work with. Dembski produced the equations, ran the calculations, and provided a scientifically workable method to determine whether some phenomenon is “designed.” According to Dembski’s book, “The Design Inference” (1998), CSI is based upon statistical probability.

CSI is based upon the theorem:

sp(E) and SP(E) -> D(E)

When a small probability (SP) event (E) is complex, and

SP(E) = [P(E|I) < the Universal Probability Bound]. Or, in English, we know an event E is a small probability event when the probability of event E given I is less than the Universal Probability Bound. Here I = all relevant side information and all stochastic hypotheses. This is all in Dembski’s book, “The Design Inference.”

An event E is specified by a pattern independent of E, expressed mathematically as sp(E). The upper-case SP(E) denotes the small probability event we are attempting to determine is CSI, or designed. The lower-case sp(E) denotes a prediction that we will discover the SP(E). Therefore, if sp(E) and SP(E), then D(E). D(E) means the event E is not only of small probability, but we can conclude it is designed.
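As a toy formalization of the notation above (my own sketch, not code published by Dembski), the inference rule can be written directly:

# Toy sketch of the rule: sp(E) and SP(E) -> D(E).
UNIVERSAL_PROBABILITY_BOUND = 0.5 * 10 ** -150

def SP(p_event_given_I):
    # SP(E): the probability of E, given all side information I, is below the bound.
    return p_event_given_I < UNIVERSAL_PROBABILITY_BOUND

def D(p_event_given_I, sp):
    # D(E): infer design only when E is independently specified (sp) AND of small probability.
    return sp and SP(p_event_given_I)

print(D(1e-160, sp=True))   # True: specified and far below the bound
print(D(1e-160, sp=False))  # False: improbable but not specified
print(D(1e-10, sp=True))    # False: specified but not improbable enough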

Royal Flush

Dembski’s Universal Probability Bound = 0.5 x 10^-150, or 0.5 times 10 to the negative 150th power. This is the threshold at which one is scientifically justified in invoking design. It has been said that the improbability Dembski requires in order to ascribe design is comparable to announcing in advance, before dealing, that you are going to be dealt 24 Royal Flushes in a row, and then having the event play out exactly as forecast. In other words, just as intelligence might be entirely illusory, so likewise CSI is nothing other than a mathematical ratio that might not have anything to do with actual design.

The odds of dealing a Royal Flush, given all combinations of 5 cards randomly drawn from a full deck of 52 without replacement, are 649,739 to 1 against. According to Dembski, if someone were dealt a Royal Flush 24 times in a row after an advance announcement predicting it, his contention would be that the event is so improbable that someone cheated, or that “design” would have had to be involved.
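For readers who want to check the arithmetic, here is a short sketch (my own illustration, not a calculation from Dembski’s book) of how quickly the improbability compounds with repeated, pre-announced Royal Flushes:

from math import comb

hands = comb(52, 5)        # 2,598,960 possible five-card hands
p_flush = 4 / hands        # one Royal Flush per suit: roughly 1 in 649,740

for n in (1, 5, 24):
    print(n, "consecutive Royal Flushes: probability about", f"{p_flush ** n:.3e}")

Whether a given run of flushes actually crosses the Universal Probability Bound depends on how many consecutive hands are specified in advance; the sketch only shows how fast the probability falls.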

(I should say parenthetically here that I am oversimplifying CSI just for the sake of making this point that we have already imposed upon the term “design” a technical definition that requires no intelligence or design as we understand the everyday normal use of the words. Therefore, for one to take issue with the ingenious marketing term “Intelligent Design” is meaningless because what label the theory is called is irrelevant.  Such a dispute on that issue is nothing other than haggling about nomenclature. The Discovery Institute could have labeled their product by any name. I would have preferred the title, “Bio-information Theory,” but the name is inconsequential.)

A helpful way to understand CSI is that, just as it is improbable to be dealt a Royal Flush, so natural selection faces a comparable level of difficulty in producing what appears to be designed in nature. And when CSI is observed in nature, which occurs occasionally, it not only confirms ID predictions and defies Darwinian gradualism, but also gives a scientist a clue that such might be evidence of additional ID-related mechanisms at work.

It is true that William Dembski’s theorems are based upon an assumption that we can quantify everything in the universe; no argument there. But he only used that logic to derive his Universal Probability Bound, which is a nearly infinitesimally small number: 0.5 x 10^-150, or 0.5 times 10 to the negative 150th power.  Do you not think that when a probability is this low it is a safe bet to invoke corruption of natural processes by an intelligent agency? The number is a useful number. If anyone believes the ratio Dembski submitted is flawed, I would ask that person to offer a different number that they believe would more accurately eliminate chance in favor of design.

Design theorists are interested in studying complexity; the more complex the information, the better.  CSI is best understood as a pattern so improbable that the chance of such a configuration occurring by sheer happenstance is extremely small.  Dembski’s formulas and theorems work best when there is an extremely low probability.

It is a given that we do not know everything in the universe, including such intangible variables as dark matter, where neutrinos go when they zip in and out of our universe, and cosmic background radiation. Dembski was aware of these unknown variables; it isn’t as if he ignored them when deriving his theorems. The Universal Probability Bound is not a perfectly absolute number, but it is very much a scientifically workable number, no less credible than the variables used to work the equations in support of the Big Bang Theory. So, if one seeks to disqualify CSI on the sole basis that we do not know everything in the universe, then one has just eliminated the Big Bang Theory as scientifically viable.

A religious person is welcome to invoke a teleological inference of a deity, but the moment one does that, one has departed from science and is entertaining religion.  CSI might or might not imply design. That’s the whole point of Dembski’s book, “The Design Inference.” In the book he expands on the meaning of CSI and then presents his reasoning as to why CSI implies design. While those who reject ID Theory see invisible designers hiding behind every tree, the point Dembski makes is that we must first establish design to begin with.

The Delicate Arch in Utah.  Is this bridge a product of design?

The Delicate Arch in Utah is a natural bridge.  It is difficult to debate whether this is an example of specified complexity because some might argue the natural arch is “specified” in the sense that it is a natural bridge.

The arguments in favor of natural arches being specified would emphasize the meaningfulness and functionality of the monument as a bridge.  Also, the mere fact that attention is drawn to this particular natural formation is in and of itself a form of specification.

Arguments against such a natural arch being specified would emphasize the fact that human experience has already observed that geological processes are capable of producing such a formation.  Also, a natural arch is a one-of-a-kind structure: no two arches resemble each other in such detail that the identity of one could be mistaken for the other.  Finally, the concept emphasized by William Dembski is that specification relates to a prediction.  In other words, had someone drawn this arch in advance on a piece of paper without ever having seen the actual monument, and had the land formation later discovered in Utah happened to be an exact replica of the drawing, then design theorists would declare the information specified.

The meaning of the term specified is very important to understanding CSI.  The term “specified” in a certain sense either directly or indirectly refers to a prediction. If someone deals you a Royal Flush, the pattern is complex. If you are dealt a Royal Flush several consecutive times, someone at the poker table is going to accuse you of cheating; the sequence is now increasing in improbability and complexity.  A Royal Flush is specified because it is a pattern that many people are aware of and have identified in advance.

Now, if you or the dealer ANNOUNCES IN ADVANCE that you are going to be dealt a Royal Flush, and sure enough it happens, then there is no longer any question that the target sequence was “specified.”

Dembski explains how the notion of being specified might best be understood by discussing what he calls conditionally independent patterns.  In applying his Explanatory Filter, Dembski states:

The patterns that in the presence of complexity or improbability implicate a designing intelligence must be independent of the event whose design is in question. Crucial here is that patterns not be artificially imposed on events after the fact. For instance, if an archer shoots arrows at a wall and we then paint targets around the arrows so that they stick squarely in the bull’s-eyes, we impose a pattern after the fact. Any such pattern is not independent of the arrow’s trajectory. On the other hand, if the targets are set up in advance (“specified”) and then the archer hits them accurately, we know it was not by chance but rather by design (provided, of course, that hitting the targets is sufficiently improbable). The way to characterize this independence of patterns is via the probabilistic notion of conditional independence. A pattern is conditionally independent of an event if adding our knowledge of the pattern to a chance hypothesis does not alter the event’s probability under that hypothesis. The specified in specified complexity refers to such conditionally independent patterns. These are the specifications.  [From: William A. Dembski, The Design Revolution: Answering the Toughest Questions About Intelligent Design (Downers Grove, IL: InterVarsity Press, 2004), 81.]

Mount Rushmore is an example of CSI because it relays information that is both complex and specified. More than just the complexity of a hillside, it features the specified information of identifiable U.S. presidents.

Is Mount Rushmore a product of natural processes or of an intelligent cause?  Most people would likely agree that this rock formation in the Black Hills of South Dakota is the result of intelligent design.  I believe that an intelligent agent is responsible for this rock formation, and this belief is based upon reasoning.  Notice that when you determined for yourself that this monument is a deliberately sculpted work done by an intelligent cause (I assume you did so), you did not draw upon a religious view to arrive at that conclusion.

 

The crevices and natural coloration of the rock at Eagle Rock, California portray a remarkable illusion of an eagle in flight.

Snowflakes are also considered when contemplating CSI.  Snowflakes are very complex, and appear to also be specified.  However, in spite of the great detail, recognizable pattern, and beauty of a snowflake, no two snowflakes are alike.  A snowflake would be specified if a second one were to be found that identically matched the first.

Snowflake1

Snowflake2 Snowflake3

The shapes of snow crystals are due to the laws of physics, which determine their regular geometric 6-pointed pattern.  As such, a snowflake has no CSI whatsoever, because snowflakes are produced by natural processes.  The snowflake is complex, but it is not complex specified information.  Meteorological conditions are also a factor in the shape of a snow crystal, so snow is a product of both physical laws and chance.  One thing to note about snow crystals: because they form from atmospheric conditions governed by the laws of physics, they retain a degree of simplicity despite the countless configurations their shapes might take.

Critics have challenged William Dembski on snowflakes in the past, arguing that snowflakes are every bit as complex as other simple objects that are known to be designed.  It is true that the complexity of snow crystals makes them look like good candidates for evidence of design, and this is why the concept of being specified is so important.  However intricate the details of snow may be, it is the lack of specificity that keeps snow crystals from being CSI.  A shortcut test of whether snowflakes are designed would be to find two snowflakes that are identical.  The probability that some snowflake exists is simply 1, but the probability that an identical replica recurs a second time is vanishingly small; finding such a match would be evidence of design.  That is what is meant by being specified.  Specification in the context of ID Theory is not mere intricacy of detailed patterns alone.

While some ID critics believe snowflakes refute Dembski’s Explanatory Filter, reasoning that their extremely low probability should force the filter to declare them designed, I see it as just the opposite.  The fact that we know, as a given, that snowflakes are not designed should lend us confidence in the cases where the Explanatory Filter does determine that some feature is designed.
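For readers who like to see the logic spelled out, here is a minimal Python sketch of the Explanatory Filter’s decision sequence as described above.  It is my own illustration, not code published by Dembski; the threshold of 1 in 10^150 is Dembski’s universal probability bound, and the function name and inputs are hypothetical choices made for the sake of the example.

```python
# A toy sketch of the Explanatory Filter's decision sequence (illustrative only).
# Inputs: whether a law/necessity explanation exists, the event's probability
# under the relevant chance hypothesis, and whether an independent,
# given-in-advance specification exists.

UNIVERSAL_PROBABILITY_BOUND = 1e-150  # Dembski's universal probability bound

def explanatory_filter(explained_by_law: bool, probability: float, specified: bool) -> str:
    """Return the verdict the filter would reach for an event."""
    if explained_by_law:
        return "necessity (law)"   # regular, law-like outcomes, e.g. snowflake geometry
    if probability >= UNIVERSAL_PROBABILITY_BOUND or not specified:
        return "chance"            # not improbable enough, or no independent pattern
    return "design"                # small probability AND independently specified

# A snowflake: its six-fold geometry is the product of physical law.
print(explanatory_filter(explained_by_law=True, probability=1e-200, specified=False))
# A long, independently specified pattern whose probability falls below the bound.
print(explanatory_filter(explained_by_law=False, probability=1e-160, specified=True))
```

The order of the checks matters: law is eliminated first, then chance, and design is only ever the verdict of last resort.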

This brings up an important point about CSI.  There are many instances, snowflakes among them, where information is highly complex and appears to be specified as well.  Information can be arranged in varying degrees of complexity and specificity, yet it is only CSI when the improbability reaches the Universal Probability Bound.  So what do we call something that merely looks like CSI, but is not, because application of Dembski’s Explanatory Filter determines the pattern is not designed?  When a pattern looks like it might be CSI but actually is not, as with snowflakes, Dembski calls this specificational complexity.

Dembski explains:

Because they are patterns, specifications exhibit varying degrees of complexity. A specification’s degree of complexity determines how many specificational resources must be factored in when gauging the level of improbability needed to preclude chance (see the previous point). The more complex the pattern, the more specificational resources must be factored in. The details are technical and involve a generalization of what mathematicians call Kolmogorov complexity. Nevertheless, the basic intuition is straightforward. Low specificational complexity is important in detecting design because it ensures that an event whose design is in question was not simply described after the fact and then dressed up as though it could have been described before the fact.

To see what’s at stake, consider the following two sequences of ten coin tosses: HHHHHHHHHH and HHTHTTTHTH. Which of these would you be more inclined to attribute to chance? Both sequences have the same probability, approximately 1 in 1,000. Nevertheless, the pattern that specifies the first sequence is much simpler than the second. For the first sequence the pattern can be specified with the simple statement “ten heads in a row.” For the second sequence, on the other hand, specifying the pattern requires a considerably longer statement, for instance, “two heads, then a tail, then a head, then three tails, then heads followed by tails and heads.” Think of specificational complexity (not to be confused with specified complexity) as minimum description length. (For more on this, see <www.mdl-research.org>.)

For something to exhibit specified complexity it must have low specificational complexity (as with the sequence HHHHHHHHHH, consisting of ten heads in a row) but high probabilistic complexity (i.e., its probability must be small). It’s this combination of low specificational complexity (a pattern easy to describe in relatively short order) and high probabilistic complexity (something highly unlikely) that makes specified complexity such an effective triangulator of intelligence. But specified complexity’s significance doesn’t end there.

Besides its crucial place in the design inference, specified complexity has also been implicit in much of the self-organizational literature, a field that studies how complex systems emerge from the structure and dynamics of their parts. Because specified complexity balances low specificational complexity with high probabilistic complexity, specified complexity sits at that boundary between order and chaos commonly referred to as the “edge of chaos.” The problem with pure order (low specificational complexity) is that it is predictable and thus largely uninteresting. An example here is a crystal that keeps repeating the same simple pattern over and over. The problem with pure chaos (high probabilistic complexity) is that it is so disordered that it is also uninteresting. (No meaningful patterns emerge from pure chaos. An example here is the debris strewn by a tornado or avalanche.) Rather, it’s at the edge of chaos, neatly ensconced between order and chaos, that interesting things happen. That’s where specified complexity sits. [From: William A. Dembski, The Design Revolution: Answering the Toughest Questions About Intelligent Design (Downers Grove, IL: InterVarsity Press, 2004), 81.]
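Dembski’s coin-toss illustration is easy to reproduce.  Below is a minimal Python sketch of my own; it uses a simple run-length encoding as a crude stand-in for minimum description length (the rigorous notion, Kolmogorov complexity, is not computable).  The point is only that HHHHHHHHHH admits a much shorter description than HHTHTTTHTH even though both sequences have exactly the same probability.

```python
from itertools import groupby

def run_length_description(seq: str) -> str:
    """A simple run-length 'description' of a toss sequence, e.g. 'HHHHHHHHHH' -> '10H'."""
    return "".join(f"{len(list(run))}{ch}" for ch, run in groupby(seq))

for seq in ["HHHHHHHHHH", "HHTHTTTHTH"]:
    probability = 0.5 ** len(seq)                 # ten fair tosses: (1/2)^10 = 1/1024
    description = run_length_description(seq)
    print(f"{seq}: probability 1 in {int(round(1 / probability))}, "
          f"description '{description}' ({len(description)} characters)")
```

The first sequence compresses to just “10H,” while the second needs a description nearly as long as the sequence itself; low specificational complexity combined with small probability is exactly the combination the design inference looks for.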

THAT’S WHY I NEVER WALK IN THE FRONT!

This Far Side cartoon is an illustration Michael Behe uses in his lectures to demonstrate that we can often deduce design.  Even though no trapper is anywhere in sight, it is obvious from the scene that the snare was intentionally designed; such machinery does not arise by sheer happenstance.  Behe also makes the point that no religious contemplation was required to conclude that someone deliberately set this trap, even though the agent that assembled the machine is absent.

This next image is extremely graphic, but it illustrates the same point.  Here, a blue duiker has been trapped in a snare.

Sometimes people simply cannot decide whether a formation is the result of happenstance or intelligence.  A perfect example of something that looks designed but might not be is the so-called monuments on Mars.  Are the monuments on Mars caused by chance or by design?  Are these formations the product of natural processes, or are they artificial?  Here are some more interesting images that help illustrate CSI in a simple way.

It is also interesting to note that antagonists who scoff at ID because of its supposedly unfair inference to designers are automatically conceding design as a given. The teleological inference works both ways: if design points to a designer, then a designer requires design. Without design, a designer does not exist.

As such, if one desires to oppose ID Theory, a preferable argument would be to insist design does not appear in nature, and abandon the teleological inference.

Here’s more on Complex Specified Information (CSI):

* From the IDEA Center, http://www.ideacenter.org/contentmgr/showdetails.php/id/832

* By Dembski himself, http://www.designinference.com/documents/02.02.POISK_article.htm

See also William Dembski’s book, “The Design Inference” (http://www.designinference.com/desinf.htm).   The Discovery Institute has written about CSI here and here.

Darwinian mechanisms (which are based upon chance) will most likely not be the cause of CSI, because CSI is by definition a small-probability event.  CSI is not zero probability; it is small probability.  There remains a possibility that Darwinian mechanisms could produce CSI, but CSI is more likely to be caused by something that replaces the element of chance.  CSI is a low-probability ratio that exposes the absence of chance.  Whatever stands in for that absence of chance (call it intelligence, design, artificial intervention, quantum theory, an asteroid, abiogenesis, RNA self-replication, some unknown pre-life molecular configuration, epigenetics, or whatever) is the most likely cause of CSI.  As such, ID scientists assume that CSI = design.

In another of his books, “No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence,” Dembski explains why he thinks CSI is linked with intelligence.   He further discusses his views here.

CSI is a mathematical probability ratio that exposes a small-probability event, removing chance from the equation.  And whatever one chooses to fill that vacuum with, ID scientists fill it with design.  So, in ID Theory, whenever the word “design” appears, it means CSI.  It is therefore false to read designers into the context of ID Theory, because the ID definition of design is none other than CSI.

The point is that ID scientists define design as CSI.  Skeptics of ID Theory should therefore stop invoking designers, because all that design means in ID Theory is the absence of chance, expressed mathematically as a low-probability ratio.

CSI is an assumption, not an argument.  It is an axiom postulated up front and grounded in mathematical theorems; it is all couched in math.  Unless the small probability ratio reaches zero, no one working out the calculations is going to say “cannot.”  CSI is assumed to be design, and natural causes are assumed NOT to generate CSI, because CSI is by definition a small-probability event that favors the absence of chance.

We cannot be certain the source is an intelligent agency; CSI is based upon probabilities.  Many credit Darwinian evolution as the source of complexity.  That conclusion fares poorly when the design-theorem calculations are run, but it is not impossible.  As Richard Dawkins has noted, design can be illusory.  The hypothesis that Darwinian evolution is the cause of some small-probability event SP(E) could be correct, but it is highly improbable according to the math.

One common demonstration to help people understand how CSI works is to take a letter sequence. This can be done with anything, but the common example is this pattern:

METHINKS•IT•IS•LIKE•A•WEASEL

This letter arrangement is used most often to describe CSI because the math has already been worked out. The bullets represent spaces. There are 27 possibilities at each location in a symbol string 28 characters in length, so a blind, purely random search would face odds of about 1 in 10^40 (that is, 1 followed by 40 zeroes) of hitting the target phrase. It’s a small probability. However, natural selection (NS) is smarter than that, and Richard Dawkins’s famous WEASEL program shows how a selection-driven search can reach the target in an impressively small number of generations, 43 in his published run (http://evoinfo.org/papers/ConsInfo_NoN.pdf).

In this example, the improbability was only about 1 in 10^40.  CSI requires an even more extreme figure than that.  But if you take a pattern or model such as METHINKS•IT•IS•LIKE•A•WEASEL and keep adding information, you soon reach improbabilities that fall within the domain of CSI.
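The arithmetic behind that 1-in-10^40 figure is easy to check.  Here is a minimal Python sketch (my own, simply reproducing the numbers quoted above):

```python
import math

target = "METHINKS•IT•IS•LIKE•A•WEASEL"
alphabet_size = 27                    # 26 letters plus the space (shown as '•')
length = len(target)                  # 28 characters

configurations = alphabet_size ** length          # 27^28 possible strings
information_bits = math.log2(configurations)      # size of the search space in bits

print(f"Possible strings: 27^{length} ≈ {configurations:.2e}")            # ≈ 1.2e+40
print(f"Chance of one blind guess hitting the target: about 1 in 10^{round(math.log10(configurations))}")
print(f"Information content of the target: ≈ {information_bits:.0f} bits")  # ≈ 133 bits
```

At roughly 133 bits, the WEASEL phrase is still well short of the 500-bit region associated with the universal probability bound later in this article, which is why you have to keep adding information before you reach the domain of CSI.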

For more of Dembski’s applications using his theorems, you might like to reference these papers:

http://evoinfo.org/papers/2010_TheSearchForASearch.pdf

http://marksmannet.com/RobertMarks/REPRINTS/2010-EfficientPerQueryInformationExtraction.pdf

This work is a continuation of Claude Shannon’s. One of the most important contributors to the foundations of ID Theory is the American mathematician Claude Shannon (http://en.wikipedia.org/wiki/Claude_Shannon), who is considered the father of Information Theory (http://en.wikipedia.org/wiki/Information_Theory). Essentially, ID Theory is a sub-theory of Information Theory applied in the field of bioinformatics, which is one of Dembski’s areas of expertise: http://evoinfo.org/publications/.

When Robert Deyes wrote a review on Stephen Meyer’s “Signature In The Cell,” he noted “When talking about ‘information’ and its relevance to biological design, Intelligent Design theorists have a particular definition in mind.”   Stephen Meyer explained in “Signature In The Cell” that information is: “the attribute inherent in and communicated by alternative sequences or arrangements of something that produce specific effects” (p.86).

Shannon was instrumental in the development of computer science. He built Theseus, an early maze-solving robotic mouse, and wrote one of the first papers on programming a computer to play chess. When Shannon unveiled his theory for quantifying information, it rested on several axioms, one being that information is inversely related to probability: the less probable an event, the more information its occurrence conveys. In a similar spirit, design can be contrasted with chance.
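That relationship is easy to make concrete. In Shannon’s framework the self-information of an outcome is -log2 of its probability, measured in bits; the following minimal Python sketch (my own illustration) shows how rarer outcomes carry more bits:

```python
import math

def surprisal_bits(probability: float) -> float:
    """Shannon self-information of an outcome: -log2(p), in bits."""
    return -math.log2(probability)

examples = {
    "a fair coin lands heads (p = 1/2)": 1 / 2,
    "one specific card is drawn from a deck (p = 1/52)": 1 / 52,
    "a bridge hand of 13 spades (p ≈ 1.57e-12)": 1 / 635_013_559_600,
}

for description, p in examples.items():
    print(f"{description}: {surprisal_bits(p):.1f} bits")
# 1.0 bits, 5.7 bits, 39.2 bits: the less probable the outcome, the more bits it carries.
```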

It was Peter Medawar who referred to these ideas as “The Law of Conservation of Information.” Dembski’s critics have charged that his applications are too heavily modified to be associated with the Law of Conservation of Information. There is no dispute that Dembski made modifications; he adapted the concept to apply to biology.

FROM UNCOMMON DESCENT GLOSSARY:

The Uncommon Descent blog further notes the following in re CSI:

The concept of complex specified information helps us understand the difference between (a) the highly informational, highly contingent aperiodic functional macromolecules of life and (b) regular crystals formed through forces of mechanical necessity, or (c) random polymer strings. In so doing, they identified a very familiar concept — at least to those of us with hardware or software engineering design and development or troubleshooting experience and knowledge. Furthermore, on massive experience, such CSI reliably points to intelligent design when we see it in cases where we independently know the origin story.

What Dembski did with the CSI concept starting in the 1990′s was to:

(i) recognize CSI’s significance as a reliable, empirically observable sign of intelligence,

(ii) point out the general applicability of the concept, and

(iii) provide a probability and information theory based explicitly formal model for quantifying CSI.

(iv) In the current formulation, as at 2005, his metric for CSI, χ (chi), is:

χ = –log2[10^120 · ϕS(T) · P(T|H)]

P(T|H) is the probability of being in a given target zone in a search space, on a relevant chance hypothesis (e.g., the probability of a hand of 13 spades from a shuffled standard deck of cards).

ϕS(T) is a multiplier based on the number of similarly simple and independently specifiable targets (e.g., hands that are all Hearts, all Diamonds, all Clubs, or all Spades).

10^120 is Seth Lloyd’s estimate for the maximum number of elementary bit-based operations possible in our observable universe, serving as a reasonable upper limit on the number of search operations.

–log2[ . . . ] converts the modified probability into a measure of information in binary digits, i.e., specified bits. When this value is at least +1, we may reasonably infer the presence of design from the evidence of CSI alone. (For the example being discussed, χ = -361; that is, odds of 1 in 635 billion are insufficient to confidently infer design on the gamut of the universe as a whole. On the gamut of a card game here on Earth, though, it would be a very different story.) http://www.uncommondescent.com/glossary/
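The glossary’s worked example can be reproduced directly from the formula given above. Here is a minimal Python sketch (my own), assuming ϕS(T) = 4 for the four one-suit hands and P(T|H) = 1/C(52,13), the chance of dealing 13 spades from a well-shuffled deck:

```python
import math

phi_S_T = 4                              # similarly simple targets: all spades, hearts, diamonds, clubs
P_T_given_H = 1 / math.comb(52, 13)      # 1 / 635,013,559,600 ≈ 1.57e-12

# chi = -log2(10^120 * phi_S(T) * P(T|H)); computed in log space to avoid overflow.
chi = -(120 * math.log2(10) + math.log2(phi_S_T) + math.log2(P_T_given_H))

print(f"P(T|H) = 1 in {math.comb(52, 13):,}")
print(f"chi ≈ {chi:.0f} bits")           # ≈ -361, far short of the +1 needed to infer design
```

A deeply negative χ means the event, though rare by everyday standards, is nowhere near rare enough to stand out against the 10^120 opportunities available across the observable universe.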

FSCI — “functionally specified complex information” (or, “function-specifying complex information” or — rarely — “functionally complex, specified information” [FCSI])) is a commonplace in engineered systems: complex functional entities that are based on specific target-zone configurations and operations of multiple parts with large configuration spaces equivalent to at least 500 – 1,000 bits; i.e. well beyond the Dembski-type universal probability bound.

In the UD context, it is often seen as a descriptive term for a useful subset of CSI first identified by origin of life researchers in the 1970s – 80′s. As Thaxton et al summed up in their 1984 technical work that launched the design theory movement, The Mystery of Life’s Origin:

. . . “order” is a statistical concept referring to regularity such as could characterize a series of digits in a number, or the ions of an inorganic crystal. On the other hand, “organization” refers to physical systems and the specific set of spatio-temporal and functional relationships among their parts. Yockey and Wickens note that informational macromolecules have a low degree of order but a high degree of specified complexity.” [TMLO (FTE, 1984), Ch 8, p. 130.]

So, since in the cases of known origin such are invariably the result of design, it is confidently but provisionally inferred that FSCI is a reliable sign of intelligent design. http://www.uncommondescent.com/glossary/


How Sodium Channels Support Intelligent Design Theory


A single-celled choanoflagellate, Monosiga brevicollis

Sodium (Na) is an element that is often written with a plus sign (Na+), indicating that it carries a positive charge.  Each element in the Periodic Table has a definite number of protons, which in the neutral atom is matched by an equal number of electrons.  Atoms are not at their most stable until their outer valence shell is complete, which they achieve by either giving up electrons or taking on electrons from other elements.

Sodium has 11 electrons:

  • 2 in its first energy level (1s2)
  • 8 in the second (2s2 2p6)
  • 1 in the outer energy level (3s1)

Sodium is commonly found as a positively charged ion because it has given up that one outer electron.  Neutral sodium is not stable with a lone electron in its outermost energy level, so Na will give up its outer electron whenever possible. When it does, energy level 2 becomes the outermost level, and the atom achieves stability with that level’s complete set of 8 electrons; the resulting ion carries a +1 charge.  This electrical property helps make sodium an important element in neuron activity and in the nervous systems of living things.
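The bookkeeping behind that +1 charge can be shown in a few lines. This is a purely illustrative Python sketch of my own, using the shell counts listed above:

```python
# Electron shell occupancy of a neutral sodium atom (Na, atomic number 11).
sodium_shells = {1: 2, 2: 8, 3: 1}                     # 1s2 | 2s2 2p6 | 3s1
assert sum(sodium_shells.values()) == 11               # neutral atom: 11 electrons match 11 protons

# Ionization: Na gives up its lone outer (3s) electron and becomes Na+.
sodium_ion_shells = {shell: count for shell, count in sodium_shells.items() if shell != 3}

print(sodium_ion_shells)                               # {1: 2, 2: 8}, outermost shell is now a full octet
print("Net charge:", 11 - sum(sodium_ion_shells.values()))   # +1
```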

Nervous systems and their component neurons are a key innovation, making possible communication across vast distances between cells in the body, as well as sensory perception, behavior, and complex animal brains.

Researchers from the University of Texas at Austin, led by Harold Zakon, professor of neurobiology, together with Professor David Hillis and graduate student Benjamin Liebeskind, coauthored a paper published in PNAS in May 2011.   Zakon notes, “The first nervous systems appeared in jellyfish-like animals six hundred million years ago or so.”  For nervous systems to be possible, their precursor sodium channels would have had to be in place prior to the development of jellyfish.  Zakon confirmed, “We have now discovered that sodium channels were around well before nervous systems evolved.”

According to the University of Texas at Austin press release, sodium channels are an integral part of a neuron’s complex machinery. Sodium channels are described as being “like floodgates lodged throughout a neuron’s levee-like cellular membrane. When the channels open, sodium floods through the membrane into the neuron.”  This generates nerve impulses, and from there complex nervous systems can be built, all because of the seemingly endless electrical possibilities opened up by the positive charge of sodium ions.

The Univ. of Texas research team discovered the genes for such sodium channels hiding in a primitive single-celled organism, a choanoflagellate.  Choanoflagellates are eukaryotes, the supposed evolutionary relatives of multicellular animals like jellyfish and humans. It is interesting to note that choanoflagellates are not only unicellular but also have no neurons at all.

The press release further states,

Because the sodium channel genes were found in choanoflagellates, the scientists propose that the genes originated not only before the advent of the nervous system, but even before the evolution of multicellularity itself.

Sodium channel genes are complex.  The Univ. of Texas research team illustrates the protein such a gene encodes in their PNAS paper here:

Sodium-Channel Protein

The image above is Figure 1 in the PNAS paper, a hypothetical rendition of the secondary structure of a sodium-channel protein.  Transmembrane domains at the top (DI–DIV), their component segments (S1–S6), and their connecting loops (in white) are in view. The pore loops (P loop), which dip down into the membrane, form the ion-selectivity filter. The inactivation gate resides on the long loop between DIII/S6 and DIV/S1.  The middle section illustrates how the domains cluster to form the protein and its pore.  The lower section displays the fine structure of one of the domains with the pore loop in the foreground. The black dots on the pore loops in the top and bottom panels represent the locations of the amino acids that make up the pore motif.

Monosiga brevicollis

As the image immediately above indicates, the sodium-channel protein is highly conserved, appearing at nearly the deepest known branching level within the domain Eukarya.  In other words, it has essentially existed from the very beginning of the Eukarya domain.

In another study of sodium channels published in Physiological Genomics (May 2011), Swiss researchers reported the same conclusion.  The story was featured in both Science Daily and PhysOrg under the headline, “Fluid equilibrium in prehistoric organisms sheds light on a turning point in evolution.”

The Swiss team researched how sodium channels help solve a problem for primitive cells that cannot pump sodium across their membranes effectively.  The inability to pump sodium was an evolutionary roadblock.   Bernard Rossier (Univ. of Lausanne) set out to determine how the problem was solved: a certain subunit of a gene for pumping sodium suddenly “appeared” out of nowhere, and the rest was history.

In humans, the sodium channel protein (ENaC) traverses a cell’s membrane and facilitates the movement of salt into and out of the cell.  ENaC is regulated by the hormone aldosterone.  The Swiss researchers found that ENaC and Na, K-ATPase, an enzyme that also plays a role in pumping and transporting sodium, were in place before the emergence of multi-celled organisms.

When tracing the alpha, beta, and gamma subunits of ENaC back, the Swiss “team found that the beta subunit appeared slightly before the emergence of Metazoans (multicellular animals with differentiated tissues) roughly 750 million years ago.”

Rossier was unsure exactly when this emergence occurred.  He said that although it is possible the genes for ENaC originated in the common ancestor of eukaryotes and were lost in all branches except the Metazoa and the Excavates, there is another possibility: a lateral transfer of genes between N. gruberi and a metazoan ancestor, one that lived between the last common ancestor of all eukaryotes and the first Metazoans.

Phylogenetic tree of ENaC/degenerin

While both the Univ. of Texas and Swiss teams use phylogenetic trees to examine the evolution of these highly conserved proteins and enzymes, it is clear that these biochemical systems and this complex cellular machinery have been conserved and present in species from the earliest dawn of the Eukarya domain.  Given that these highly complex systems were required for eukaryote evolution to be possible and yet existed from the very beginning of eukaryotic cells, the evidence is more supportive of Intelligent Design theory than of known mechanisms of evolution.
