Response to Claim That ID Theory Is An Argument from Incredulity

The Contention That Intelligent Design Theory Succumbs To A Logic Fallacy:

Those who object to the validity of ID Theory argue that the proposition of design in nature is an argument from ignorance. This claim is unfounded, because design in nature is well established by the work of William Dembski. For example, here is a database of Dembski's writings: http://designinference.com/dembski-on-intelligent-design/dembski-writings/. Not only are Dembski's writings peer-reviewed and published, but so are rebuttals written in response to his work. Dembski coined the phrase Complex Specified Information and argued that it is convincing evidence for design in nature.

Informal Fallacy

The Alleged Gap Argument Problem With Irreducible Complexity:

The argument from ignorance allegation against ID Theory is based upon the design-inspired hypothesis championed by Michael Behe, known as Irreducible Complexity. It is erroneous to claim ID is based upon an argument from incredulity* because ID Theory makes no appeal to the unobservable, supernatural, paranormal, or anything else metaphysical or outside the scope of science. However, the assertion that the Irreducible Complexity hypothesis is a gap argument is an objection that deserves a closer look to determine whether the criticism is valid.

An irreducibly complex system is one in which (a) the removal of a protein renders the molecular machine inoperable, and (b) the biochemical structure has no stepwise evolutionary pathway.

Here's how one would set up an examination using gene knockout, reverse engineering, the study of homology, and genome sequencing:

I. To CONFIRM Irreducible Complexity:

Show:

1. The molecular machine fails to operate upon the removal of a protein.

AND,

2. The biochemical structure has no evolutionary precursor.

II. To FALSIFY Irreducible Complexity:

Show:

1. The molecular machine still functions upon loss of a protein.

OR,

2. The biochemical structure DOES have an evolutionary pathway.

The two qualifiers make falsification easier and confirmation more difficult, as the sketch below illustrates.
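To make the two qualifiers concrete, here is a minimal sketch in Python (purely illustrative; the data fields and function names are hypothetical stand-ins for the lab results described above):

    from dataclasses import dataclass

    @dataclass
    class KnockoutResult:
        """Hypothetical summary of the lab work described above."""
        still_functions_after_any_knockout: bool   # from gene knockout experiments
        stepwise_precursor_found: bool             # from homology / genome sequencing

    def classify(result: KnockoutResult) -> str:
        # Falsified if EITHER qualifier fails
        if result.still_functions_after_any_knockout or result.stepwise_precursor_found:
            return "irreducible complexity falsified"
        # Confirmed only when BOTH qualifiers are satisfied
        return "irreducible complexity confirmed (pending further research)"

    print(classify(KnockoutResult(still_functions_after_any_knockout=False,
                                  stepwise_precursor_found=False)))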

Those who object to irreducible complexity often argue that the hypothesis rests on gaps or negative evidence. Such critics claim that irreducible complexity is based not upon affirmative evidence but upon a lack of evidence, and that it is therefore a gap argument, also known as an argument from ignorance. This assertion, however, is false.

According to the definition of irreducible complexity, the hypothesis can be falsified either way: by (a) demonstrating that the biochemical system still performs its original function upon the removal of any gene that makes up its parts, or (b) showing that the biochemical structure does have a stepwise evolutionary pathway or precursor. Irreducible complexity can still be falsified even if no evolutionary precursor is found, because of the functionality qualifier. In other words, the mere fact that there is no stepwise evolutionary pathway does not automatically mean that the system is irreducibly complex. To confirm irreducible complexity, BOTH qualifiers must be satisfied, but it takes only one of them to falsify it. As such, the claim that irreducible complexity is fatally tied to a gap argument is without merit.

It is true that there is a legitimate logical problem known as proving a negative, and the related question of whether one can ever prove nonexistence. While it is impossible to prove a universal negative or provide negative proof, it is logically valid to limit a search for a target to a reasonable search space and obtain a quantity of zero as a scientifically valid answer.

Solving a logic problem might be a challenge, but there is a methodical procedure that will lead to success. The cure for a logical fallacy is simply to correct the error and solve the problem.

The reason the irreducible complexity hypothesis is logically valid is that the prediction that certain biochemical molecular machines are irreducibly complex is not based upon an absence of evidence. If it were, the critics would be correct. But this is not the case. Instead, the irreducible complexity hypothesis requires research, using such procedures in molecular biology as (a) gene knockout, (b) reverse engineering, (c) examining homologous systems, and (d) sequencing the genome of the biochemical structure. The gene knockout procedure was used by Scott Minnich in 2004-2005 to show that the removal of any of the proteins of a bacterial flagellum renders the bacterium incapable of motility (it can no longer swim). Michael Behe also mentions (e) yet another way that testing irreducible complexity using the gene knockout procedure might falsify the hypothesis here.

When the hypothesis of irreducible complexity is tested in the lab using any of the procedures noted above, a thorough investigation is conducted that produces evidence of absence. There is a huge difference between absence of evidence and evidence of absence. One is a logical fallacy, while the other is an empirically generated result, a scientifically valid quantity concluded upon thorough examination. So, depending upon the analysis, you can prove a negative.

Evidence of Absence

Here's an excellent example of why irreducible complexity is logically valid and not an argument from ignorance. If I were to ask you whether you had change for a dollar, you could say, "Sorry, I don't have any change." If you make a diligent search of your pockets and discover there are indeed no coins anywhere on your person, then you have affirmatively proven a negative: your pockets are empty of loose change. Confirming that you had no change in your pockets was not an argument from ignorance, because you conducted a thorough examination and found the statement to be affirmatively true.

The term irreducible complexity was coined by Michael Behe in his book Darwin's Black Box (1996). In that book, Behe predicted that certain biochemical systems would be found to be irreducibly complex, specifically (a) the bacterial flagellum, (b) the cilium, (c) the blood-clotting cascade, and (d) the immune system. It is now 2013 at the time of writing this essay. For 17 years the research has been conducted, and the flagellum has been shown to be irreducibly complex. It has been thoroughly researched, reverse engineered, and its genome sequenced. It is a scientific fact that the flagellum has no precursor. That's not a guess. It is not a statement of ignorance based on a wild, uneducated guess, nor a matter of tossing one's hands up in the air and saying, "I give up." It is a scientific conclusion based upon thorough examination.

Logic Fallacies

Logical fallacies such as circular reasoning, the argument from ignorance, the red herring, the strawman argument, and special pleading belong to philosophy and rhetoric. While they might bear on the merit of a scientific conclusion, it is up to the peer-review process to determine the validity of a scientific hypothesis.

Again, suppose you were asked how much change you have in your pockets. You can put your hand in your pocket and look to see how many coins are there. If there is no loose change, it is NOT an argument from ignorance to state, "Sorry, I don't have any spare change." You didn't guess. You put your hands in your pockets, looked, and determined the quantity to be zero. The same is true with irreducible complexity. After the search has taken place, the prediction that the biochemical system is irreducibly complex is upheld and verified. Hence, there is no argument from ignorance.

The accusation that irreducible complexity is an argument from ignorance essentially suggests a surrender and abandonment of ever attempting to determine empirically whether the prediction is scientifically correct. It is absurd to suggest that ID scientists are not interested in finding Darwinian mechanisms responsible for the evolution of an irreducibly complex biochemical structure. If you lost money from your wallet, it would be ridiculous for someone to accuse you of having no interest in recovering it. That is essentially what is being claimed when someone raises the argument from ignorance accusation. The fact is you did look (you might have turned your house upside down looking), and you know the money is missing. That does not mean you might not still find it (the premise remains falsifiable). But a thorough examination took place, and you determined the money is gone.

Consider Mysterious Roving Rocks:

On a sun-scorched plateau known as Racetrack Playa in Death Valley, California, rocks of all sizes glide across the desert floor. Some of the rocks accompany each other in pairs, creating parallel trails, even around corners, so that the tracks left behind resemble those of an automobile. Other rocks travel solo for hundreds of meters back and forth along the same track. Sometimes these paths lead to their stone vehicle, while other trails lead nowhere, the marking instrument having vanished.

Roving Rocks

Some of these rocks weigh several hundred pounds, which makes the question "How do they move?" a very challenging one. The truth is no one knows exactly how these rocks move. No one has ever seen them in motion. So how is this phenomenon explained?

A few people have reported seeing Racetrack Playa covered by a thin layer of ice. One idea is that water freezes around the rocks and then wind, blowing across the top of the ice, drags the ice sheet with its embedded rocks across the surface of the playa. Some researchers have found highly congruent trails on multiple rocks that strongly support this theory of movement. Others suggest wind alone is the energy source behind the movement of the roving rocks.

The point is that anyone's guess, prediction, or speculation is as good as anyone else's. All of these predictions are testable and falsifiable simply by setting up instrumentation to monitor the movements of the rocks. Are any of these predictions an argument from ignorance? No. As long as the inquisitive examiner makes an effort to determine the answer, this is a perfectly valid scientific endeavor.

The argument from ignorance would only apply when someone gives up and draws a conclusion without any further attempt to gain empirical data. It is not a logical fallacy in and of itself merely because there is a gap in knowledge as to how the rocks moved from Point A to Point B. The only fallacy would be to draw a conclusion while resisting further examination. Such is not the case with irreducible complexity. The hypothesis has endured 17 years of laboratory research by molecular biologists, and the research continues to this very day.

The Logic Fallacy Has No Bearing On Falsifiability:

Here's yet another example of why irreducible complexity is scientifically falsifiable, and therefore not an argument from ignorance. If the argument from incredulity objection were correct, it would eliminate all science. Newton's law of gravity would have been an argument from ignorance because he knew nothing more than what he had discovered; it was later falsified by Einstein. By this flawed logic, Einstein's theory of relativity is an argument from ignorance because someone in the future might falsify it with a Theory of Everything.

Whether or not a hypothesis passes the argument from ignorance criterion is an entirely philosophical question, much like a mathematical argument. If the argument from ignorance objection were applied in peer review to all science papers submitted for publication, the journals would be nearly empty of documents to reference. Science is not based upon philosophical objections and arguments. Science is based upon the definition of science: observation, a falsifiable hypothesis, experimentation, results, and conclusion. It is the fact that these methodical elements are in place that makes science what it is supposed to be, namely empirical.

Scientific Method

Whether a scientific hypothesis is falsifiable is not affected by philosophical arguments about logical fallacies. Irreducible complexity is very much falsifiable based upon its definition. The argument from ignorance objection only attacks the significance of the results and conclusions of research into irreducible complexity; it does not prevent irreducible complexity from being falsifiable. In fact, the objection emphasizes just the opposite: that irreducible complexity might be falsified tomorrow, because it inherently argues the optimism that it is just a matter of time before an evolutionary pathway is discovered in future research. This is not a bad thing; the fact that irreducible complexity is falsifiable is a good thing. That testability and attainable goalpost are exactly what you want in a scientific hypothesis.

ID Theory Is Much More Than Just The One Hypothesis of Irreducible Complexity:

ID Theory is also an applied science; click here for examples in biomimicry. Intelligent Design is applied in areas such as bioengineering, nanotechnology, selective breeding, and bioinformatics, to name a few. ID Theory is a study of information and design in nature, and there are design-inspired conjectures as to where the source of information originates, such as the rapidly growing field of quantum biology, Natural Genetic Engineering, and front-loading via panspermia.

In conclusion, the prediction that certain biochemical systems exist which are irreducibly complex is not a gap argument. The definition of irreducible complexity is stated above, and it is very much a testable, repeatable, and falsifiable hypothesis. It is a prediction that certain molecular machines will not operate upon the removal of a part and have no stepwise evolutionary precursor. This was predicted by Behe 17 years ago and still holds, as evidenced by the bacterial flagellum, for example.

*  Even though these two are technically distinguishable logic fallacies, the argument from incredulity is so similar to the argument from ignorance that for purposes of discussion I treat the terms as synonymous.


RESPONSE TO THE MARK PERAKH CRITIQUE, “THERE IS A FREE LUNCH AFTER ALL: WILLIAM DEMBSKI’S WRONG ANSWERS TO IRRELEVANT QUESTIONS”

I. INTRODUCTION

This essay is a reply to chapter 11 of the book by Mark Perakh entitled Why Intelligent Design Fails: A Scientific Critique of the New Creationism (2004). The chapter can be reviewed here. Chapter 11, "There is a Free Lunch After All: William Dembski's Wrong Answers to Irrelevant Questions," is a rebuttal to William Dembski's book No Free Lunch (2002). Mark Perakh also authored another anti-ID book, Unintelligent Design. The Discovery Institute replied to Perakh's work here.

William Dembski's No Free Lunch (2002) is a sequel to his classic The Design Inference (1998). The Design Inference used mathematical theorems to define design in terms of chance and statistical improbability. In it, Dembski explains complexity and demonstrates that when complex information is specified, it indicates design. Simply put, Complex Specified Information (CSI) = design. CSI is the technical term that mathematicians, information theorists, and ID scientists can work with to determine whether some phenomenon or complex pattern is designed.

One of the most important contributors to ID Theory is American mathematician Claude Shannon, who is considered to be the father of Information Theory. Essentially, ID Theory is a sub-theory of Information Theory in the field of Bioinformatics. This is one of Dembski’s areas of expertise.

Claude Shannon is seen here with Theseus, his magnetic mouse. The mouse was designed to search through the corridors until it found the target.

Claude Shannon pioneered the foundations of modern information theory. The quantifiable units of information he identified, applied in fields such as computer science, are still called Shannon information to this day.

Shannon invented a mouse that was programmed to navigate through a maze to search for a target, concepts that are integral to Dembski's mathematical theorems, which are based upon information theory. Once the mouse solved the maze, it could be placed anywhere it had been before and use its prior experience to go directly to the target. If placed in unfamiliar territory, the mouse would continue searching until it reached a known location and then proceed to the target. The ability of the device to add new knowledge to its memory is believed to be the first occurrence of artificial learning.

In 1950 Shannon published a paper on computer chess entitled "Programming a Computer for Playing Chess." It describes how a machine or computer could be made to play a reasonable game of chess. His process for having the computer decide on which move to make is a minimax procedure based on an evaluation function of a given chess position. Shannon gave a rough example of an evaluation function in which the value of the black position was subtracted from that of the white position, with material counted according to the usual relative values of the chess pieces. (http://en.wikipedia.org/wiki/Claude_Shannon)
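As a rough illustration of the kind of evaluation function Shannon described, here is a minimal sketch using the conventional relative piece values; it is not Shannon's actual program, and the full minimax tree search is omitted:

    # Conventional relative piece values (pawn = 1, knight/bishop = 3, rook = 5, queen = 9)
    PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

    def evaluate(white_pieces: list[str], black_pieces: list[str]) -> int:
        """Shannon-style material count: white's total minus black's total.
        Positive favors white, negative favors black."""
        white = sum(PIECE_VALUES.get(p, 0) for p in white_pieces)
        black = sum(PIECE_VALUES.get(p, 0) for p in black_pieces)
        return white - black

    # Example: white has lost a knight, black has lost a rook
    print(evaluate(["Q", "R", "R", "B", "B", "N"] + ["P"] * 8,
                   ["Q", "R", "B", "B", "N", "N"] + ["P"] * 8))   # +2 in white's favor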

Shannon's work involved applying what he knew at the time so that the computer program could scan all possibilities for any given configuration on the chess board and determine the optimum move to make. As you will see, this kind of search of a phase space for a target, one fitness function among many, as characterized in computer chess, is exactly what the debate over Dembski's No Free Lunch (NFL) theorems is about.

When Robert Deyes wrote a review on Stephen Meyer’s “Signature In The Cell,” he noted “When talking about ‘information’ and its relevance to biological design, Intelligent Design theorists have a particular definition in mind.”  Stephen Meyer explained in “Signature In The Cell” that information is: “the attribute inherent in and communicated by alternative sequences or arrangements of something that produce specific effects” (p.86).

When Shannon unveiled his theory for quantifying information, it included several axioms, one of which is that information corresponds to a reduction in uncertainty: the less probable an outcome, the more information its occurrence conveys. Similarly, design can be contrasted with chance.
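The standard textbook way of quantifying that relationship (a formula from information theory generally, not something specific to Dembski) is the self-information of an outcome: the less probable the outcome, the more information its occurrence carries.

    from math import log2

    def self_information_bits(p: float) -> float:
        """Shannon self-information of an outcome with probability p, in bits."""
        return -log2(p)

    print(self_information_bits(0.5))     # 1 bit (a fair coin flip)
    print(self_information_bits(1 / 27))  # ~4.75 bits (one symbol from a 27-letter alphabet)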

II. COMPLEX SPECIFIED INFORMATION (CSI):

CSI is based upon the theorem:

sp(E) and SP(E) → D(E)

When a small probability (SP) event (E) is complex, and

SP(E) = [P(E|I) < the Universal Probability Bound]. Or, in English, we know an event E is a small probability event when the probability of event E given I is less than the Universal Probability Bound. I = all relevant side information and all stochastic hypotheses. This is all in Dembski's book, The Design Inference.

An event E is specified by a pattern independent of E, expressed mathematically as sp(E). The upper case SP(E) denotes the small-probability event we are attempting to determine is CSI, or designed. The lower case sp(E) denotes the independently given pattern, the prediction that we will discover SP(E). Therefore, if sp(E) and SP(E), then D(E). D(E) means the event E is not only of small probability, but we can conclude it is designed.
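Translated literally into code, the definition just stated looks like this (a minimal sketch; the probability bound is the value quoted in the next paragraph):

    UNIVERSAL_PROBABILITY_BOUND = 0.5e-150    # the value quoted in the next paragraph

    def SP(p_event_given_I: float) -> bool:
        """SP(E): E is a small-probability event, i.e. P(E|I) falls below the bound."""
        return p_event_given_I < UNIVERSAL_PROBABILITY_BOUND

    def D(sp: bool, p_event_given_I: float) -> bool:
        """D(E): design is inferred only when E is specified by an independent
        pattern (sp) AND E is a small-probability event (SP)."""
        return sp and SP(p_event_given_I)

    print(D(sp=True, p_event_given_I=1e-160))   # True: both qualifiers met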

Dembski's Universal Probability Bound = 0.5 × 10^-150, or 0.5 times 10 to the negative 150th power. This is the magic number at which one is scientifically justified in invoking design. It has been said that the probability Dembski requires in order to ascribe design is equivalent to announcing in advance, before dealing, that you are going to be dealt 24 Royal Flushes in a row, and then having the event play out exactly as forecast. In other words, just as intelligence might be entirely illusory, so likewise CSI is nothing other than a mathematical ratio that might not have anything in the world to do with actual design.

The probability of dealing a Royal Flush, given all combinations of 5 cards randomly drawn from a full deck of 52 without replacement, is 649,739 to 1 against. According to Dembski, if someone were dealt a Royal Flush 24 times in a row after announcing in advance that such a thing would happen, his contention would be that the event was so improbable that someone cheated, or "design" would have had to be involved.

The probability of being dealt a Royal flush given all combinations of 5 cards randomly drawn from a full deck of 52 without replacement is 649,739 to 1.
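The royal flush figure can be checked directly; the sketch below also shows how one would compare a run of announced-in-advance deals against the quoted bound (the number of deals is left as a parameter rather than asserted):

    from math import comb, log10

    # 4 royal flushes out of all C(52, 5) possible five-card hands
    p_royal = 4 / comb(52, 5)
    print(f"odds against a single royal flush: {1 / p_royal - 1:,.0f} to 1")

    # Probability of n announced-in-advance royal flushes in a row, on a log10
    # scale to avoid floating-point underflow for large n
    def log10_p_n_in_a_row(n: int) -> float:
        return n * log10(p_royal)

    UNIVERSAL_PROBABILITY_BOUND = 0.5e-150    # the value quoted above
    print(f"log10 of the quoted bound: {log10(UNIVERSAL_PROBABILITY_BOUND):.1f}")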

I'm oversimplifying CSI just to make the point that the term "design" here carries a technical definition that requires no intelligence or design as we understand the everyday use of those words. What's important is that just as improbable as it is to be dealt a Royal Flush, so likewise is the level of difficulty natural selection is up against in producing what appears to be designed in nature. And when CSI is observed in nature, which occurs occasionally, that not only confirms ID predictions and defies Darwinian gradualism, but also tips off a scientist that such might be evidence of additional ID-related mechanisms at work.

It is true that William Dembski's theorems are based upon an assumption that we can quantify everything in the universe; no argument there. But he only used that logic to derive his Universal Probability Bound, which is a nearly infinitesimally small number: 0.5 × 10^-150, or 0.5 times 10 to the negative 150th power. Do you not think that when a probability is this low, it is a safe bet to invoke corruption of natural processes by an intelligent agency? The number is a useful one.

I wrote two essays on CSI to provide a better understanding of the specified complexity introduced in Dembski's book, The Design Inference. In this book, Dembski introduces and expands on the meaning of CSI, and then presents reasoning as to why CSI infers design. The first essay I wrote on CSI, here, is an elementary introduction to the overall concept. The second essay, here, provides a more advanced discussion of CSI.

CSI does show up in nature. That's the whole point of the No Free Lunch principle: there is no way by which evolution can take credit for the occurrences when CSI shows up in nature.

III. NO FREE LUNCH

Basically, No Free Lunch is a sequel to the earlier work, The Design Inference. While we get more calculations that confirm and verify Dembski's earlier work, we also get new assertions. It is very important to note that ID Theory is based upon CSI, which is established in The Design Inference. The main benefit of the second book, No Free Lunch, is that it further validates and verifies CSI. The importance of this fact cannot be overemphasized. Additionally, No Free Lunch further confirms the assertion that design is inseparable from intelligence.

Before No Free Lunch, there was little effort demonstrating that CSI is connected to intelligence. That's a problem because CSI = design. So, if CSI = design, it should be demonstrable that CSI correlates with and is directly proportional to intelligence. This is the thesis the book No Free Lunch sets out to establish. If No Free Lunch fails to support that thesis, ID Theory is not necessarily impaired; but if Dembski succeeds, then it lends all the more credibility to ID Theory and certainly to all of Dembski's work as well.

IV. PERAKH’S ARGUMENT

The outline of Perakh’s critique of Dembski’s No Free Lunch theorems is as follows:

1.    Methinks It Is like a Weasel—Again
2.    Is Specified Complexity Smuggled into Evolutionary Algorithms?
3.    Targetless Evolutionary Algorithms
4.    The No Free Lunch Theorems
5.    The NFL Theorems—Still with No Mathematics
6.    The No Free Lunch Theorems—A Little Mathematics
7.    The Displacement Problem
8.    The Irrelevance of the NFL Theorems
9.    The Displacement “Problem”

1.  METHINKS IT IS LIKE A WEASEL – AGAIN

One common demonstration to help people understand how CSI works is to take a letter sequence. This can be done with anything, but the common example is this pattern:

METHINKS•IT•IS•LIKE•A•WEASEL

This letter arrangement is used most often to describe CSI because the math has already been worked out. The bullets represent spaces. There are 27 possibilities at each location in a symbol string 28 characters in length. If natural selection were entirely random, it would take on the order of 1 × 10^40 tries (that's 10 to the 40th power, or 1 with 40 zeroes) to hit the target. It's a small probability. However, natural selection (NS) is smarter than that, and Richard Dawkins has shown how a cumulative-selection algorithm modeled on NS reaches the target in an impressive 43 generations, as Dembski notes here.

In this example, the odds were only about 1 in 10^40. CSI involves even greater improbability than that. If you take a pattern or model such as METHINKS•IT•IS•LIKE•A•WEASEL and keep adding information, you soon reach improbabilities that are within the domain of CSI.

Dembski’s explanation to the target sequence of METHINKS•IT•IS•LIKE•A•WEASEL is as follows:

“Thus, in place of 10^40 tries on average for pure chance to produce the target sequence, by employing the Darwinian mechanism it now takes on average less than 100 tries to produce it. In short, a search effectively impossible for pure chance becomes eminently feasible for the Darwinian mechanism.

“So does Dawkins’s evolutionary algorithm demonstrate the power of the Darwinian mechanism to create biological information? No. Clearly, the algorithm was stacked to produce the outcome Dawkins was after. Indeed, because the algorithm was constantly gauging the degree of difference between the current sequence from the target sequence, the very thing that the algorithm was supposed to create (i.e., the target sequence METHINKS•IT•IS•LIKE•A•WEASEL) was in fact smuggled into the algorithm from the start. The Darwinian mechanism, if it is to possess the power to create biological information, cannot merely veil and then unveil existing information. Rather, it must create novel information from scratch. Clearly, Dawkins’s algorithm does nothing of the sort.

“Ironically, though Dawkins uses a targeted search to illustrate the power of the Darwinian mechanism, he denies that this mechanism, as it operates in biological evolution (and thus outside a computer simulation), constitutes a targeted search. Thus, after giving his METHINKS•IT•IS•LIKE•A•WEASEL illustration, he immediately adds: “Life isn’t like that.  Evolution has no long-term goal. There is no long-distant target, no final perfection to serve as a criterion for selection.” [Footnote] Dawkins here fails to distinguish two equally valid and relevant ways of understanding targets: (i) targets as humanly constructed patterns that we arbitrarily impose on things in light of our needs and interests and (ii) targets as patterns that exist independently of us and therefore regardless of our needs and interests. In other words, targets can be extrinsic (i.e., imposed on things from outside) or intrinsic (i.e., inherent in things as such).

“In the field of evolutionary computing (to which Dawkins’s METHINKS•IT•IS•LIKE•A•WEASEL example belongs), targets are given extrinsically by programmers who attempt to solve problems of their choice and preference. Yet in biology, living forms have come about without our choice or preference. No human has imposed biological targets on nature. But the fact that things can be alive and functional in only certain ways and not in others indicates that nature sets her own targets. The targets of biology, we might say, are “natural kinds” (to borrow a term from philosophy). There are only so many ways that matter can be configured to be alive and, once alive, only so many ways it can be configured to serve different biological functions. Most of the ways open to evolution (chemical as well as biological evolution) are dead ends. Evolution may therefore be characterized as the search for alternative “live ends.” In other words, viability and functionality, by facilitating survival and reproduction, set the targets of evolutionary biology. Evolution, despite Dawkins’s denials, is therefore a targeted search after all.” (http://evoinfo.org/papers/ConsInfo_NoN.pdf).

Weasel Graph

This graph was presented by a blogger who ran a single run of the weasel algorithm, using "best match" fitness with n = 100 and u = 0.2.
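For concreteness, here is a minimal sketch of a weasel-style cumulative-selection search. It is not Dawkins's original code; the population size n and per-character mutation rate u are illustrative parameters, and a lower u than the 0.2 in the caption above is used so that a single run converges reliably:

    import random

    TARGET = "METHINKS IT IS LIKE A WEASEL"
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "   # 26 letters plus the space = 27 symbols

    def fitness(candidate: str) -> int:
        """Count of characters matching the target at the same position."""
        return sum(c == t for c, t in zip(candidate, TARGET))

    def mutate(parent: str, u: float) -> str:
        """Copy the parent, replacing each character with a random symbol with probability u."""
        return "".join(random.choice(ALPHABET) if random.random() < u else c for c in parent)

    def weasel(n: int = 100, u: float = 0.05) -> int:
        """Cumulative selection: keep the best of n mutated offspring each generation.
        Returns the number of generations needed to reach the target."""
        parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
        generations = 0
        while parent != TARGET:
            generations += 1
            offspring = [mutate(parent, u) for _ in range(n)]
            parent = max(offspring, key=fitness)
        return generations

    print("Reached the target in", weasel(), "generations")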

Perakh doesn't make any argument here; he introduces the METHINKS IT IS LIKE A WEASEL configuration as the initial focus of what is to follow. The only derogatory comment he makes about Dembski is to charge that Dembski is "inconsistent." But there is no basis to accuse Dembski of any contradiction. Perakh states himself, "Evolutionary algorithms may be both targeted and targetless" (page 2). He also admits that Dembski was correct in that "Searching for a target IS teleological" (page 2). Yet Perakh faults Dembski for simply noting the teleological inference, and falsely accuses Dembski of contradicting himself on this issue when there is no contradiction. There is no excuse for Perakh to accuse Dembski of being inconsistent here when all he did was acknowledge that teleology should be noted and taken into account when discussing the subject.

Perakh also states on page 3 that Dembski lamented over the observation made by Dawkins. This is unfounded rhetoric and ad hominem that does nothing to support Perakh's claims. There is no basis for, or benefit gained by, suggesting that Dembski was emotionally dismayed by Dawkins's observations. The issue is a talking point for discussion.

Perakh correctly represents the fact, “While the meaningful sequence METHINKSITISLIKEAWEASEL is both complex and specified, a sequence NDEIRUABFDMOJHRINKE of the same length, which is gibberish, is complex but not specified” (page 4).  And, then he correctly reasons the following,

“If, though, the target sequence is meaningless, then, according to the above quotation from Behe, it possesses no SC. If the target phrase possesses no SC, then obviously no SC had to be “smuggled” into the algorithm.” Hence, if we follow Dembski’s ideas consistently, we have to conclude that the same algorithm “smuggles” SC if the target is meaningful but does not smuggle it if the target is gibberish.” (Emphasis in original, page 4)

Perakh then arrives at the illogical conclusion that such reasoning is “preposterous because algorithms are indifferent to the distinction between meaningful and gibberish targets.”  Perakh is correct that algorithms are indifferent to teleology and making distinctions.  But, he has no basis to criticize Dembski on this point.

Completed Jigsaw Puzzle

This 40-piece jigsaw puzzle is more complex than the Weasel problem, which consists only of the letters M, E, T, H, I, N, K, S, L, A, W, plus a space.

In the Weasel problem submitted by Richard Dawkins, the solution (target) was provided to the computer up front. The solution to the puzzle was embedded in the letters provided to the computer to arrange into an intelligible sentence. The same analogy applies to a jigsaw puzzle. There is only one end-result picture the puzzle pieces can be assembled to achieve. The information of the picture is embedded in the pieces and is not lost merely by cutting the image into pieces. One can still solve the puzzle even if blinded up front from seeing what the target looks like. There is only one solution to the Weasel problem, so it is a matter of deduction, and not a blind search as Perakh maintains. The task the Weasel algorithm had to perform was to unscramble the letters and rearrange them in the correct sequence.

The METHINKS•IT•IS•LIKE•A•WEASEL target was given up front to the fitness function and was intentionally designed CSI to begin with. It's a matter of the definition of specified complexity (SC). If information is both complex and specified, then it is CSI by definition, and CSI = SC; they are two ways to express the same concept. Perakh is correct that the algorithm has nothing in and of itself to do with the specified complexity of the target phrase. The reason a target phrase is specified complexity is that the complex pattern was specified up front to be the target in the first place, all of which is independent of the algorithm. So far, then, Perakh has not made a point of argument.

Dembski makes subsequent comments about the weasel math here and here.

2.  IS SPECIFIED COMPLEXITY SMUGGLED INTO EVOLUTIONARY ALGORITHMS?

Perakh asserts on page 4 that "Dembski's modified algorithm is as teleological as Dawkins's original algorithm." So what? This is a pointless red herring that Perakh continues to work for no benefit to, or support of, any argument against Dembski. It's essentially a non-argument. All sides, Dembski, Dawkins, and Perakh himself, conceded up front that discussion of this topic is difficult without stumbling over anthropomorphism. Dembski noted it up front, which is commendable; yet somehow Perakh wrongly tags this as some fallacy Dembski is committing.

Personifying the algorithms as having teleological behavior was a fallacy noted up front. So there is no basis for Perakh to allege that Dembski is somehow misapplying logic in his discussion. The point was acknowledged by all participants from the very beginning. Perakh adds nothing new here; he merely raises again a point that was already noted. And he has yet to raise an actual argument.

Dembski wrote in No Free Lunch (194-196) that evolutionary algorithms do not generate CSI, but can only "smuggle" it from a "higher order phase space." CSI is also called specified complexity (SC). Perakh makes the ridiculous claim on page 4 that this point is irrelevant to biological evolution, but offers no reasoning as to why. To support his challenge against Dembski, Perakh states, "since biological evolution has no long-term target, it requires no injection of SC."

The question is whether it’s possible a biological algorithm caused the existence of the CSI.  Dembski says yes, and his theorems established in The Design Inference are enough to satisfy the claim.  But, Perakh is arguing here that the genetic algorithm is capable of generating the CSI.  Perakh states that natural selection is unaware of its result (page 4), which is true.  Then he says Dembski must, “offer evidence that extraneous information must be injected into the natural selection algorithm apart from that supplied by the fitness functions that arise naturally in the biosphere.”  Dembski shows this in “Life’s Conservation Law – Why Darwinian Evolution Cannot Create Biological Information.”

3.  TARGETLESS EVOLUTIONARY ALGORITHMS

Biomorphs

Next, Perakh raises the example made by Richard Dawkins in The Blind Watchmaker, in which Dawkins uses what he calls "biomorphs" as an argument against artificial selection. While Dawkins exhibits an imaginative jab to ridicule ID Theory, raising the subject again is pointless for Perakh. Dawkins used the biomorphs to contrast natural selection with the artificial selection upon which ID Theory is based. It's an excellent example. I commend Dawkins for coming up with these biomorph algorithms. They are unique and original. You can see color examples of them here.

The biomorphs created by Dawkins are actually intersecting lines of various degrees of complexity, and they resemble the Rorschach figures often used by psychologists and psychiatrists. Biomorphs depict both inanimate objects, like a cradle and a lamp, and biological forms, such as a scorpion, a spider, and a bat. It is an entire departure from evolution, as it is impossible to make any logical connection as to how a fox would evolve into a lunar lander, or how a tree frog would morph into a precision balance scale. Since the idea departs from evolutionary logic of any kind, because no rationale connecting any of the forms is provided, it would seem impossible to devise an algorithm that fits biomorphs.

Essentially, Dawkins used these biomorphs to propose a metaphysical conjecture. His intent is to suggest that ID Theory is a metaphysical contemplation while natural selection is entirely logical reality. Dawkins explains that his point in raising the idea of biomorphs is:

“… when we are prevented from making a journey in reality, the imagination is not a bad substitute. For those, like me, who are not mathematicians, the computer can be a powerful friend to the imagination. Like mathematics, it doesn’t only stretch the imagination. It also disciplines and controls it.”

Biomorphs submitted by Richard Dawkins from The Blind Watchmaker, figure 5 p. 61

This is an excellent point and well taken. The idea Dawkins had to reference biomorphs in the discussion was brilliant. Biomorphs are an excellent means of helping someone distinguish between natural selection and artificial selection. This is exactly the same point design theorists make when protesting the personification of natural selection as achieving reality-defying accomplishments. What we can conclude is that scientists, regardless of whether they accept or reject ID Theory, dislike the invention of fiction to fill in unknown gaps in phenomena.

In the case of ID Theory, yes the theory of intelligent design is based upon artificial selection, just as Dawkins notes with his biomorphs.  But, unlike biomorphs and the claim of Dawkins, ID Theory still is based upon fully natural scientific conjectures.

4.  THE NO FREE LUNCH THEOREMS

In this section of the argument, Perakh doesn’t provide an argument.  He’s more interested in talking about his hobby, which is mountain climbing.

The premise offered by Dembski that Perakh seeks to refute is the statement in No Free Lunch which reads, "The No Free Lunch theorems show that for evolutionary algorithms to output CSI they had first to receive a prior input of CSI" (No Free Lunch, page 223). Somehow, Perakh believes he can prove Dembski's theorems false. To accomplish that task, one would have to analyze Dembski's theorems. First of all, Dembski's theorems take into account all the possible factors and variables that might apply, not just the algorithms. Perakh offers nothing close to such an evaluation. Instead, he does nothing but use the mountain climbing analogy to demonstrate that we cannot know exactly which algorithm natural selection will promote and which algorithms it will overlook. This fact is a given up front and not in dispute. As such, Perakh presents a non-argument here that does nothing to challenge Dembski's theorems in the slightest. He doesn't even discuss the theorems, let alone refute them.

The whole idea of the No Free Lunch theorems is to demonstrate how CSI is smuggled across many generations and then shows up visibly in a phenotype of a life form countless generations later. Many factors must be contemplated in this process, including evolutionary algorithms. Dembski's book No Free Lunch is about demonstrating how CSI is smuggled through, which is where the book gets its name. If CSI is not manufactured by evolutionary processes, including genetic algorithms, then it has been displaced from the time it was initially front-loaded. Hence, there is no free lunch.

Front-Loading could be achieved several ways, one of which is via panspermia.

But, Perakh makes no attempt to discuss the theorems in this section, much less refute Dembski’s work.  I’ll discuss front-loading in the Conclusion.

5.  THE NO FREE LUNCH THEOREMS—STILL WITH NO MATHEMATICS

Perakh finally makes a valid point here. He highlights a weakness in Dembski's book: the calculations provided do little to account for the average performance of multiple algorithms operating at the same time.

Referencing his mountain climbing analogy from the previous section, Perakh explains that the fitness function is the height of the peaks in a specific mountainous region. In his example he designates the target of the search to be a specific peak P of height 6,000 meters above sea level.

“In this case the number n of iterations required to reach the predefined height of 6,000 meters may be chosen as the performance measure.  Then algorithm a1 performs better than algorithm a2 if a1 converges on the target in fewer steps than a2. If two algorithms generated the same sample after m iterations, then they would have found the target—peak P—after the same number n of iterations. The first NFL theorem tells us that the average probabilities of reaching peak P in m steps are the same for any two algorithms” (Emphasis in the original, page 10).

Since any two algorithms will have an equal average performance when all possible fitness landscapes are included, then the average number n of iterations required to locate the target is the same for any two algorithms if the averaging is done over all possible mountainous landscapes.

Therefore, Perakh concludes, the no free lunch theorems of Dembski do not say anything about the relative performance of algorithms a1 and a2 on a specific landscape. On a specific landscape, either a1 or a2 may happen to be much better than its competitor. Perakh goes on to apply the same logic in a targetless context as well.

These points Perakh raises are well taken. Subsequent to the publication of Perakh's book in 2004, Dembski provided the supplemental math to address these issues in his paper entitled "Searching Large Spaces: Displacement and the No Free Lunch Regress" (March 2005), which is available for review here. It should also be noted that Perakh concludes this section of chapter 11 by admitting that the No Free Lunch theorems "are certainly valid for evolutionary algorithms." If that is so, then there is no dispute.

6.  THE NO FREE LUNCH THEOREMS—A LITTLE MATHEMATICS

It is noted that Dembski's first no free lunch theorem is correct. It considers any given algorithm performed m times. The result is a time-ordered sample set d comprised of m measured values of f within the range Y. Let P be the conditional probability of having obtained a given sample after m iterations, for given f, Y, and m.

Then, the first equation is

Σ_f P(d | f, m, a1) = Σ_f P(d | f, m, a2), when a1 and a2 are two different algorithms.

Perakh emphasizes that this summation is performed over "all possible fitness functions." In other words, Dembski's first theorem proves that when algorithms are averaged over all possible fitness landscapes, the results of a given search are the same for any pair of algorithms. This is the most basic of Dembski's theorems, but the most limited for application purposes.
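A toy computation can illustrate what averaging over "all possible fitness functions" does. This is a minimal sketch on a four-point space with binary fitness values, not Dembski's or Wolpert and Macready's formal setting:

    from itertools import product

    X = [0, 1, 2, 3]                      # a four-point search space

    def queries_to_find_max(order, f):
        """Number of queries a fixed, non-repeating query order needs to hit
        a point where f attains its maximum value."""
        best = max(f)
        for i, x in enumerate(order, start=1):
            if f[x] == best:
                return i
        return len(order)                 # unreachable when order covers all of X

    algo_a = [0, 1, 2, 3]                 # one deterministic query order
    algo_b = [3, 1, 0, 2]                 # a different deterministic query order

    # Average performance over ALL possible fitness functions f: X -> {0, 1}
    all_fs = list(product([0, 1], repeat=len(X)))
    avg_a = sum(queries_to_find_max(algo_a, f) for f in all_fs) / len(all_fs)
    avg_b = sum(queries_to_find_max(algo_b, f) for f in all_fs) / len(all_fs)

    print(avg_a, avg_b)                   # the two averages come out identical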

The second equation applies the first one to time-dependent landscapes. Perakh notes several difficulties with the no free lunch theorems, including the fact that evolution is a "coevolutionary" process. In other words, Dembski's theorems apply to ecosystems that involve a set of genomes all searching the same fixed fitness function. But Perakh argues that in the real biological world, the search space changes with each new generation. The genome of any given population evolves slightly from one generation to the next. Hence, the search space that the genomes are searching is modified with each new generation.

Chess

The game of Chess is played one successive procedural (evolutionary) step at a time. With each successive move (mutation) on the chessboard, the chess-playing algorithm must search for a different and new board configuration as to the next move the computer program (natural selection) should select for.

The no free lunch models discussed here are comparable to the computer chess game mentioned above.   With each slight modification (Darwinian gradualism) in the step by step process of the chess game, the pieces end up in different locations on the chessboard so that the search process starts all over again with a new and different search for a new target than the preceding search.

There is one optimum move that is better than others, which might be a preferred target.  Any other reasonable move on the chessboard is a fitness function.  But, the problem in evolution is not as clear. Natural selection is not only blind, and therefore conducts a blind search, but does not know what the target should be either.

Where Perakh is leading with this foundation is that he is going to suggest in the next section that, given a target up front, as the chess-solving algorithm has, there might be enough information in the description of the target itself to assist the algorithm in at least locating a fitness function. Whether Perakh is correct can be tested by applying the math.

As aforementioned, subsequent to the publication of Perakh's book, Dembski provided the supplemental math to address these issues in his paper entitled "Searching Large Spaces: Displacement and the No Free Lunch Regress" (March 2005), which is available for review here. It should also be noted that Perakh concludes this section of the chapter by admitting that the No Free Lunch theorems "are certainly valid for evolutionary algorithms."

7.  THE DISPLACEMENT PROBLEM

As already mentioned, the no free lunch theorems show that for evolutionary algorithms to output CSI they must first receive a prior input of CSI. There is a term to describe this: displacement. Dembski wrote in a paper entitled "Evolution's Logic of Credulity: An Unfettered Response to Allen Orr" (2002) that the key point of No Free Lunch concerns displacement, and that the "NFL theorems merely exemplify one instance not the general case."

Dembski continues to explain displacement,

“The basic idea behind displacement is this: Suppose you need to search a space of possibilities. The space is so large and the possibilities individually so improbable that an exhaustive search is not feasible and a random search is highly unlikely to conclude the search successfully. As a consequence, you need some constraints on the search – some information to help guide the search to a solution (think of an Easter egg hunt where you either have to go it cold or where someone guides you by saying ‘warm’ and ‘warmer’). All such information that assists your search, however, resides in a search space of its own – an informational space. So the search of the original space gets displaced to a search of an informational space in which the crucial information that constrains the search of the original space resides” (Emphasis in the original, http://tinyurl.com/b3vhkt4).
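The Easter egg analogy can be sketched in a few lines. This is illustrative only; the "oracle" below stands in for the warm/warmer guidance, and its answers are exactly the target-specific information whose origin the displacement argument asks about:

    import random

    N = 10_000                        # size of the search space
    TARGET = random.randrange(N)      # the hidden "Easter egg"

    def blind_search(max_queries=10_000):
        """Uniform random guessing with no guidance at all."""
        for q in range(1, max_queries + 1):
            if random.randrange(N) == TARGET:
                return q
        return None                   # never found within the budget

    def assisted_search():
        """A search steered by a warmer/colder oracle (here, 'higher or lower')."""
        lo, hi, queries = 0, N - 1, 0
        while lo <= hi:
            queries += 1
            guess = (lo + hi) // 2
            if guess == TARGET:
                return queries
            if guess < TARGET:        # oracle: "warmer is to the right"
                lo = guess + 1
            else:                     # oracle: "warmer is to the left"
                hi = guess - 1

    print("blind search   :", blind_search(), "queries (None = not found)")
    print("assisted search:", assisted_search(), "queries")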

8.  THE IRRELEVANCE OF THE NFL THEOREMS

In the conclusion of his paper, Searching Large Spaces: Displacement and the No Free Lunch Regress (2005), Dembski writes:

“To appreciate the significance of the No Free Lunch Regress in this latter sense, consider the case of evolutionary biology. Evolutionary biology holds that various (stochastic) evolutionary mechanisms operating in nature facilitate the formation of biological structures and functions. These include preeminently the Darwinian mechanism of natural selection and random variation, but also others (e.g., genetic drift, lateral gene transfer, and symbiogenesis). There is a growing debate whether the mechanisms currently proposed by evolutionary biology are adequate to account for biological structures and functions (see, for example, Depew and Weber 1995, Behe 1996, and Dembski and Ruse 2004). Suppose they are. Suppose the evolutionary searches taking place in the biological world are highly effective assisted searches qua stochastic mechanisms that successfully locate biological structures and functions. Regardless, that success says nothing about whether stochastic mechanisms are in turn responsible for bringing about those assisted searches.” (http://www.designinference.com/documents/2005.03.Searching_Large_Spaces.pdf).

Up until this juncture, Perakh admits, "Within the scope of their legitimate interpretation—when the conditions assumed for their derivation hold—the NFL theorems certainly apply" to evolutionary algorithms. The only question in his critique up to this section has been his argument that the NFL theorems do not hold in the case of coevolution. However, subsequent to this critique, Dembski resolved those issues.

Here, Perakh reasons that even if the NFL theorems were valid for coevolution, he would still reject Dembski's work because the theorems are irrelevant. According to Perakh, if evolutionary algorithms can outperform random sampling, also known as a "blind search," then the NFL theorems are meaningless. Perakh bases this assertion on Dembski's statement on page 212 of No Free Lunch: "The No Free Lunch theorems show that evolutionary algorithms, apart from careful fine-tuning by a programmer, are no better than blind search and thus no better than pure chance."

Therefore, for Perakh, if evolutionary algorithms refute this comment by Dembski by outperforming a blind search, then this is evidence the algorithms are capable of generating CSI.  If evolutionary algorithms generate CSI, then Dembski’s NFL theorems have been soundly falsified, along with ID Theory as well.  If such were the case, then Perakh would be correct, the NFL theorems would indeed be irrelevant.

Perakh rejects the intelligent design "careful fine-tuning by a programmer" terminology in favor of what he regards as an equally reasonable premise:

“If, though, a programmer can design an evolutionary algorithm which is fine-tuned to ascend certain fitness landscapes, what can prohibit a naturally arising evolutionary algorithm to fit in with the kinds of landscape it faces?” (Page 19)

Perakh explains how his thesis can be illustrated:

“Naturally arising fitness landscapes will frequently have a central peak topping relatively smooth slopes. If a certain property of an organism, such as its size, affects the organism’s survivability, then there must be a single value of the size most favorable to the organism’s fitness. If the organism is either too small or too large, its survival is at risk. If there is an optimal size that ensures the highest fitness, then the relevant fitness landscape must contain a single peak of the highest fitness surrounded by relatively smooth slopes” (Page 20).

The graphs in Fig. 11.1 schematically illustrate Perakh’s thesis:

Fitness Function

This is Figure 11.1 in Perakh’s book – Fitness as a function of some characteristic, in this case the size of an animal. Solid curve – the schematic presentation of a naturally arising fitness function, wherein the maximum fitness is achieved for a certain single-valued optimal animal’s size. Dashed curve – an imaginary rugged fitness function, which hardly can be encountered in the existing biosphere.
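Perakh's smooth-single-peak picture can also be sketched as a toy computation, assuming a Gaussian-shaped curve for the "naturally arising" landscape and a jagged curve for the dashed one; a simple hill-climber reaches the peak of the first but stalls on the second:

    import math
    import random

    OPTIMUM = 50.0                    # the single most favorable size

    def smooth_fitness(size: float) -> float:
        """Single smooth peak at OPTIMUM (like the solid curve described above)."""
        return math.exp(-((size - OPTIMUM) ** 2) / 200.0)

    def rugged_fitness(size: float) -> float:
        """A jagged landscape with many local peaks (like the dashed curve)."""
        return 0.5 * math.exp(-((size - OPTIMUM) ** 2) / 200.0) + 0.5 * abs(math.sin(3.0 * size))

    def hill_climb(fitness, start=5.0, step=1.0, iters=200) -> float:
        """Accept a neighboring size only when it improves fitness."""
        x = start
        for _ in range(iters):
            candidate = x + random.choice([-step, step])
            if fitness(candidate) > fitness(x):
                x = candidate
        return x

    random.seed(1)
    print("smooth landscape: climber ends at", hill_climb(smooth_fitness))   # reaches the optimum
    print("rugged landscape: climber ends at", hill_climb(rugged_fitness))   # stalls on a local peak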

Subsequent to Perakh's book, published in 2004, Dembski did indeed address the issue raised here in his paper "Conservation of Information in Search: Measuring the Cost of Success" (Sept. 2009), http://evoinfo.org/papers/2009_ConservationOfInformationInSearch.pdf. Dembski's "Conservation of Information" paper starts with the foundation that laws of information have already been discovered, and that ideas such as Perakh's thesis were falsified back in 1956 by Leon Brillouin, a pioneer in information theory. Brillouin wrote, "The [computing] machine does not create any new information, but it performs a very valuable transformation of known information" (L. Brillouin, Science and Information Theory. New York: Academic, 1956).

In "Conservation of Information," Dembski and his coauthor, Robert Marks, go on to demonstrate how laws of conservation of information render evolutionary algorithms incapable of generating CSI as Perakh had hoped. Throughout this chapter, Perakh repeatedly cited the works of the information theorists Wolpert and Macready. On page 1051 of "Conservation of Information" (2009), Dembski and Marks also quote Wolpert and Macready:

“The no free lunch theorem (NFLT) likewise establishes the need for specific information about the search target to improve the chances of a successful search.  ‘[U]nless you can make prior assumptions about the . . . [problems] you are working on, then no search strategy, no matter how sophisticated, can be expected to perform better than any other.’  Search can be improved only by “incorporating problem-specific knowledge into the behavior of the [optimization or search] algorithm” (D. Wolpert and W. G. Macready, ‘No free lunch theorems for optimization,’ IEEE Trans. Evol. Comput., vol. 1, no. 1, pp. 67–82, Apr. 1997).”

In "Conservation of Information" (2009), Dembski and Marks resoundingly demonstrate that conservation of information theorems indicate that even a moderately sized search requires problem-specific information to be successful. The paper argues that any search algorithm performs, on average, as well as random search without replacement unless it takes advantage of problem-specific information about the search target or the search-space structure.

Throughout “Conservation of information” (2009), the paper discusses evolutionary algorithms at length:

“Christensen and Oppacher note the ‘sometimes outrageous claims that had been made of specific optimization algorithms.’ Their concern is well founded. In computer simulations of evolutionary search, researchers often construct a complicated computational software environment and then evolve a group of agents in that environment. When subjected to rounds of selection and variation, the agents can demonstrate remarkable success at resolving the problem in question.  Often, the claim is made, or implied, that the search algorithm deserves full credit for this remarkable success. Such claims, however, are often made as follows: 1) without numerically or analytically assessing the endogenous information that gauges the difficulty of the problem to be solved and 2) without acknowledging, much less estimating, the active information that is folded into the simulation for the search to reach a solution.” (Conservation of information, page 1058).

Dembski and Marks remind us that the scenario Perakh proposes, in which evolutionary algorithms outperform a blind search, is the same scenario as the analogy of the proverbial monkeys typing on keyboards.

The monkeys at typewriters is a classic analogy to describe the chances of evolution being successful to achieve specified complexity.

A monkey at a typewriter is a good illustration of the viability of random evolutionary search. Dembski and Marks run the calculations for good measure using a factor of 27 (a 26-letter alphabet plus a space) and a 28-character message. The answer is 1.59 × 10^42, which is more than the mass of 800 million suns in grams.

In their Conclusion, Dembski and Marks state:

 “Endogenous information represents the inherent difficulty of a search problem in relation to a random-search baseline. If any search algorithm is to perform better than random search, active information must be resident. If the active information is inaccurate (negative), the search can perform worse than random. Computers, despite their speed in performing queries, are thus, in the absence of active information, inadequate for resolving even moderately sized search problems. Accordingly, attempts to characterize evolutionary algorithms as creators of novel information are inappropriate.” (Conservation of information, page 1059).

9.  THE DISPLACEMENT “PROBLEM”

This argument is based upon a claim Dembski makes on page 202 of his book “No Free Lunch,” where he states, “The significance of the NFL theorems is that an information-resource space J does not, and indeed cannot, privilege a target T.”  However, Perakh highlights a problem with Dembski’s statement, because the NFL theorems say nothing about an “information-resource space.”  If Dembski wanted to introduce this concept within the framework of the NFL theorems, then he should at least have shown what the role of an “information-resource space” is in view of the “black-box” nature of the algorithms in question.

On page 203 of No Free Lunch, Dembski introduces the displacement problem:

“… the problem of finding a given target has been displaced to the new problem of finding the information j capable of locating that target. Our original problem was finding a certain target within phase space. Our new problem is finding a certain j within the information-resource space J.”

Perakh adds that the NFL theorems are indifferent to the presence or absence of a target in a search, which leaves the “displacement problem,” with its constant references to targets, hanging in the air.

Dembski’s response is as follows:

“What is the significance of the Displacement Theorem? It is this. Blind search for small targets in large spaces is highly unlikely to succeed. For a search to succeed, it therefore needs to be an assisted search. Such a search, however, resides in a target of its own. And a blind search for this new target is even less likely to succeed than a blind search for the original target (the Displacement Theorem puts precise numbers to this). Of course, this new target can be successfully searched by replacing blind search with a new assisted search. But this new assisted search for this new target resides in a still higher-order search space, which is then subject to another blind search, more difficult than all those that preceded it, and in need of being replaced by still another assisted search.  And so on. This regress, which I call the No Free Lunch Regress, is the upshot of this paper. It shows that stochastic mechanisms cannot explain the success of assisted searches.

“This last statement contains an intentional ambiguity. In one sense, stochastic mechanisms fully explain the success of assisted searches because these searches themselves constitute stochastic mechanisms that, with high probability, locate small targets in large search spaces. Yet, in another sense, for stochastic mechanisms to explain the success of assisted searches means that such mechanisms have to explain how those assisted searches, which are so effective at locating small targets in large spaces, themselves arose with high probability.  It’s in this latter sense that the No Free Lunch Regress asserts that stochastic mechanisms cannot explain the success of assisted searches.” [Searching Large Spaces: Displacement and the No Free Lunch Regress (2005)].

Perakh makes some valid claims.  About seven years after the publication of Perakh’s book, Dembski provided updated calculations for the NFL theorems and for his application of the math to the displacement problem.  These are available for review in his paper, “The Search for a Search: Measuring the Information Cost of Higher Level Search” (2010).

Perakh discusses the comments Dembski made to support the assertion that CSI must necessarily be “smuggled” or “front-loaded” into evolutionary algorithms.  Perakh rejects Dembski’s claims outright and dismisses his work on very weak grounds, in what appears to be a hand-wave that begs the question of how the CSI was generated in the first place.

Remember that the point of Dembski’s use of the NFL theorems is to show that when CSI shows up in nature, it is only because it originated earlier in the evolutionary history of that population and was carried into the genome of the population by regular evolution.   The CSI might have been front-loaded millions of years earlier in the biological ancestry, possibly in higher taxa.  Regardless of where the CSI originated, Dembski’s claim is that the CSI we observe now appears because it was inserted earlier, since evolutionary processes do not generate CSI.

This carrying forward of CSI in the genome is called displacement.  Displacement arises because, when Information Theory is applied to identify CSI, the target of the search theorems is the CSI itself.  Dembski explains,

“So the search of the original space gets displaced to a search of an informational space in which the crucial information that constrains the search of the original space resides. I then argue that this higher-order informational space (‘higher’ with respect to the original search space) is always at least as big and hard to search as the original space.” (Evolution’s Logic of Credulity: An Unfettered Response to Allen Orr, 2002.)

It is important to understand what Dembski means by displacement here because Perakh distorts displacement to mean something different in this section.  Perakh asserts:

“An algorithm needs no information about the fitness function. That is how the ‘black-box’ algorithms start a search. To continue the search, an algorithm needs information from the fitness function. However, no search of the space of all possible fitness function is needed. In the course of a search, the algorithm extracts the necessary information from the landscape it is exploring. The fitness landscape is always given, and automatically supplies sufficient information to continue and to complete the search.” (Page 24)

To support these contentions, Perakh references Dawkins’s weasel algorithm for comparison.  The weasel algorithm, says Perakh, “explores the available phrases and selects from them using the comparison of the intermediate phrases with the target.” Perakh then argues that, in the weasel example, the fitness function has the built-in information necessary to perform the comparison.  Perakh concludes,

“This fitness function is given to the search algorithm; to provide this information to the algorithm, no search of a space of all possible fitness functions is needed and therefore is not performed.” (Emphasis in original, Page 24)

If Perakh is right, then the same is true for natural evolutionary algorithms. Having bought into his own circular reasoning, he then declares that his argument renders Dembski’s “displacement problem” “a phantom.” (Page 24)

One of the problems with this argument is that Perakh admits that there is CSI, yet offers no explanation of how it originates and increases in the genome of a population so as to result in greater complexity.  Perakh is begging the question.  He offers no math, no algorithm, no calculations, no example.  He merely imposes his own properties onto displacement, which is a strawman, and then shoots the strawman down.  There is no attempt to derive how the algorithm ever finds the target in the first place, which is disappointing given that Dembski provides the math to support his own claims.

Perakh appears to be convinced that evolutionary algorithmic searches taking place in the biological world are highly effective assisted searches that successfully locate target biological structures and functions.  And, as such, he is satisfied that these evolutionary algorithms can generate CSI. What Perakh needs to remember is that a genuine evolutionary algorithm is still a stochastic mechanism. The hypothetical success of the evolutionary algorithm says nothing about whether stochastic mechanisms are in turn responsible for bringing about those assisted searches.  Dembski explains,

“Evolving biological systems invariably reside in larger environments that subsume the search space in which those systems evolve. Moreover, these larger environments are capable of dramatically changing the probabilities associated with evolution as occurring in those search spaces. Take an evolving protein or an evolving strand of DNA. The search spaces for these are quite simple, comprising sequences that at each position select respectively from either twenty amino acids or four nucleotide bases. But these search spaces embed in incredibly complex cellular contexts. And the cells that supply these contexts themselves reside in still higher-level environments.” [Searching Large Spaces: Displacement and the No Free Lunch Regress (2005), pp. 31-32]

Dembski argues that the uniform probability on the search space almost never characterizes the system’s evolution; instead, it is a nonuniform probability that brings the search to a successful conclusion, and it is the larger environment that supplies this nonuniform probability.  Dembski notes that Richard Dawkins made the same point as Perakh in Climbing Mount Improbable (1996).  In that book, Dawkins argued that biological structures that at first glance seem impossible with respect to uniform probability, blind search, or pure randomness become probable when the probabilities are reset by evolutionary mechanisms.

Diagram: propagation of active information through two levels of the probability hierarchy.

The kind of search Perakh presents is also addressed in “The Search for a Search: Measuring the Information Cost of Higher Level Search” (2010).  The blind search Perakh complains of is that of uniform probability.  In this kind of problem, given any probability measure on Ω, Dembski’s calculations indicate that the active entropy of any partition, measured against a uniform probability baseline, will be nonpositive (The Search for a Search, page 477).  We have no information available about the search in Perakh’s example.  All Perakh gives us is that the fitness function provides the evolutionary algorithm clues so that the search is narrowed, but we are not told what that information is.  Perakh is just speculating that, given enough attempts, the evolutionary algorithm will get lucky and outperform the blind search.  Again, this describes uniform probability.

According to Dembski’s more detailed mathematical analysis, if no information about a search exists, so that the underlying measure is uniform (which matches Perakh’s example), “then, on average, any other assumed measure will result in negative active information, thereby rendering the search performance worse than random search.” (The Search for a Search, page 477).

Dembski expands on the scenario:

“Presumably this nonuniform probability, which is defined over the search space in question, splinters off from richer probabilistic structures defined over the larger environment. We can, for instance, imagine the search space being embedded in the larger environment, and such richer probabilistic structures inducing a nonuniform probability (qua assisted search) on this search space, perhaps by conditioning on a subspace or by factorizing a product space. But, if the larger environment is capable of inducing such probabilities, what exactly are the structures of the larger environment that endow it with this capacity? Are any canonical probabilities defined over this larger environment (e.g., a uniform probability)? Do any of these higher level probabilities induce the nonuniform probability that characterizes effective search of the original search space? What stochastic mechanisms might induce such higher-level probabilities?  For any interesting instances of biological evolution, we don’t know the answer to these questions. But suppose we could answer these questions. As soon as we could, the No Free Lunch Regress would kick in, applying to the larger environment once its probabilistic structure becomes evident.” [Searching Large Spaces: Displacement and the No Free Lunch Regress (2005), pp. 32]

The probabilistic structure would itself require explanation in terms of stochastic mechanisms, and the No Free Lunch Regress blocks any ability to account for assisted searches in terms of stochastic mechanisms (Searching Large Spaces: Displacement and the No Free Lunch Regress, 2005).

Dembski has since updated his theorems with additional mathematics.  The NFL theorems are now analyzed in both vertical and horizontal terms, with the proofs presented geometrically in three-dimensional space.

Diagram: 3-D geometric application of the NFL theorems.

This diagram shows a three-dimensional simplex over {ω1, ω2, ω3}, where the numerical values of a1, a2, and a3 are one.  The box in the figure presents two congruent triangles in a geometric approach to proving the Strict Vertical No Free Lunch Theorem.  In “The Search for a Search: Measuring the Information Cost of Higher Level Search” (2010), the NFL theorems are analyzed both horizontally and vertically.  The Horizontal NFL Theorem shows that the average relative performance of searches never exceeds that of unassisted (blind) searches.  The Vertical NFL Theorem shows that the difficulty of searching for a successful search increases exponentially with respect to the minimum allowable active information being sought.

This leads to the displacement principle, which holds that “the search for a good search is at least as difficult as a given search.”   Perakh may have raised a fair point, but Dembski has since done the math and confirmed his theorems: the math works out, the proofs are provided, and the work is shown.  Perakh, on the other hand, offered nothing but unverified speculation, with no calculations to validate his point.

V.  CONCLUSION

In the final section of this chapter, Perakh reiterates the main points of his article. He begins by saying,

“Dembski’s critique of Dawkins’s ‘targeted’ evolutionary algorithm fails to repudiate the illustrative value of Dawkins’s example, which demonstrates how supplementing random changes with a suitable law increases the rate of evolution by many orders of magnitude.” (Page 25)

No, this is a strawman.  Perakh submitted nothing to establish such a conclusion.  Neither Dembski nor the Discovery Institute has any dispute with Darwinian mechanisms of evolution.  The issue is whether ONLY such mechanisms are responsible for specified complexity (CSI).  Intelligent Design proponents do not challenge that “supplementing random changes with a suitable law increases the rate of evolution by many orders of magnitude.”

Next, Perakh claims, “Dembski ignores Dawkins’s ‘targetless’ evolutionary algorithm, which successfully illustrates spontaneous increase of complexity in an evolutionary process.” (Page 25).

No, this isn’t true.  First, Dembski did not ignore Dawkins’s weasel algorithm.  Second, the weasel algorithm isn’t targetless: we’re given the target up front and know exactly what it is.  Third, the weasel algorithm did not show any increase in specified complexity; all the letters in the sequence already existed.  When evolution operates in the real biological world, the genome of the population is reshuffled from one generation to the next.  No new information is added that leads to greater complexity; the morphology results from the same information being rearranged.

In the case of the weasel example, the target was already embedded in the original problem, just as one and only one full picture can be assembled from the pieces of a jigsaw puzzle.  When the puzzle is completed, no piece should be missing and none should be left over.  The CSI was the original picture that was cut up into pieces to be reassembled.  The weasel example is actually a better illustration of front-loading: all the algorithm had to do was figure out how to arrange the letters back into the proper intelligible sequence.

The CSI was specified up front in the target, or fitness function, to begin with.  The point of the NFL theorems is that if the weasel algorithm were a real-life evolutionary example, then that complex specified information (CSI) would have been input into the genome of that population in advance.  But the analogy quickly breaks down for many reasons.

Perakh then asserts, “Contrary to Dembski’s assertions, evolutionary algorithms routinely outperform a random search.”  (Page 25). This is false.  Perakh speculated that this was a possibility, and Dembski not only refuted it but demonstrated that, absent active information, evolutionary algorithms on average do not outperform a random search.

Perakh next maintains:

“Contrary to Dembski assertion, the NFL theorems do not make Darwinian evolution impossible. Dembski’s attempt to invoke the NFL theorems to prove otherwise ignores the fact that these theorems assert the equal performance of all algorithms only if averaged over all fitness functions.” (Page 25).

No, there is no such assertion by Dembski.  Intelligent Design proponents do not assert any false dichotomy.  ID Theory supplements evolution, providing the conjecture necessary to actually explain specified complexity.  Darwinian evolution still occurs, but it only explains inheritance and diversity; it is ID Theory that explains complexity.  As for the claim that the NFL theorems assert the equal performance of all algorithms only when averaged over all fitness functions, Perakh never established that this point undermines Dembski’s application of the theorems.

Perakh also claims:

“Dembski’s constant references to targets when he discusses optimization searches are based on his misinterpretation of the NFL theorems, which entail no concept of a target. Moreover, his discourse is irrelevant to Darwinian evolution, which is a targetless process.” (Page 25).

No, Dembski did not misinterpret the NFL theorems on which he builds his analysis; the person who misunderstands and misrepresents them is Perakh.  It is statements like this that make one doubt whether Perakh understands what CSI is, either.  Notice the trend in his writing: when Perakh looked for support for an argument, he referenced those who have authored rebuttals of Dembski’s work, but when he looked for an authority to explain the meaning of Dembski’s work, he nearly always cited Dembski himself.  Perakh never performs any math to support his own challenges.  Finally, Perakh never established anywhere that Dembski misunderstood or misapplied any principle of Information Theory.

Finally, Perakh ends the chapter with this gem:

“The arguments showing that the anthropic coincidences do not require the hypothesis of a supernatural intelligence also answer the questions about the compatibility of fitness functions and evolutionary algorithms.” (Page 25).

This is a strawman.  ID Theory has nothing to do with the supernatural.  If it did, then it would not be a scientific theory by the definition of science, which is based upon empiricism.   As is obvious in this debate, Intelligent Design theory is more closely aligned with Information Theory than most sciences.  ID Theory is not about teleology; it is more about front-loading.

William Dembski’s work is based upon pitting “design” against chance. In his book The Design Inference, he used mathematical theorems and formulas to devise a definition of design based upon mathematical probability. It is an empirical way to work with improbable, complex patterns and sequences of information, called specified complexity, also known as complex specified information (CSI). There is no contemplation of the source of the information other than it being front-loaded.  ID Theory involves only the study of the information (CSI) itself. Design = CSI. We can study CSI because it is observable.

There is no speculation of any kind that the source of the information is extraterrestrial beings or any other kind of designer, natural or non-natural. The study is of the information (CSI) itself, nothing else. There are several non-Darwinian conjectures as to how the information could develop without the need for designers, including panspermia, natural genetic engineering, and what is called “front-loading.”

In ID, “design” does not require designers. It can be treated as derived from “intelligence,” as in William Dembski’s book “No Free Lunch,” but he uses mathematics to support his work, not metaphysics. The intelligence could be illusory; all the theorems detect is extreme improbability, because that is all the math can do. And it is called Complex Specified Information. It is the information that ID Theory is about. There is no speculation into the nature of the intelligent source, assuming Dembski was right in determining the source is intelligent in the first place. All that is really required is a transporter of the information, which could be an asteroid that collides with Earth carrying complex DNA in the genome of some unicellular organism. You don’t need a designer to validate ID Theory; ID has nothing to do with designers except for engineers and intelligent agents that are actually observable.


COMPLEX SPECIFIED INFORMATION (CSI) – An Explanation of Specified Complexity

This entry is a sequel to my original blog essay on CSI, which was a more elementary discussion that can be reviewed here.   Complex Specified Information (CSI) is also called specified complexity.  CSI is a concept that is not original to Dr. William Dembski.  Specified Complexity was first noted in 1973 by Origin of Life researcher, Leslie Orgel:

Living organisms are distinguished by their specified complexity. Crystals fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity. [ L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189. Emphases added.]

Before beginning this discussion of CSI, it should first be understood why it is important.  The scientific theory of Intelligent Design (ID) is based upon important concepts such as design, information, and complexity.  Design in the context of ID Theory is discussed in terms of CSI.  The following is the definition of ID Theory:

Intelligent Design Theory in Biology is the scientific theory that artificial intervention is a universally necessary condition of the first initiation of life, development of the first cell, and increasing information in the genome of a population leading to greater complexity evidenced by the generation of original biochemical structures.

Authorities:

* Official Discovery Institute definition: http://www.intelligentdesign.org/whatisid.php
* Stephen Meyer’s definition: http://www.discovery.org/v/1971
* Casey Luskin’s Discussion: http://www.evolutionnews.org/2009/11/misrepresenting_the_definition028051.html
* William Dembski’s definition: http://www.uncommondescent.com/id-defined

Please observe that this expression of ID Theory does not appeal to any intelligence or designer. Richard Dawkins was correct when he said that what is thought to be design can be illusory. Design is defined by William Dembski as Complex Specified Information (CSI).

“Intelligent Design” is an extremely anthropomorphic term in itself.  The Discovery Institute does not work much with the term “intelligence.” The key to ID Theory is not in the term “intelligence,” but in William Dembski’s work in defining design, and that is “Complex Specified Information” (CSI). CSI is the technical term that ID scientists work with. Dembski produced the equations, ran the calculations, and provided a scientifically workable method to determine whether some phenomenon is “designed.” According to Dembski’s book, “The Design Inference” (1998), CSI is based upon statistical probability.

CSI is based upon the theorem:

sp(E) and SP(E) → D(E)

Design, D(E), is inferred when a small probability (SP) event (E) is complex and specified, where

SP(E) means P(E | I) < the Universal Probability Bound. Or, in English, we know an event E is a small probability event when the probability of event E, given I, is less than the Universal Probability Bound, where I stands for all relevant side information and all stochastic hypotheses. This is all in Dembski’s book, “The Design Inference.”

An event E is specified by a pattern independent of E, expressed mathematically as sp(E). Upper-case SP(E) denotes the small probability event we are attempting to determine is CSI, or designed. Lower-case sp(E) denotes the independent pattern, a prediction that we will discover the SP(E). Therefore, if sp(E) and SP(E), then D(E). D(E) means the event E is not only of small probability, but we can conclude it is designed.
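As a rough sketch of how that decision rule could be applied in code (the function names and example probabilities are hypothetical, introduced only for illustration; the bound is the one stated in the text):

```python
# A rough sketch of the rule sp(E) and SP(E) -> D(E).
# Function names and example numbers are hypothetical illustrations;
# the bound is the Universal Probability Bound as given in the text.

UNIVERSAL_PROBABILITY_BOUND = 0.5e-150

def is_small_probability(p_event_given_info):
    """SP(E): P(E | I) falls below the Universal Probability Bound."""
    return p_event_given_info < UNIVERSAL_PROBABILITY_BOUND

def design_inferred(specified, p_event_given_info):
    """D(E): infer design only when E matches an independent pattern (sp(E))
    AND is a small probability event (SP(E))."""
    return specified and is_small_probability(p_event_given_info)

print(design_inferred(specified=True, p_event_given_info=1e-160))   # True
print(design_inferred(specified=True, p_event_given_info=1e-40))    # False: not improbable enough
print(design_inferred(specified=False, p_event_given_info=1e-160))  # False: no independent pattern
```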


Dembski’s Universal Probability Bound = 0.5 × 10^-150, or 0.5 times 10 to the negative 150th power. This is the magic number at which one is scientifically justified in invoking design. It has been said that the improbability Dembski requires in order to ascribe design is comparable to announcing, in advance of the deal, that you are going to be dealt 24 Royal Flushes in a row, and then having the event play out exactly as forecast. In other words, just as intelligence might be entirely illusory, CSI is nothing other than a mathematical ratio that might not have anything to do with actual design.

The odds against dealing a Royal Flush, given all combinations of 5 cards randomly drawn from a full deck of 52 without replacement, are 649,739 to 1. According to Dembski, if someone were dealt a Royal Flush 24 times in a row after an advance announcement predicting it, his contention would be that the event was so improbable that someone cheated, or that “design” had to be involved.
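The arithmetic behind the quoted odds can be checked directly with a short sketch; as the output shows, 24 consecutive Royal Flushes works out to a probability on the order of 10^-140, which gives a feel for the scale of improbability being discussed.

```python
from math import comb

# Check the Royal Flush odds quoted above and the improbability of 24 in a row.

hands = comb(52, 5)            # all 5-card hands: 2,598,960
royal_flushes = 4              # one per suit
p_royal = royal_flushes / hands

print(f"odds against a Royal Flush: {hands / royal_flushes - 1:,.0f} to 1")
# -> 649,739 to 1, matching the figure in the text

p_24_in_a_row = p_royal ** 24
print(f"P(24 Royal Flushes in a row) = {p_24_in_a_row:.2e}")
# -> roughly 3e-140, an extreme improbability of the kind being discussed
```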

(I should say parenthetically that I am oversimplifying CSI here for the sake of making the point that we have already given the term “design” a technical definition that requires no intelligence or design as we understand the everyday use of those words. Therefore, to take issue with the ingenious marketing term “Intelligent Design” is meaningless, because what the theory is called is irrelevant; such a dispute is nothing but haggling over nomenclature. The Discovery Institute could have labeled their product with any name. I would have preferred the title “Bio-information Theory,” but the name is inconsequential.)

A helpful way to understand CSI is this: just as it is improbable to be dealt a Royal Flush, so is the level of difficulty natural selection is up against in producing what appears to be designed in nature. And when CSI is observed in nature, which occurs occasionally, it not only confirms ID predictions and defies Darwinian gradualism, but also tips off a scientist that it might be evidence of additional ID-related mechanisms at work.

It is true that William Dembski’s theorems are based upon the assumption that we can quantify everything in the universe; no argument there. But he only used that logic to derive his Universal Probability Bound, an almost infinitesimally small number: 0.5 × 10^-150, or 0.5 times 10 to the negative 150th power.  (The bound comes from multiplying the roughly 10^80 elementary particles in the observable universe by 10^45 possible state changes per second and about 10^25 seconds, for 10^150 total possible events.)  Do you not think that when a probability is this low it is a safe bet to invoke interruption of natural processes by an intelligent agency? The number is a useful number. If anyone believes the ratio Dembski submitted is flawed, I would ask that person to offer a different number that they believe would more accurately eliminate chance in favor of design.

Design theorists are interested in studying complexity.  The more complex the information is, the better.  CSI is best understood as a pattern so improbable that the chances of such a configuration occurring by sheer happenstance are extremely small. Dembski’s formulas and theorems work best when there is an extremely low probability.

It is a given that we do not know everything in the universe; there are intangible variables such as dark matter, where neutrinos go when they zip in and out of our universe, and the cosmic background radiation. Dembski was aware of these unknowns; it is not as if he ignored them when deriving his theorems. The Universal Probability Bound is not a perfectly absolute number, but it is a scientifically workable number, no less credible than the variables used in the equations supporting the Big Bang Theory. So if one seeks to disqualify CSI on the sole basis that we do not know everything in the universe, then one has just eliminated the Big Bang Theory as scientifically viable.

A religious person is welcome to invoke a teleological inference of a deity, but the moment one does that, one has departed from science and is entertaining religion.  CSI might or might not imply design. That is the whole point of Dembski’s book, “The Design Inference.” In the book he expands on the meaning of CSI, and then presents his reasoning as to why CSI implies design. While those who reject ID Theory see invisible designers hiding behind every tree, the point Dembski makes is that we must first establish design to begin with.

The Delicate Arch in Utah.  Is this bridge a product of design?

The Delicate Arch in Utah is a natural bridge.  It is difficult to debate whether this is an example of specified complexity because some might argue the natural arch is “specified” in the sense that it is a natural bridge.

Arguments in favor of natural arches being specified would emphasize the meaningfulness and functionality of the monument as a bridge.  Also, the mere fact that attention is drawn to this particular natural formation is, in and of itself, a form of specification.

Arguments opposing the idea that such a natural arch is specified would emphasize that human experience has already observed geological processes capable of producing such a formation.  Also, a natural arch is a one-of-a-kind structure: no two arches resemble each other in such detail that the identity of one could be mistaken for the other.  Finally, the concept emphasized by William Dembski is that specification relates to a prediction.  In other words, if someone had drawn this arch in advance on a piece of paper without ever having seen the actual monument, and the land formation later discovered in Utah happened to be an exact replica of the drawing, then design theorists would declare the information specified.

The meaning of the term specified is very important to understanding CSI.  The term “specified” in a certain sense either directly or indirectly refers to a prediction. If someone deals you a Royal Flush, the pattern would be complex. If you’re dealt a Royal Flush again several consecutive times, someone at the poker table is going to accuse you of cheating. The sequence now is increasing in improbability and complexity.  A Royal Flush is specified because it is a pattern that many people are aware of and have identified in advance.

Now, if you or the dealer ANNOUNCES IN ADVANCE that you are going to be dealt a Royal Flush, and sure enough it happens, then there is no longer any question that the target sequence was “specified.”

Dembski explains how the notion of being specified might best be understood in his discussion of what he calls conditionally independent patterns.  In applying his Explanatory Filter, Dembski states:

The patterns that in the presence of complexity or improbability implicate a designing intelligence must be independent of the event whose design is in question. Crucial here is that patterns not be artificially imposed on events after the fact. For instance, if an archer shoots arrows at a wall and we then paint targets around the arrows so that they stick squarely in the bull’s-eyes, we impose a pattern after the fact. Any such pattern is not independent of the arrow’s trajectory. On the other hand, if the targets are set up in advance (“specified”) and then the archer hits them accurately, we know it was not by chance but rather by design (provided, of course, that hitting the targets is sufficiently improbable). The way to characterize this independence of patterns is via the probabilistic notion of conditional independence. A pattern is conditionally independent of an event if adding our knowledge of the pattern to a chance hypothesis does not alter the event’s probability under that hypothesis. The specified in specified complexity refers to such conditionally independent patterns. These are the specifications.  [From: William A. Dembski, The Design Revolution: Answering the Toughest Questions About Intelligent Design (Downers Grove, IL: InterVarsity Press, 2004), 81.]

Mount Rushmore is an example of CSI because it relays information that is both complex and specified. More than just complexity of a hillside, it features the specified information of identifiable U.S. presidents.

Is Mount Rushmore a product of natural processes or an intelligent cause?  Most people would likely agree that this rock formation in the Black Hills of South Dakota is the result of intelligent design.   I believe that an intelligent agent is responsible for this rock formation, and that belief is based upon reasoning.  Notice that when you determined for yourself that this monument is a deliberately sculpted work of an intelligent cause (I assume you did), you did not draw upon a religious view to arrive at that conclusion.

 

The crevices and natural coloration of the rock at Eagle Rock, California create a remarkable illusion of an eagle in flight.

Snowflakes are also considered when contemplating CSI.  Snowflakes are very complex, and appear to also be specified.  However, in spite of the great detail, recognizable pattern, and beauty of a snowflake, no two snowflakes are alike.  A snowflake would be specified if a second one were to be found that identically matched the first.


The shapes of snow crystals are due to the laws of physics, which determine their regular geometric six-pointed pattern.  As such, a snowflake has no CSI whatever, because snowflakes are produced by natural processes.  The snowflake is complex, but it is not complex specified information.  Meteorological conditions are also a factor in the shape of a snow crystal, so snow is a product of both physical law and chance.  One thing to note about snow crystals: because they form from atmospheric conditions governed by the laws of physics, they retain a degree of simplicity despite the seemingly infinite number of shapes they can take.

William Dembski has been challenged on snowflakes in the past by critics who see snowflakes as every bit as complex as simple objects that are known to be designed.   It is true that the complexity of snow crystals makes them good candidates for evidence of design.  This is why the concept of being specified is so important.  However intricate the details of snow may be, it is the lack of specificity that keeps snow crystals from being CSI.  The shortcut test of whether snowflakes are designed would be to find two snowflakes that are identical.  The probability that one snowflake exists is 1 to 1; it is the small probability of an identical replica occurring a second time that would be evidence of design. This is what is meant by being specified.  Specification in the context of ID Theory is not mere intricacy of detailed patterns.

While some ID critics believe snowflakes refute Dembski’s Explanatory Filter, because they take the extremely low probability to imply that snowflakes are designed, I see it as just the opposite.  It is the fact that we know, as a given, that snowflakes are not designed that should lend us confidence in the cases where the Explanatory Filter determines some feature is designed.

This brings up an important point about CSI.  There are many instances where information is highly complex and appears to be specified as well, such as snowflakes.   Information can be arranged in varying degrees of complexity and specificity, yet it is only CSI when the improbability reaches the Universal Probability Bound.  So what do we call something that looks like CSI but is not, because the pattern is determined not to be designed upon application of Dembski’s Explanatory Filter?   When something merely looks like it might be CSI but actually is not, as with snowflakes, Dembski calls this specificational complexity.

Dembski explains:

Because they are patterns, specifications exhibit varying degrees of complexity. A specification’s degree of complexity determines how many specificational resources must be factored in when gauging the level of improbability needed to preclude chance (see the previous point). The more complex the pattern, the more specificational resources must be factored in. The details are technical and involve a generalization of what mathematicians call Kolmogorov complexity. Nevertheless, the basic intuition is straightforward. Low specificational complexity is important in detecting design because it ensures that an event whose design is in question was not simply described after the fact and then dressed up as though it could have been described before the fact.

To see what’s at stake, consider the following two sequences of ten coin tosses: HHHHHHHHHH and HHTHTTTHTH. Which of these would you be more inclined to attribute to chance? Both sequences have the same probability, approximately 1 in 1,000. Nevertheless, the pattern that specifies the first sequence is much simpler than the second. For the first sequence the pattern can be specified with the simple statement “ten heads in a row.” For the second sequence, on the other hand, specifying the pattern requires a considerably longer statement, for instance, “two heads, then a tail, then a head, then three tails, then heads followed by tails and heads.” Think of specificational complexity (not to be confused with specified complexity) as minimum description length. (For more on this, see <www.mdl-research.org>.)

For something to exhibit specified complexity it must have low specificational complexity (as with the sequence HHHHHHHHHH, consisting of ten heads in a row) but high probabilistic complexity (i.e., its probability must be small). It’s this combination of low specificational complexity (a pattern easy to describe in relatively short order) and high probabilistic complexity (something highly unlikely) that makes specified complexity such an effective triangulator of intelligence. But specified complexity’s significance doesn’t end there.

Besides its crucial place in the design inference, specified complexity has also been implicit in much of the self-organizational literature, a field that studies how complex systems emerge from the structure and dynamics of their parts. Because specified complexity balances low specificational complexity with high probabilistic complexity, specified complexity sits at that boundary between order and chaos commonly referred to as the “edge of chaos.” The problem with pure order (low specificational complexity) is that it is predictable and thus largely uninteresting. An example here is a crystal that keeps repeating the same simple pattern over and over. The problem with pure chaos (high probabilistic complexity) is that it is so disordered that it is also uninteresting. (No meaningful patterns emerge from pure chaos. An example here is the debris strewn by a tornado or avalanche.) Rather, it’s at the edge of chaos, neatly ensconced between order and chaos, that interesting things happen. That’s where specified complexity sits. [From: William A. Dembski, The Design Revolution: Answering the Toughest Questions About Intelligent Design (Downers Grove, IL: InterVarsity Press, 2004), 81.]
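A crude way to see the description-length intuition in the quoted coin-toss example is to compare a naive run-length description of each sequence. This is only a toy proxy (true Kolmogorov complexity is uncomputable), but it shows the asymmetry Dembski is pointing to.

```python
from itertools import groupby

# Toy proxy for "specificational complexity" as minimum description length:
# describe a coin-toss sequence by its runs. Real Kolmogorov complexity is
# uncomputable; this only illustrates the intuition in the quoted passage.

def run_length_description(seq):
    """E.g. 'HHHHHHHHHH' -> '10H'; 'HHTHTTTHTH' -> '2H1T1H3T1H1T1H'."""
    return "".join(f"{len(list(run))}{symbol}" for symbol, run in groupby(seq))

for seq in ("HHHHHHHHHH", "HHTHTTTHTH"):
    desc = run_length_description(seq)
    p = 0.5 ** len(seq)                 # same probability for both: about 1 in 1,000
    print(f"{seq}: probability {p:.4g}, description '{desc}' ({len(desc)} chars)")

# Both sequences are equally improbable, but the first has a far shorter
# description ("ten heads in a row"): low specificational complexity combined
# with high probabilistic complexity is what the quote calls specified complexity.
```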

THAT’S WHY I NEVER WALK IN THE FRONT!

This Far Side cartoon is an illustration that Michael Behe uses in his lectures to demonstrate that we can often deduce design.  Even though there are no trappers in sight, it is obvious from the scene that the snare was intentionally designed, as such machinery does not arise by sheer happenstance.  Behe also makes the point that no religious contemplation was required to conclude that someone deliberately set this trap, even though the agent that assembled the machine is absent.

This next image is extremely graphic, but it illustrates the same point.  Here, a Blue Duiker has been trapped in a snare.

Sometimes people just can’t decide whether a formation of information is a result of happenstance or intelligence.  A perfect example of what looks like design but might not be is the “monuments” on Mars.   Are the formations on Mars caused by chance or design, by natural processes or by something artificial?  Here are some more interesting images that help one better understand CSI in a simple way.

It’s also interesting to note that those antagonists who so quickly scoff at ID because of the supposedly unfair inference of designers are automatically conceding design as a given. The teleological inference works both ways: if design points to a designer, then a designer requires design. Without design, a designer does not exist.

As such, if one desires to oppose ID Theory, a preferable argument would be to insist design does not appear in nature, and abandon the teleological inference.

Here’s more on Complex Specified Information (CSI):

* From Discovery Institute, http://www.ideacenter.org/contentmgr/showdetails.php/id/832

* By Dembski himself, http://www.designinference.com/documents/02.02.POISK_article.htm

William Dembski’s book, “The Design Inference” (http://www.designinference.com/desinf.htm).   The Discovery Institute has written about CSI here and here.

Darwinian mechanisms (which are based upon chance) will most likely not be the cause of CSI, because CSI is by definition a small probability event.  CSI is not a zero-probability event; it is a small probability event.  There is still a possibility that Darwinian mechanisms could produce CSI, but CSI is more likely to be caused by something that replaces the element of chance.  Darwinian mechanisms are based upon chance; CSI is a low probability ratio that exposes the absence of chance.  Whatever that absence of chance is (call it intelligence, design, artificial interruption, quantum theory, an asteroid, abiogenesis, RNA self-replication, some unknown pre-life molecular configuration, epigenetics, or whatever) is the most likely cause of CSI.  As such, ID scientists assume that CSI = design.

In “No Free Lunch,” subtitled “Why Specified Complexity Cannot Be Purchased without Intelligence,” Dembski explains why he thinks that CSI is also linked with intelligence.   He further discusses his views here.

CSI is a mathematical probability ratio that exposes a small probability event and removes chance from the equation.  And regardless of what one substitutes to fill that vacuum, ID scientists substitute design.  So, in ID Theory, whenever the word “design” appears, it means CSI.  It is therefore false to read designers into the context of ID Theory, because the ID definition of design is none other than CSI.

The point is that ID scientists define design as CSI.  Therefore, skeptics of ID Theory should cease invoking designers, because all “design” means in terms of ID Theory is the absence of chance, expressed mathematically as a low probability ratio.

CSI is an assumption, not an argument.  CSI is an axiom postulated up front based upon mathematical theorems.  It’s all couched in math.  Unless the small probability ratio reaches zero, no one working out the calculations is going to say “cannot.”  CSI is assumed to be design, and it is assumed natural causes DO NOT generate CSI, because CSI by definition is a small probability event that favors the absence of chance.

We cannot be certain the source is an intelligent agency.  CSI is based upon probabilities.  There are many who credit Darwinian evolution to be the source of complexity.  This is illogical when running the design theorem calculations.  But, it is not impossible.  As Richard Dawkins has noted before, design can be illusory.  The hypothesis that Darwinian evolution is a cause for some small probability event SP (E) could be correct, but it is highly improbable according to the math.

One common demonstration to help people understand how CSI works is to take a letter sequence. This can be done with anything, but the common example is this pattern:

METHINKS•IT•IS•LIKE•A•WEASEL

This letter arrangement is used most often to describe CSI because the math has already been worked out. The bullets represent spaces. There are 27 possibilities at each location in a symbol string 28 characters in length. If natural selection were entirely random, the odds of hitting the phrase would be about 1 in 10^40 (that is, a 1 followed by 40 zeroes). It is a small probability. However, natural selection (NS) is smarter than that, and Richard Dawkins has shown how a selection-assisted search can hit the same target in an impressive 43 attempts (http://evoinfo.org/papers/ConsInfo_NoN.pdf).

In this example, the improbability was only about 1 in 10^40. CSI requires an even more extreme improbability than that. If you take a pattern or model such as METHINKS•IT•IS•LIKE•A•WEASEL and keep adding information, you soon reach improbabilities that are within the domain of CSI.
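For reference, here is a minimal weasel-style program of the kind Dawkins described (a sketch, not Dawkins's original code; the population size and mutation rate are illustrative choices). It shows how quickly a fitness function that already knows the target closes in on it, compared with the roughly 1-in-10^40 odds facing a purely random search.

```python
import random

# Minimal weasel-style cumulative selection sketch (not Dawkins's original code).
# Population size and mutation rate are illustrative choices.

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "      # 27 symbols, as in the text
POP_SIZE, MUTATION_RATE = 100, 0.05

def fitness(phrase):
    """Count of characters matching the target; this is where the
    target-specific information enters the search."""
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(phrase):
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                   for c in phrase)

random.seed(0)
parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    generation += 1
    offspring = [mutate(parent) for _ in range(POP_SIZE)]
    parent = max(offspring, key=fitness)      # keep the best candidate each round

print(f"reached the target in {generation} generations")
print(f"blind-search odds were 1 in 27**{len(TARGET)}, about {27**len(TARGET):.1e}")
```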

For more of Dembski’s applications using his theorems, you might like to reference these papers:

http://evoinfo.org/papers/2010_TheSearchForASearch.pdf

http://marksmannet.com/RobertMarks/REPRINTS/2010-EfficientPerQueryInformationExtraction.pdf

This is a continuation of Claude Shannon’s work. One of the most important contributors to ID Theory is American mathematician Claude Shannon (http://en.wikipedia.org/wiki/Claude_Shannon), who is considered to be the father of Information Theory (http://en.wikipedia.org/wiki/Information_Theory). Essentially, ID Theory is a sub-theory of Information Theory in the field of Bioinformatics. This is one of Dembski’s areas of expertise, http://evoinfo.org/publications/.

When Robert Deyes wrote a review on Stephen Meyer’s “Signature In The Cell,” he noted “When talking about ‘information’ and its relevance to biological design, Intelligent Design theorists have a particular definition in mind.”   Stephen Meyer explained in “Signature In The Cell” that information is: “the attribute inherent in and communicated by alternative sequences or arrangements of something that produce specific effects” (p.86).

Shannon was instrumental in the development of computer science. He built the first maze-solving robotic mouse and wrote a foundational paper on computer chess. When Shannon unveiled his theory for quantifying information, it included several axioms, one of which ties the information gained by an observation to the uncertainty it removes. Similarly, design can be contrasted with chance.
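One standard quantity from Shannon's framework, self-information, makes the link between improbability and information concrete: the information conveyed by an outcome is -log2 of its probability, so the less probable the outcome, the more bits it carries. A minimal illustration:

```python
import math

# Shannon self-information: I(x) = -log2 p(x).
# The less probable an outcome, the more bits of information its occurrence conveys.

def self_information(p):
    return -math.log2(p)

examples = [
    ("a fair coin toss", 0.5),
    ("one letter drawn from a 27-symbol alphabet", 1 / 27),
    ("a specific 28-character phrase by blind chance", 1 / 27**28),
]
for label, p in examples:
    print(f"{label}: {self_information(p):.1f} bits")
# -> 1.0 bits, 4.8 bits, 133.1 bits
```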

It was Peter Medawar who referred to such theorems as “The Law of Conservation of Information.” Dembski’s critics have accused his applications of being too heavily modified to be associated with Medawar’s Law of Conservation of Information. There is no dispute that Dembski made modifications; he modified the theorems to apply to biology.

FROM UNCOMMON DESCENT GLOSSARY:

The Uncommon Descent blog further notes the following in re CSI:

The concept of complex specified information helps us understand the difference between (a) the highly informational, highly contingent aperiodic functional macromolecules of life and (b) regular crystals formed through forces of mechanical necessity, or (c) random polymer strings. In so doing, they identified a very familiar concept — at least to those of us with hardware or software engineering design and development or troubleshooting experience and knowledge. Furthermore, on massive experience, such CSI reliably points to intelligent design when we see it in cases where we independently know the origin story.

What Dembski did with the CSI concept starting in the 1990′s was to:

(i) recognize CSI’s significance as a reliable, empirically observable sign of intelligence,

(ii) point out the general applicability of the concept, and

(iii) provide a probability and information theory based explicitly formal model for quantifying CSI.

(iv) In the current formulation, as at 2005, his metric for CSI, χ (chi), is:

χ = -log2[10^120 · ϕS(T) · P(T|H)]

P(T|H) is the probability of being in a given target zone in a search space, on a relevant chance hypothesis (e.g., the probability of a hand of 13 spades from a shuffled standard deck of cards).

ϕS(T) is a multiplier based on the number of similarly simple and independently specifiable targets (e.g., hands that are all Hearts, all Diamonds, all Clubs, or all Spades).

10^120 is the Seth Lloyd estimate for the maximum number of elementary bit-based operations possible in our observed universe, serving as a reasonable upper limit on the number of search operations.

-log2[ . . . ] converts the modified probability into a measure of information in binary digits, i.e., specified bits. When this value is at least +1, then we may reasonably infer the presence of design from the evidence of CSI alone. (For the example being discussed, χ = -361; i.e., odds of 1 in 635 billion are insufficient to confidently infer design on the gamut of the universe as a whole. But on the gamut of a card game here on Earth, that would be a very different story.) http://www.uncommondescent.com/glossary/
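The glossary's 13-spades example can be reproduced directly from the formula as quoted, using only the standard library (a sketch; the inputs follow the definitions above):

```python
import math
from math import comb

# Reproduce the glossary's 13-spades example of Dembski's chi metric:
# chi = -log2[ 10^120 * phi_S(T) * P(T|H) ]

p_T_given_H = 1 / comb(52, 13)   # probability of being dealt all 13 spades
phi_S_T = 4                      # similarly simple targets: one per suit
chi = -math.log2(1e120 * phi_S_T * p_T_given_H)

print(f"P(T|H) = 1 in {comb(52, 13):,}")   # 1 in 635,013,559,600
print(f"chi    = {chi:.0f} bits")          # about -361, as stated in the glossary
```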

FSCI — “functionally specified complex information” (or, “function-specifying complex information” or — rarely — “functionally complex, specified information” [FCSI])) is a commonplace in engineered systems: complex functional entities that are based on specific target-zone configurations and operations of multiple parts with large configuration spaces equivalent to at least 500 – 1,000 bits; i.e. well beyond the Dembski-type universal probability bound.

In the UD context, it is often seen as a descriptive term for a useful subset of CSI first identified by origin of life researchers in the 1970s – 80′s. As Thaxton et al summed up in their 1984 technical work that launched the design theory movement, The Mystery of Life’s Origin:

. . . “order” is a statistical concept referring to regularity such as might characterize a series of digits in a number, or the ions of an inorganic crystal. On the other hand, “organization” refers to physical systems and the specific set of spatio-temporal and functional relationships among their parts. Yockey and Wickens note that informational macromolecules have a low degree of order but a high degree of specified complexity.” [TMLO (FTE, 1984), Ch 8, p. 130.]

So, since in cases of known origin such systems are invariably the result of design, it is confidently but provisionally inferred that FSCI is a reliable sign of intelligent design. http://www.uncommondescent.com/glossary/


How Sodium Channels Support Intelligent Design Theory

A single-celled choanoflagellate, Monosiga brevicollis

Sodium (Na) is an element that often has a plus sign (+) written next to it, indicating that its ion carries a positive charge.  Each element in the Periodic Table is defined by its number of protons and, in its neutral atoms, carries a matching number of electrons.  Atoms are not in their most stable configuration until they complete their valence electron shells, which they do by either giving up electrons or taking on electrons from other elements.

Sodium has 11 electrons:

  • 2 in its first energy level (1s²)
  • 8 in the second (2s² 2p⁶)
  • 1 in the outer energy level (3s¹)

Sodium commonly appears as a positively charged ion because it gives up one electron.  Sodium is not stable with a lone negatively charged electron spinning around in its outermost level, so Na will give up its outer electron whenever possible. By doing so, energy level 2 becomes the outermost energy level and exhibits stability with its complete set of 8 electrons.  This electrical property helps make sodium (Na) an important element in neuron activity and the nervous systems of living things.

Nervous systems and their component neuron cells are a key innovation making communication possible across vast distances between cells in the body, sensory perception, behavior, and complex animal brains.

Researchers from the University of Texas at Austin, led by Harold Zakon, professor of neurobiology, together with Professor David Hillis and graduate student Benjamin Liebeskind, coauthored a paper published in PNAS in May 2011.   Zakon notes, “The first nervous systems appeared in jellyfish-like animals six hundred million years ago or so.”  In order for nervous systems to be possible, their precursor sodium channels would have had to be in place prior to the development of jellyfish.  Zakon confirmed, “We have now discovered that sodium channels were around well before nervous systems evolved.”

According to the University of Texas at Austin press release, sodium channels are an integral part of a neuron’s complex machinery. Sodium channels are described as being “like floodgates lodged throughout a neuron’s levee-like cellular membrane. When the channels open, sodium floods through the membrane into the neuron.”  This generates nerve impulses, from which complex nervous systems can be built, all because of the seemingly limitless electrical applications made possible by the positive charge of sodium ions.

The Univ. of Texas research team discovered the genes for such sodium channels hiding in a primitive single-celled organism, a choanoflagellate.  Choanoflagellates are Eukaryotes, the supposed evolutionary ancestors of multicellular animals like jellyfish and humans. It’s interesting to note that not only are choanoflagellates unicellular, but they have no neurons either.

The press release further states,

Because the sodium channel genes were found in choanoflagellates, the scientists propose that the genes originated not only before the advent of the nervous system, but even before the evolution of multicellularity itself.

Sodium channel genes are complex.  The Univ. of Texas research team illustrates such a gene in their PNAS paper here:

Sodium-Channel Protein

The image above is Figure 1 in the PNAS paper, a hypothetical rendition of the secondary structure of a sodium-channel protein.  The transmembrane domains at the top (DI–DIV), their component segments (S1–S6), and their connecting loops (in white) are in view. The pore loops (P loops), which dip down into the membrane, form the ion-selectivity filter. The inactivation gate resides on the long loop between DIII/S6 and DIV/S1.  The middle section illustrates how the domains cluster to form the protein and its pore.  The lower section displays the fine structure of one of the domains with the pore loop in the foreground. The black dots on the pore loops in the top and bottom panels represent the locations of the amino acids that make up the pore motif.

Monosiga brevicollis

As the image immediately above indicates, the research shows that the sodium channel protein is highly conserved, in that it existed at nearly the highest known taxonomic level within the domain Eukarya.  In other words, it has essentially existed from the very beginning of the domain Eukarya.

In another study of sodium channels, published in Physiological Genomics (May 2011), Swiss researchers reported the same conclusion.  The story was featured in both Science Daily and PhysOrg under the headline “Fluid equilibrium in prehistoric organisms sheds light on a turning point in evolution.”

The Swiss team researched how sodium channels help solve a problem for primitive cells that cannot pump sodium across their membranes effectively.  The inability to pump sodium was an evolutionary roadblock.   Bernard Rossier (University of Lausanne) investigated how the problem was solved: a certain subunit of a gene for pumping sodium suddenly “appeared” out of nowhere, and the rest was history.

In humans, the sodium channel protein (ENaC) traverses a cell’s membrane and facilitates the movement of salt into and out of the cell.  ENaC is regulated by the hormone aldosterone.  The Swiss researchers found that ENaC and Na, K-ATPase, an enzyme that also plays a role in pumping and transporting sodium, were in place before the emergence of multi-celled organisms.

When tracing the alpha, beta, and gamma subunits of ENaC back, the Swiss “team found that the beta subunit appeared slightly before the emergence of Metazoans (multicellular animals with differentiated tissues) roughly 750 million years ago.”

Rossier was unsure exactly when this emergence occurred.  Dr. Rossier said that although it is possible that the genes for ENaC originated in the common ancestor of eukaryotes and were lost in all branches except the Metazoa and the Excavates, there is another possibility. There could have been a lateral transfer of genes between N. gruberi and a Metazoan ancestor, one that lived between the last common ancestor of all eukaryotes and the first Metazoans.

Phylogenetic tree of ENaC/degenerin

Both the Univ. of Texas and the Swiss teams used phylogenetic trees to examine the evolution of these highly conserved proteins and enzymes, and both studies make clear that these biochemical systems and complex cellular machinery have been highly conserved and present in species from the earliest dawn of the Eukarya domain.  Because these highly complex systems were required for eukaryote evolution to be possible, yet existed from the very beginning of eukaryotic cells, the evidence is more supportive of Intelligent Design theory than of known mechanisms of evolution.
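Since both teams reason from phylogenetic trees, here is a minimal sketch of how such a tree can be inspected programmatically. It assumes Biopython is installed and uses a tiny made-up Newick string; the taxa and branch lengths are illustrative, not data taken from either paper.

```python
# Minimal inspection of a phylogenetic tree in Newick format using Biopython.
# The tree string below is a made-up illustration, not data from either study.
from io import StringIO
from Bio import Phylo

newick = "(Monosiga_brevicollis:0.9,(Jellyfish:0.4,Human:0.4):0.5);"
tree = Phylo.read(StringIO(newick), "newick")

# Quick ASCII rendering of the topology.
Phylo.draw_ascii(tree)

# List the terminal taxa; if a gene family's members appear in all of them,
# the gene predates their last common ancestor.
for leaf in tree.get_terminals():
    print(leaf.name)
```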

Posted in MOLECULAR BIOLOGY | Leave a comment

INTELLIGENT DESIGN THEORY EXPLAINED

ID for Dummies
TECHNICAL DEFINITIONS OF ID:

1. MECHANISM: ID as a mechanism in and of itself – Intelligent Design is the action and result of artificial intervention interrupting undirected natural processes, such as natural selection.

2. HYPOTHESIS: ID as a scientific hypothesis in biology – Intelligent Design is the proposition that evolution requires an artificial intervention in addition to natural selection and mutations.

3. SCIENTIFIC THEORY: Intelligent Design Theory in Biology is the scientific theory that artificial intervention is a universally necessary condition of the first initiation of life, development of the first cell, and increasing information in the genome of a population leading to greater complexity evidenced by the generation of original biochemical structures.

Authorities:

* Official Discovery Institute definition: http://www.intelligentdesign.org/whatisid.php
* Stephen Meyer’s definition: http://www.discovery.org/v/1971
* Casey Luskin’s Discussion: http://www.evolutionnews.org/2009/11/misrepresenting_the_definition028051.html
* William Dembski’s definition: http://www.uncommondescent.com/id-defined/

ID Theory does not recognize any designers per se because there are numerous possible sources of design. Whether there is an ultimate “designer” is a philosophical inference that has little to do with science. ID is a study of information and design. The scientific issue is the source of the information or intelligence that leads to design. Design does not necessarily imply a designer; it is only evidence that there is a source of intelligence, which could be entirely artificial.

There is an expanded discussion on the technical definitions of Intelligent Design here.

Conjectures as to Sources of Intelligent Design:

Such artificial sources include:

1. Natural Genetic Engineering, http://www.microbialinformaticsj.com/content/1/1/11

2. Quantum biological hypothesis, http://www.physorg.com/news/2011-01-scientists-erase-energy.html; and this article.

3. Reductionism, http://www.uncommondescent.com/science/2010-coming-down-from-the-reductionism-trip/

4. Problem Solving on a Molecular Level: http://www.sciencedaily.com/releases/2011/01/110125172418.htm

5. Although the front-loading hypothesis offered by theistic evolutionists is rejected by ID proponents, it is still plausible that information might enter the universe in a similar manner, which would explain events such as the Cambrian Explosion.

6. Extraterrestrials: http://www.youtube.com/watch?v=uUV55M_ncns&feature=related; and this article.

Definition of Irreducible Complexity:

“If it could be demonstrated that any complex organ existed which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down.” – Charles Darwin, Origin of Species

Since the publication of Darwin’s Black Box, Behe has refined the definition of irreducible complexity. In 1996 he wrote that “any precursor to an irreducibly complex system that is missing a part is by definition nonfunctional.”(Behe, M, 1996b. Evidence for Intelligent Design from Biochemistry, a speech given at the Discovery Institute’s God & Culture Conference, August 10, 1996 Seattle, WA. http://www.arn.org/docs/behe/mb_idfrombiochemistry.htm).

By defining irreducible complexity in terms of “nonfunctionality,” Behe casts light on the fundamental problem with evolutionary theory: evolution cannot produce something where there would be a non-functional intermediate. Natural selection only preserves or “selects” those structures which are functional. If it is not functional, it cannot be naturally selected. Thus, Behe’s latest definition of irreducible complexity is as follows:

“An irreducibly complex evolutionary pathway is one that contains one or more unselected steps (that is, one or more necessary-but-unselected mutations). The degree of irreducible complexity is the number of unselected steps in the pathway.” (A Response to Critics of Darwin’s Black Box, by Michael Behe, PCID, Volume 1.1, January February March, 2002; iscid.org/)

Quotes taken from: http://www.ideacenter.org/contentmgr/showdetails.php/id/840
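Behe’s refined definition is quantitative: the degree of irreducible complexity is the number of unselected steps in a pathway. As a minimal sketch, assuming a hypothetical pathway annotated step by step with whether each mutation would have been selectable on its own, one could tally that degree as follows (the pathway data are invented purely for illustration):

```python
# Toy tally of Behe's "degree of irreducible complexity": the number of
# necessary-but-unselected steps in a proposed pathway.
# The pathway below is invented purely for illustration.

pathway = [
    {"step": "mutation A", "necessary": True,  "selectable_on_its_own": True},
    {"step": "mutation B", "necessary": True,  "selectable_on_its_own": False},
    {"step": "mutation C", "necessary": True,  "selectable_on_its_own": False},
    {"step": "mutation D", "necessary": False, "selectable_on_its_own": True},
]

def degree_of_irreducible_complexity(steps) -> int:
    """Count necessary steps that selection could not have favored individually."""
    return sum(1 for s in steps if s["necessary"] and not s["selectable_on_its_own"])

print(degree_of_irreducible_complexity(pathway))  # -> 2 for this invented pathway
```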

One thing that is glaringly evident from the quotations above is that the heart of irreducible complexity goes straight to the falsifiability of Darwin’s theory, as expressly stated by Darwin himself. Attacking this issue is therefore a delicate matter, because otherwise Darwin’s theory could be held to be unfalsifiable, and if a theory is not falsifiable, then it is not science by definition.

A more comprehensive discussion of irreducible complexity is available here.

Definition of Information, or Complex Specified Information (aka CSI):

Intelligence and Information with respect to ID theory:

The scientific method is commonly described as a five-step process involving observation, hypothesis, experiment, results, and conclusion. Intelligent design begins with the observation that intelligent agents produce complex and specified information (CSI).

It was the work of William Dembski that made it possible to quantify information and complexity so that design and information can be studied in molecular biology as their own independent field of study. The quantified versions of these are specified complexity and complex specified information (CSI). Design theorists hypothesize that if a natural object was designed, it will contain high levels of CSI. Scientists then perform experimental tests upon natural objects to determine if they contain complex and specified information. One easily testable form of CSI is irreducible complexity, which can be investigated by experimentally reverse-engineering biological structures to see if they require all of their parts to function. When ID researchers find irreducible complexity in biology, they conclude that such structures were designed. Irreducible complexity is falsifiable, and therefore a legitimate scientific hypothesis. Quotes taken from the Discovery Institute here.
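As a deliberately simplified sketch of the kind of arithmetic involved in Dembski’s framework: under a uniform chance hypothesis, a specified DNA sequence of length n has probability (1/4)^n, which is an information content of 2n bits, and Dembski’s universal probability bound of 10^-150 corresponds to roughly 500 bits. The code below only illustrates that arithmetic; it is not Dembski’s full method, which also has to account for the specification itself and for probabilistic resources.

```python
# Toy illustration: information content of a specified DNA sequence under a
# uniform chance hypothesis, compared to Dembski's universal probability bound.
# This is a simplification for illustration, not Dembski's full procedure.
import math

UNIVERSAL_PROBABILITY_BOUND = 1e-150                      # Dembski's bound
BOUND_IN_BITS = -math.log2(UNIVERSAL_PROBABILITY_BOUND)   # ~498 bits

def information_bits_uniform_dna(length: int) -> float:
    """-log2 of the probability of one specific DNA sequence of the given
    length under a uniform, independent chance hypothesis: 2 bits per base."""
    return -length * math.log2(0.25)

for n in (100, 200, 300):
    bits = information_bits_uniform_dna(n)
    verdict = "exceeds" if bits > BOUND_IN_BITS else "falls below"
    print(f"{n} bases -> {bits:.0f} bits ({verdict} the ~{BOUND_IN_BITS:.0f}-bit bound)")
```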

The basic protest of scientists against ID is the ambiguity of the definitions of information, complexity, specified complexity, and complex specified information (CSI). To understand Dembski better, this is a good video for getting the gist of what the mathematics are attempting to analyze: http://www.youtube.com/watch?v=vV5vThBcz1g&feature=related The excellent mathematical explanation starts at about the 2:57 mark.  For more on Dembski’s work, see here and here.

So, in other words, when we see in the biological structure-producing DNA machinery the ability to create some structures, and not others, which perform some specific action and not some other specific action, we can legitimately say that we have complex genetic information. When we specify this information as necessary for some function given a preexisting pattern, then we can say it was designed. This is called “complex specified information” or “CSI”.

If a function vital to survival of an organism of a given structure (the pre-existing specified pattern) could occur only if a given set of parts (the complex information) were present, and this complex set of parts were to come into being, then we could justifiably infer it was designed.

Because we can observe intelligence being able to manipulate parts in an innovative manner to create novel CSI, the presence of CSI indicates design at some level, and removes the possibility that a chance-law mechanism such as the mutation-selection mechanism was responsible for it. Novel CSI itself cannot be generated by a chance-law based process, but rather can only be shuffled around. As Stephen Meyer says, “Because we know intelligent agents can (and do) produce complex and functionally specified sequences of symbols and arrangements of matter (i.e., information content), intelligent agency qualifies as a sufficient causal explanation for the origin of this effect.” Quote taken from: http://www.ideacenter.org/contentmgr/showdetails.php/id/832

DEFINITION OF INTELLIGENCE:

There is no special ID definition of intelligence other than what is defined in Webster’s dictionary. In practice, intelligence is treated operationally: through the study and analysis of a system’s components, a design theorist seeks to determine whether various natural structures are the product of chance, natural law, intelligent design, or some combination thereof.

Such research is conducted by observing the types of information produced when intelligent agents act. Scientists then seek to find objects in nature that have the same types of informational properties which we commonly know come from intelligence.

Definition of Intelligent Design Discussed:

What is intelligent design?

Intelligent design refers to a scientific research program as well as a community of scientists, philosophers and other scholars who seek evidence of design in nature. The theory of intelligent design holds that certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection. Through the study and analysis of a system’s components, a design theorist is able to determine whether various natural structures are the product of chance, natural law, intelligent design, or some combination thereof. Such research is conducted by observing the types of information produced when intelligent agents act. Scientists then seek to find objects which have those same types of informational properties which we commonly know come from intelligence.

Intelligent design has applied these scientific methods to detect design in irreducibly complex biological structures, the complex and specified information content in DNA, the life-sustaining physical architecture of the universe, and the geologically rapid origin of biological diversity in the fossil record during the Cambrian explosion approximately 530 million years ago. [Source: http://www.intelligentdesign.org/whatisid.php]

The theory of intelligent design (ID) holds that certain features of the universe and of living things are best explained by an intelligent cause rather than an undirected process such as natural selection. ID is thus a scientific disagreement with the core claim of evolutionary theory that the apparent design of living systems is an illusion.

In a broader sense, Intelligent Design is simply the science of design detection — how to recognize patterns arranged by an intelligent cause for a purpose. Design detection is used in a number of scientific fields, including anthropology, forensic sciences that seek to explain the cause of events such as a death or fire, cryptanalysis and the search for extraterrestrial intelligence (SETI). An inference that certain biological information may be the product of an intelligent cause can be tested or evaluated in the same manner as scientists daily test for design in other sciences.

ID is controversial because of the implications of its evidence, rather than the significant weight of its evidence. ID proponents believe science should be conducted objectively, without regard to the implications of its findings. This is particularly necessary in origins science because of its historical (and thus very subjective) nature, and because it is a science that unavoidably impacts religion.

Positive evidence of design in living systems consists of the semantic, meaningful or functional nature of biological information, the lack of any known law that can explain the sequence of symbols that carry the “messages,” and statistical and experimental evidence that tends to rule out chance as a plausible explanation. Other evidence challenges the adequacy of natural or material causes to explain both the origin and diversity of life.

ID RESEARCH

You can access a more comprehensive list of about one hundred peer-reviewed pro-ID research published in science journals here.

1. Here’s a research paper done by the University of Washington that was published in Nature:
http://www.nature.com/nature/journal/v453/n7192/abs/nature06879.html and http://uwnews.washington.edu/ni/article.asp?articleID=40536. Its significance to ID Theory is discussed here: http://www.uncommondescent.com/intelligent-design/intelligent-design-research-published-in-nature/

2. Here’s a more current one under peer review, http://faculty.arts.ubc.ca/rjohns/spontaneous_4.pdf

The crux of ID Theory is a study of biological information. Up until now, ID has faced the difficult problem of identifying “design.” Information must first be quantified before design can be recognized, because an unnatural, artificial intervention cannot be demonstrated unless complexity is defined.

Here’s a recently peer-reviewed scientific paper that might resolve this problem. The paper was published last month in Synthese and is entitled “Self-Organisation in Dynamical Systems: A Limiting Result.” The “self” means without external help. The paper shows that physical laws, operating on an initially random arrangement of matter, cannot produce complex objects with any reasonable chance in any reasonable time. The argument is based on a number of original mathematical theorems that are proved in the paper. For a more detailed explanation, see http://www.uncommondescent.com/biology/the-limits-of-self-organisation/#more-15265.

3. And another one very recent, http://www.physorg.com/news204810859.html.

This is a new Darwin-bypassing hypothesis for the evolution of shape, advanced under the name “morphogenesis” by Stuart Pivar of the Synthetic Life Lab in NYC. The hypothesis grew out of research in astrobiology and bionics, including artificial bio-components and limbs used in the medical industry. The concept is that the formation of biological organisms might be driven more by developmental dynamics occurring inside the embryo than by the genome. You can download the research paper from a link in the article. The article linked above states, “How life originated and evolved is arguably the greatest unsolved problem facing science. Thousands of scientists and scores of organizations and scientific journals are dedicated to discovering mechanisms underlying the mystery.” Although the hypothesis provides a fully naturalistic explanation for the development of form, the causation is entirely independent of genetics or natural selection.

The jury is still out on several hypotheses regarding irreducible complexity. For a comprehensive list, see http://www.discovery.org/scripts/viewDB/index.php?command=submitSearchQuery.

4. Here’s an interesting research paper. It’s entitled “The Case Against a Darwinian Origin of Protein Folds” – http://bio-complexity.org/ojs/index.php/main/article/view/BIO-C.2010.1/BIO-C.2010.1. It is discussed further in the ID blog linked here, http://www.idintheuk.blogspot.com/.

We are still looking to see if we can detect design in the first place. Since there is much to learn in molecular and cellular biology, we don’t know yet what we are going to find. What we do know is that the nucleotide bases adenine, cytosine, guanine, and thymine (the purines and pyrimidines), together with deoxyribose and phosphate, combine chemically to form DNA, and that DNA carries information. That information is somehow processed. The question is how. It is as if, at some juncture, an intelligent computer programmer set up the software to process DNA in the first place. Perhaps the first cell that caused life on earth is extraterrestrial. We just don’t know. Somehow, the information was downloaded to begin with. Even the smallest single-celled organisms require several hundred genes to function.
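As a small illustration of DNA carrying information that gets processed, here is a sketch that reads a short DNA string in triplets and maps each codon to an amino acid. Only a handful of codons from the standard genetic code are included for brevity, unknown codons are marked “X,” and the example sequence is made up.

```python
# Read a DNA string codon by codon and translate it with a (partial)
# standard genetic code.  Only a few codons are included for brevity,
# so unknown codons are rendered as 'X'.  The sequence is invented.

PARTIAL_CODON_TABLE = {
    "ATG": "M",              # methionine (start)
    "TGG": "W",              # tryptophan
    "TTT": "F", "TTC": "F",  # phenylalanine
    "AAA": "K", "AAG": "K",  # lysine
    "GAA": "E", "GAG": "E",  # glutamate
    "TAA": "*", "TAG": "*", "TGA": "*",  # stop codons
}

def translate(dna: str) -> str:
    protein = []
    for i in range(0, len(dna) - len(dna) % 3, 3):
        codon = dna[i:i + 3]
        amino_acid = PARTIAL_CODON_TABLE.get(codon, "X")
        if amino_acid == "*":
            break  # a stop codon ends translation
        protein.append(amino_acid)
    return "".join(protein)

print(translate("ATGTTTAAAGAGTGGTAA"))  # -> "MFKEW"
```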

How did the building blocks of information increase in complexity without guidance before a living organism existed in the first place? How does Darwin’s theory operate on a molecular level? This is the kind of research that ID will be involved with. Questions about species and variation occurring in a population sit on the other side of the chain, and I don’t think the answer will be found there. Species are a product of evolution. What ID is interested in studying is how the information increased in complexity, because mutations alone have not been an adequate answer, but only lead to new hypotheses.

One might think of abiogenesis, but it seems as if that same process continues on throughout all life. In other words, it is my hypothesis that whatever information processing directs cell division to reproduce reptiles, primates, and other life forms, including humans, is the identical process that occurred in the original abiogenesis event. I believe abiogenesis is very much testable, because the information generation that takes place millions of times a minute in present life forms is the identical process that took place to form the original cell. In other words, just as embryology taught us that the formation of an embryo portrays a picture of the evolutionary process, so likewise the abiogenesis process is constantly recurring as well. ID’s search for design might end up being falsified, but I think there will be some surprise discoveries along the way. That’s often the case in the history of science.

ID predicts:
1. Information stored in DNA can be quantified and measured (see the sketch after this list).
2. Biological complexity can be quantified and measured.
3. The blood clotting process is irreducibly complex.
4. Bacteria flagella are irreducibly complex.
5. The cilium is irreducibly complex.
6. The illuminating mechanism of a firefly is irreducibly complex (that one’s my own)
7. There are geologic processes that cause rapid fossilization to occur, probably in about 100 years rather than epochs of time.
8. The fossil record will show morphology consistent with punctuated equilibrium instead of phyletic gradualism.
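Regarding predictions 1 and 2 above, here is one conventional way to put a number on the information in a DNA string: its Shannon entropy in bits per base. This is a standard information-theoretic measure, not the same thing as CSI, and the sequences below are made-up examples.

```python
# Shannon entropy (bits per base) of a DNA string: one standard way to
# "quantify and measure" sequence information.  Note that this measures
# statistical information, not specified complexity.  Example sequences
# are invented.
from collections import Counter
import math

def shannon_entropy_bits_per_base(dna: str) -> float:
    """Average information per base, in bits, of the observed base frequencies."""
    counts = Counter(dna)
    total = len(dna)
    return sum((c / total) * math.log2(total / c) for c in counts.values())

# A mixed sequence carries close to 2 bits per base; a monotonous one carries 0.
print(f"{shannon_entropy_bits_per_base('ATGCATGCATGCGGCC'):.3f} bits/base")
print(f"{shannon_entropy_bits_per_base('AAAAAAAAAAAAAAAA'):.3f} bits/base")
```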

Posted in ID THE SCIENTIFIC THEORY | Tagged | 26 Comments

Biophysicists Consider Tumor Suppressor Protein 53 To Be A Dimmer Switch

Protein p53 with DNA

Protein 53 (or tumor protein 53) is a tumor suppressor protein that in humans is encoded by the TP53 gene.  Since 2003, biophysicists have described protein 53 (p53) as a rheostat, or in layman’s terms, a dimmer switch.  Although this terminology is meant as a metaphor, there is much validity to the observation that p53 functions like a dimmer switch.  Ongoing examination of p53 indicates that it does more than operate as an on/off switch: it regulates ranges of activity that are fine-tuned as needed in the cell.

Molecular Rheostats Control Expression of Genes

In 2003, a review paper published in the January 10 edition of Cell, entitled “Regulating the Regulators: Lysine Modifications Make Their Mark,” stated in the abstract:

Although the composition of this machinery is largely known, mechanisms regulating its activity by covalent modification are just coming into focus.  Here, we review several cases of ubiquitination, sumoylation, and acetylation that link specific covalent modification of the transcriptional apparatus to their regulatory function. We propose that potential cascades of modifications serve as molecular rheostats that fine-tune the control of transcription in diverse organisms [The emphasis is mine].

In their 2003 review paper, Richard Freiman of the Howard Hughes Medical Institute and Robert Tjian of UC Berkeley itemize numerous examples of how molecular systems regulate genes.  The paper employs vocabulary such as “elaborate,” “intricate,” “exquisite,” and “dramatic.”

The paper begins by asserting the following:

The temporal and spatial control of gene expression is one of the most fundamental processes in biology, and we now realize that it encompasses many layers of complexity and intricate mechanisms. To begin understanding this process, researchers have identified and partly characterized the elaborate molecular apparatus responsible for executing the control of gene expression [emphasis mine].

The paper further describes molecular rheostat function, “The molecular machinery responsible for controlling transcription by RNA polymerase II (RNA pol II) is considerably more complex than anyone had anticipated.” [Emphasis added].  The Freiman and Tjian (2003) paper continues:

“It is not hard to envision that these lysine residues therefore serve as critical molecular switches that can respond to different signals in highly specific ways. In addition, since most proteins contain many lysine residues, transcription factors may undergo multiple modifications simultaneously or in sequential order, pointing to the possibility of generating complex networks of regulatory events.” [Emphasis mine]

Therefore, the Freiman and Tjian (2003) paper concludes:

“Clearly, transcription is exquisitely regulated in all organisms, and one mechanism utilized to achieve such regulation is covalent modification of the transcriptional machinery.  Future studies in diverse organisms and specialized regulatory pathways should further illuminate how transcription factor modification contributes to the elaborate mechanisms of gene regulation.”

Molecular Rheostats In Tadpole Spinal Cord Development

A research paper hot off the press is Zhang, Issberner and Sillar, “Development of a spinal locomotor rheostat,” PNAS, published online before print June 27, 2011, doi: 10.1073/pnas.1018512108.  In Zhang et al. (2011), Scottish scientists examining Xenopus tadpole spinal cord development found that the first pools of neurons are all alike.  The neurons rapidly sort into ventral and dorsal domains within a day.  During tadpole development, the neurons become more specialized as the tadpole’s need to swim with greater finesse increases.

The Zhang et al (2011) PNAS paper reads:

“This unfolding developmental plan, which occurs in the absence of movement, probably equips the organism with the neuronal substrate to bend, pitch, roll, and accelerate during swimming in ways that will be important for survival during the period of free-swimming larval life that ensues.” [Emphasis mine]

It is very difficult to avoid inferring from this quote that tadpole development features foresight as to what the tadpole will need, and fine-tunes the “rheostat” of neural specialization to permit the tadpole to interact with its environment.

Protein p53 Can Act As A Dimmer Switch Controlling Cell Suicide

Molecular Rheostats In Humans

Medical Xpress recently reported that scientists at the Salk Institute for Biological Studies have found clues to the functioning of an important damage-response protein in cells. The protein, p53, can cause cells to stop dividing or even to commit suicide when they show signs of DNA damage.  Protein 53 is responsible for much of the tissue destruction that follows exposure to ionizing radiation or DNA-damaging drugs such as the ones commonly used for cancer therapy.

The tumor suppressor protein p53 (green) and its regulator Mdm2 (red) are shown in breast cancer cells using fluorescent tags.

Geoffrey M. Wahl, professor in the Salk Institute’s Gene Expression Laboratory, describes p53 as “… like a dimmer switch, or rheostat, that helps control the level of p53 activity in a critical stem cell population and the offspring they generate.”  Professor Wahl is the senior author of the study, which appears online in the journal Genes & Development on July 1, 2011. “In principle, controlling this switch with drugs could reduce the unwanted effects from DNA-damaging chemotherapy or radiation treatment, allowing higher doses to be used.”

The regulator protein p53 is a decision maker concerning DNA repair: it decides whether repairs should proceed.  If not, p53 commands the affected cell to commit suicide.  Now, how’s that for intelligent design!  The Salk Institute findings show “that a short segment on p53 is needed to fine-tune the protein’s activity in blood-forming stem cells and their progeny after they incur DNA damage [emphasis mine].”
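To make the “dimmer switch versus on/off switch” distinction concrete, here is a toy model contrasting a step response with a graded Hill-type response to a DNA-damage signal. It is purely illustrative; the parameters are invented and it does not represent the Salk team’s actual mechanism.

```python
# Toy contrast between an on/off switch and a "dimmer switch" (rheostat):
# p53-like activity as a graded Hill-type function of a DNA-damage signal.
# Parameters are invented for illustration only.

def on_off_switch(damage: float, threshold: float = 0.5) -> float:
    """All-or-nothing response."""
    return 1.0 if damage >= threshold else 0.0

def rheostat(damage: float, half_max: float = 0.5, hill_coefficient: float = 2.0) -> float:
    """Graded, tunable response (Hill equation); output between 0 and 1."""
    d = damage ** hill_coefficient
    return d / (d + half_max ** hill_coefficient)

if __name__ == "__main__":
    print("damage   on/off   rheostat")
    for damage in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"{damage:6.2f}   {on_off_switch(damage):6.2f}   {rheostat(damage):8.3f}")
```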

The article notes that this short segment is “an evolutionarily conserved regulatory segment of p53.” The term “conserved” is the word biologists use for a sequence or process that was fully functional when it first appeared early in the history of life and has remained essentially unchanged ever since.

The Medical Xpress article goes on to say this about protein 53:

“One problem with p53 is that it apparently evolved to protect the integrity of the genome for future generations, rather than to prolong the lives of individual cells or animals. From the point of view of an animal, p53 sometimes goes too far in killing cells or suppressing growth.” [Emphasis mine].

Excuse me?  Did the medical science journalist just say “evolved to”? Someone needs to remind Darwinists every once in a while that evolution does not evolve “to” do anything.  Natural processes are supposed to be entirely unguided, and to suggest foresight implies teleology.  Darwinists continue to remind each other that they need to quit using teleological language, but I suppose it is difficult mental gymnastics at times to avoid invoking intelligent design.

The existence of molecular rheostats and dimmer switches that fine-tune processes says nothing about how they came into existence and became fine-tuned.  The only rheostats and dimmer switches we know about, even if they are broken ones, are intelligently designed. 

Posted in MOLECULAR BIOLOGY | 3 Comments

STRAWMAN ARGUMENTS AGAINST ID THEORY

Many Arguments Against Intelligent Design Are a Strawman

The most common arguments against Intelligent Design theory are logic fallacies, such as the following:

1. Red herring
2. Unfounded rhetoric
3. Ad hominem
4. Special pleading
5. Appeals to the argument from ignorance
6. Hand-waving away evidence
7. Strawman
8. Circular reasoning (especially begging the question that irreducible complexity has not yet been falsified)
9. Irrelevance
10. Illogical inferences and non sequiturs
11. False dichotomies
12. Using outdated references and obsolete sources in support of Darwinism to refute ID Theory

I am not going to address all the above categories, but will focus on the strawman claims.  None of the Discovery Institute fellows or proponents hold the views that Darwinists portray as the position held by ID proponents.  Take the false dichotomy contention as an example.  It should be understood by everyone, regardless of their opinions on the controversy involving Intelligent Design and evolution, that these theories are NOT a dichotomy.  Supporting evidence that confirms one theory does not automatically refute the other by default.  Such a notion is a false dichotomy.  There is no contradiction between the two theories.

Intelligent Design is a scientific theory because it attempts to explain how DNA might have originated on planet Earth, and how information increases in the genome of a population from one generation to the next, resulting in greater complexity.  Science is a method for acquiring knowledge and understanding of phenomena, and scientific theories explain the empirical data obtained by scientific research.

Intelligent Design research does not refute evolution, natural selection or variations by genetic mutations. ID scientists adopt the Big Bang, verify the old age of Earth, confirm evolutionary theory, and support common descent. ID Theory proponents repudiate creationism.  Again, unlike creationism, there is no contradiction between evolutionary theory and Intelligent Design.

Intelligent Design simply suggests that there are other mechanisms, at least one, that supplement known natural processes.  Intelligent Design is open to non-natural conjectures, but not supernatural speculation, because science must adhere to empiricism and that which is observable.  Natural processes might have been interrupted by an artificial agency, such as a meteorite or other extraterrestrial interference.

Intelligent Design Theory is a study of how information increases in the genome of a population, resulting in greater complexity.  Since ID Theory is a study of information, it overlaps with computer science, information theory, and bioinformatics.  The information ID Theory is interested in is genetic.  To better understand the design inference drawn from observing DNA, ID Theory studies “Complex Specified Information,” aka CSI.

There is nothing about ID Theory that refutes evolution or is anti-science.  There are many scientists who embrace ID Theory, including those who are nonreligious or atheists.   See the technical definitions for Intelligent Design here.

A problem that one might raise with discussing the strawman arguments against ID Theory is that many people are confused as to just exactly what correctly represents ID Theory.  Some might think that there is no general consensus as to the definition of Intelligent Design.  For example, some might decide that there are so many creationists who are fans of ID, that ID must somehow automatically conform to creationism by default.  This is not true, and should be a simple enough dilemma to resolve.  While it is true that ID Theory has been hijacked by zealous creationists who jump on the ID bandwagon to further the creationism cause, ID does not support creationism, but refutes creationism.

It was the Intelligent Design fellows of the Discovery Institute who coined the term “Intelligent Design,” and who provided the first hypotheses and definitions that comprise what ID Theory is today.  That being the case, ID Theory should be defined exactly as the Discovery Institute fellows describe it.

It should be understood that ID Theory has withstood a barrage of stark criticism since Michael Behe’s book, Darwin’s Black Box (1996).  It was in this book that Michael Behe first introduced and coined the term irreducible complexity.  Intelligent Design gained even greater attention from the highly publicized Dover trial in 2005.  In spite of intense hostile review by the mainstream scientific community, Intelligent Design theory has not been falsified; this is especially true of the hypotheses related to irreducible complexity.  This assertion comes straight from Michael Behe himself, and can be reviewed here.

One strawman that is a favorite among Darwinists who attack ID Theory is the false accusation that Intelligent Design is creationism.  This false claim is largely based upon the court ruling in the famous 2005 federal case, Kitzmiller v. Dover.  However, nothing could be further from the truth.  One should bear in mind that the Kitzmiller v. Dover case is a ruling as a matter of Constitutional law, not science.

Another source of the imaginary strawman caricature of ID Theory as creationism is the promotion of this false claim by the NCSE, the organization directed by Eugenie Scott, who routinely campaigns on the claim that ID Theory is creationism.

The fact is that although there are similarities between ID Theory and Creationism, the two are very much distinct from each other.  Here are some examples of the most obvious differences between ID Theory and Creationism.

Biblical Creationism is based upon religion; ID does not recognize any ideology.

Biblical Creationism is based upon the Bible; ID is not.

Biblical Creationism is based upon the Book of Genesis; ID is not.

Biblical Creationism is based upon philosophy; ID is not.

Biblical Creationism holds the Creator is the God of Israel; ID does not.

Biblical Creationism holds the Earth was created in six days; ID does not.

Creation science is primarily based upon geology and the fossil record; ID is based upon biochemistry.

Creation websites quote Bible verses; ID websites do not.

Creationists refute evolutionary theory; ID does not.

Biblical Creationism is an interpretation of the Book of Genesis; ID does not recognize the Bible.

Biblical Creationism requires a deity; ID does not.

Biblical Creationism identifies a designer; ID does not.

Biblical Creationism explains everything; ID does not.

Biblical Creationism offers macroevolution as an hypothesis; ID offers irreducible complexity.

Biblical Creationism relies heavily on the geologic fossil record pointing at gaps, and complaining of absence of transitional life forms; ID shuns arguments from ignorance positions.

Eugenie Scott, Director of the NCSE, Promotes a Strawman Version of ID Theory

ID advocates in favor of evolution; Creationism refutes evolution.

ID advocates in favor of common descent; Creationism refutes common descent.

ID is a study of genetic information; Biblical Creationism studies flood geology.

ID is active in applied sciences such as biomimicry; Biblical Creationism does not have an applied science.

How does Intelligent Design describe its own theory?

The technical definitions of Intelligent Design theory are available here.   You can also click here for the official Discovery Institute version of the definition.

ID makes its own affirmative predictions. ID Theory is not interested in what evolution and its mechanisms cannot do. What evolution doesn’t do is irrelevant. What ID Theory is interested in is what the mechanisms of information increase and design DO perform, and how. This is bioinformatics, and for examples of this work, you might like to review http://www.evoinfo.org/.


Posted in LOGIC FALLACIES | Leave a comment