COMPLEX SPECIFIED INFORMATION (CSI) – An Explanation of Specified Complexity

This entry is a sequel to my original blog essay on CSI, which offered a more elementary discussion and can be reviewed here.  Complex Specified Information (CSI) is also called specified complexity.  The concept is not original to Dr. William Dembski; specified complexity was first noted in 1973 by origin-of-life researcher Leslie Orgel:

Living organisms are distinguished by their specified complexity. Crystals fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity. [ L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189. Emphases added.]

Before beginning this discussion of CSI, it is worth understanding why the concept matters.  The scientific theory of Intelligent Design (ID) is based upon important concepts such as design, information, and complexity.  Design in the context of ID Theory is discussed in terms of CSI.  The following is the definition of ID Theory:

Intelligent Design Theory in Biology is the scientific theory that artificial intervention is a universally necessary condition of the first initiation of life, development of the first cell, and increasing information in the genome of a population leading to greater complexity evidenced by the generation of original biochemical structures.

Authorities:

* Official Discovery Institute definition: http://www.intelligentdesign.org/whatisid.php
* Stephen Meyer’s definition: http://www.discovery.org/v/1971
* Casey Luskin’s Discussion: http://www.evolutionnews.org/2009/11/misrepresenting_the_definition028051.html
* William Dembski’s definition: http://www.uncommondescent.com/id-defined

Please observe that this expression of ID Theory does not appeal to any intelligence or designer. Richard Dawkins was correct when he said that what is thought to be design is illusory. Design is defined by William Dembski as Complex Specified Information (CSI).

“Intelligent Design” is an extremely anthropomorphic concept in itself.  The Discovery Institute does not work much with the term “intelligence.” The key to ID Theory is not in the term “intelligence,” but in William Dembski’s work in defining design. And, that is “Complex Specified Information” (CSI). It’s CSI that is the technical term that ID scientists work with. Dembski produced the equations, ran the calculations, and provided a scientifically workable method to determine whether some phenomenon is “designed.” According to Dembski’s book, “The Design Inference” (1998), CSI is based upon statistical probability.

CSI is based upon the theorem:

sp(E) and SP(E) -> D(E)

When a small probability (SP) event (E) is complex, and

SP(E) = [P(E|I) < the Universal Probability Bound]. Or, in English: we know an event E is a small-probability event when the probability of E given I is less than the Universal Probability Bound, where I is all relevant side information and all stochastic hypotheses. This is all in Dembski's book, "The Design Inference."

An event E is specified by a pattern independent of E, expressed mathematically as sp(E). Upper-case SP(E) denotes the small-probability event we are attempting to determine is CSI, or designed. Lower-case sp(E) denotes a prediction that we will discover the SP(E). Therefore, if sp(E) and SP(E), then D(E). D(E) means the event E is not only of small probability, but we can conclude it is designed.
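
Stated as a rule, this inference is simple enough to put into a few lines of code. The sketch below is my own illustration of the decision rule described above, not Dembski's code; the function name and the boolean "specified" flag are assumptions made for clarity.

```python
# A minimal sketch of the design-inference rule described above.
# Assumptions (mine, for illustration): the Universal Probability Bound is
# taken as 0.5 x 10^-150, and specification is supplied as a boolean flag.

UNIVERSAL_PROBABILITY_BOUND = 0.5e-150

def infer_design(p_event_given_info: float, is_specified: bool) -> bool:
    """Return True when both SP(E) and sp(E) hold, i.e. when D(E) is inferred.

    p_event_given_info -- P(E|I), the probability of the event given all
                          relevant side information and stochastic hypotheses
    is_specified       -- whether an independent pattern sp(E) exists
    """
    small_probability = p_event_given_info < UNIVERSAL_PROBABILITY_BOUND  # SP(E)
    return small_probability and is_specified                             # D(E)

# Examples:
print(infer_design(1e-200, True))   # True:  improbable and specified -> D(E)
print(infer_design(1e-200, False))  # False: no independent specification
print(infer_design(1e-10, True))    # False: not improbable enough
```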

Royal Flush

Dembski’s Universal Probability Bound = 0.5 x 10–150, or 0.5 times 10 to the exponent negative 150 power. This is the magic number when one can scientifically be justified to invoke design. It’s been said that using Dembski’s formula, the probability that Dembski states must be matched in order to ascribe design is to announce in advance before dealing that you are going to be dealt 24 Royal Flushes in a row, and then the event plays out just exactly as the advance forecast. In other words, just as intelligence might be entirely illusory, so likewise CSI is nothing other than a mathematical ratio that might not have anything in the world to do with actual design.

The odds against dealing a Royal Flush, given all combinations of 5 cards randomly drawn from a full deck of 52 without replacement, are 649,739 to 1. According to Dembski, if someone were dealt a Royal Flush 24 times in a row after announcing in advance that such a thing would happen, his contention would be that the outcome is so improbable that someone cheated; "design" would have had to be involved.
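
The card arithmetic is easy to check. The snippet below is my own back-of-the-envelope calculation, not Dembski's: it verifies the 649,739-to-1 odds and shows how the joint probability of consecutive, pre-announced Royal Flushes compares with the quoted Universal Probability Bound.

```python
from math import comb, log10

# Probability of a Royal Flush in a 5-card deal from a standard 52-card deck:
# 4 favourable hands (one per suit) out of C(52, 5) possible hands.
royal_flush_p = 4 / comb(52, 5)
print(f"P(royal flush) = {royal_flush_p:.3e}")                 # ~1.539e-06
print(f"Odds against   = {1 / royal_flush_p - 1:,.0f} to 1")   # 649,739 to 1

# Probability of n consecutive, independently dealt Royal Flushes,
# compared against the Universal Probability Bound quoted above.
UPB = 0.5e-150
for n in (24, 25, 26):
    p_n = royal_flush_p ** n
    print(f"{n} in a row: about 1e{log10(p_n):.0f}; below the bound? {p_n < UPB}")
```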

(I should say parenthetically that I am oversimplifying CSI here just to make the point that we have already given the term "design" a technical definition requiring no intelligence or design as those words are used in everyday speech. Therefore, taking issue with the ingenious marketing term "Intelligent Design" is meaningless, because the label attached to the theory is irrelevant.  Such a dispute is nothing other than haggling about nomenclature. The Discovery Institute could have labeled its product by any name. I would have preferred the title "Bio-information Theory," but the name is inconsequential.)

A helpful way to understand CSI is this: just as it is improbable to be dealt a Royal Flush, natural selection is up against a comparable level of difficulty in producing what appears to be designed in nature. And when CSI is observed in nature, which occurs occasionally, it not only confirms ID predictions and defies Darwinian gradualism, but also gives a scientist a clue that such a feature might be evidence of additional ID-related mechanisms at work.

It is true that William Dembski's theorems are based upon the assumption that we can quantify everything in the universe; no argument there. But he only used that logic to derive his Universal Probability Bound, which is a nearly infinitesimally small number: 0.5 × 10^-150, or 0.5 times 10 to the negative 150th power.  Do you not think that when a probability is this low, it is a safe bet to invoke corruption of natural processes by an intelligent agency? The number is a useful number. If anyone believes the ratio Dembski submitted is flawed, I would ask that person to offer a different number that would more accurately eliminate chance in favor of design.

Design theorists are interested in studying complexity.  The more complex the information, the better.  CSI is best understood as a pattern so improbable that the chance of such a configuration occurring by sheer happenstance is extremely small. Dembski's formulas and theorems work best when the probability is extremely low.

It is a given that we do not know everything in the universe, including such intangible variables as dark matter, where neutrinos go when they zip in and out of our universe, and cosmic background radiation. Dembski was aware of these unknown variables; it isn't as if he ignored them when deriving his theorems. The Universal Probability Bound is not a perfectly absolute number, but it is a scientifically workable number, no less credible than the variables used to work the equations supporting the Big Bang Theory. So, if one seeks to disqualify CSI on the sole basis that we do not know everything in the universe, then by the same standard one has eliminated the Big Bang Theory as scientifically viable.

A religious person is welcome to invoke a teleological inference of a deity, but the moment one does that, they have departed from science and are entertaining religion.  CSI might or might not infer design. That's the whole point of Dembski's book, "The Design Inference." In the book he expands on the meaning of CSI and then presents his reasoning as to why CSI infers design. While those who reject ID Theory see invisible designers hiding behind every tree, the point Dembski makes is that we must first establish design to begin with.

The Delicate Arch in Utah.  Is this bridge a product of design?

The Delicate Arch in Utah is a natural bridge.  It is difficult to debate whether this is an example of specified complexity because some might argue the natural arch is “specified” in the sense that it is a natural bridge.

The arguments in favor of natural arches being specified would emphasize the meaningfulness and functionality of the monument as a bridge.  Also, the mere fact that attention is drawn to this particular natural formation is, in and of itself, a form of specification.

Arguments that such a natural arch is not specified would emphasize that human experience has already shown geological processes to be capable of producing such formations.  Also, a natural arch is a one-of-a-kind structure; no two arches resemble each other in such detail that one could be mistaken for the other.  Finally, the concept emphasized by William Dembski is that specification relates to a prediction.  In other words, if someone had drawn this arch in advance on a piece of paper without ever having seen the actual monument, and the land formation later discovered in Utah turned out to be an exact replica of the drawing, then design theorists would declare the information specified.

The meaning of the term specified is very important to understanding CSI.  The term "specified" in a certain sense refers, directly or indirectly, to a prediction. If someone deals you a Royal Flush, the pattern is complex. If you are dealt a Royal Flush several consecutive times, someone at the poker table is going to accuse you of cheating; the sequence is increasing in improbability and complexity.  A Royal Flush is specified because it is a pattern that many people are aware of and have identified in advance.

Now, if you or the dealer ANNOUNCES IN ADVANCE that they are going to deal you a Royal Flush, and sure enough it happens, then there is no longer any question that the target sequence was "specified."

Dembski explains how the notion of being specified might best be understood by discussing what he calls conditionally independent patterns.  In applying his Explanatory Filter, Dembski states:

The patterns that in the presence of complexity or improbability implicate a designing intelligence must be independent of the event whose design is in question. Crucial here is that patterns not be artificially imposed on events after the fact. For instance, if an archer shoots arrows at a wall and we then paint targets around the arrows so that they stick squarely in the bull’s-eyes, we impose a pattern after the fact. Any such pattern is not independent of the arrow’s trajectory. On the other hand, if the targets are set up in advance (“specified”) and then the archer hits them accurately, we know it was not by chance but rather by design (provided, of course, that hitting the targets is sufficiently improbable). The way to characterize this independence of patterns is via the probabilistic notion of conditional independence. A pattern is conditionally independent of an event if adding our knowledge of the pattern to a chance hypothesis does not alter the event’s probability under that hypothesis. The specified in specified complexity refers to such conditionally independent patterns. These are the specifications.  [From: William A. Dembski, The Design Revolution: Answering the Toughest Questions About Intelligent Design (Downers Grove, IL: InterVarsity Press, 2004), 81.]

Mount Rushmore is an example of CSI because it relays information that is both complex and specified. More than just complexity of a hillside, it features the specified information of identifiable U.S. presidents.

Is Mount Rushmore a product of natural processes or an intelligent cause?  Most people would likely agree that this rock formation in the Black Hills of South Dakota is the result of intelligent design.  I believe an intelligent agent is responsible for this rock formation, and that belief is based upon reasoning.  Notice that when you determined for yourself that this monument is a deliberately sculpted work by an intelligent cause (I assume you did so), you did not draw upon a religious view to arrive at that conclusion.

 

The crevices and natural coloration of the rock at Eagle Rock, California create a remarkable illusion of an eagle in flight.

Snowflakes are also considered when contemplating CSI.  Snowflakes are very complex, and appear to also be specified.  However, in spite of the great detail, recognizable pattern, and beauty of a snowflake, no two snowflakes are alike.  A snowflake would be specified if a second one were to be found that identically matched the first.


The shapes of snow crystals are due to the laws of physics, which determine their regular geometric six-pointed pattern.  As such, a snowflake has no CSI whatsoever, because snowflakes are produced by natural processes.  The snowflake is complex, but it is not complex specified information.  Meteorological conditions are also a factor in the shape a snow crystal takes, so snow is a product of both physical law and chance.  One more thing is worth noting about snow crystals: because they form from atmospheric conditions governed by the laws of physics, they are complex, yet they retain a degree of simplicity in spite of the countless configurations their shapes might take.

William Dembski has been challenged on snowflakes in the past by critics who see snowflakes as every bit as complex as other simple objects that are known to be designed.   It is true that the complexity of snow crystals makes them good candidates for evidence of design.  This is why the concept of being specified is so important.  However intricate the details found in snow, it is the lack of specificity that keeps snow crystals from being CSI.  The shortcut test of whether snowflakes are designed would be to find two snowflakes that are identical.  The probability for one snowflake to exist is 1 to 1; it is the small probability of an identical replica occurring a second time that would be evidence of design. This is what is meant by being specified.  Specification in the context of ID Theory is not mere intricacy of detailed patterns alone.

While some ID critics believe snowflakes refute Dembski's Explanatory Filter, because they take the extremely low probability to imply that snowflakes are designed, I see it as just the opposite.  It is the fact that we know, as a given, that snowflakes are not designed that should lend us confidence in the cases where the Explanatory Filter determines that some feature is designed.

This brings up an important point about CSI.  There are many instances where information is highly complex and appears to be specified as well, such as snowflakes.   Information can be arranged in varying degrees of complexity and specificity.   Yet it is only CSI when the improbability reaches the Universal Probability Bound.  So what do we call something that looks like CSI but is not, because the pattern is determined not to be designed upon application of Dembski's Explanatory Filter?   When a pattern looks like it might be CSI but actually is not, as with snowflakes, Dembski calls this specificational complexity.

Dembski explains:

Because they are patterns, specifications exhibit varying degrees of complexity. A specification’s degree of complexity determines how many specificational resources must be factored in when gauging the level of improbability needed to preclude chance (see the previous point). The more complex the pattern, the more specificational resources must be factored in. The details are technical and involve a generalization of what mathematicians call Kolmogorov complexity. Nevertheless, the basic intuition is straightforward. Low specificational complexity is important in detecting design because it ensures that an event whose design is in question was not simply described after the fact and then dressed up as though it could have been described before the fact.

To see what’s at stake, consider the following two sequences of ten coin tosses: HHHHHHHHHH and HHTHTTTHTH. Which of these would you be more inclined to attribute to chance? Both sequences have the same probability, approximately 1 in 1,000. Nevertheless, the pattern that specifies the first sequence is much simpler than the second. For the first sequence the pattern can be specified with the simple statement “ten heads in a row.” For the second sequence, on the other hand, specifying the pattern requires a considerably longer statement, for instance, “two heads, then a tail, then a head, then three tails, then heads followed by tails and heads.” Think of specificational complexity (not to be confused with specified complexity) as minimum description length. (For more on this, see <www.mdl-research.org>.)

For something to exhibit specified complexity it must have low specificational complexity (as with the sequence HHHHHHHHHH, consisting of ten heads in a row) but high probabilistic complexity (i.e., its probability must be small). It’s this combination of low specificational complexity (a pattern easy to describe in relatively short order) and high probabilistic complexity (something highly unlikely) that makes specified complexity such an effective triangulator of intelligence. But specified complexity’s significance doesn’t end there.

Besides its crucial place in the design inference, specified complexity has also been implicit in much of the self-organizational literature, a field that studies how complex systems emerge from the structure and dynamics of their parts. Because specified complexity balances low specificational complexity with high probabilistic complexity, specified complexity sits at that boundary between order and chaos commonly referred to as the “edge of chaos.” The problem with pure order (low specificational complexity) is that it is predictable and thus largely uninteresting. An example here is a crystal that keeps repeating the same simple pattern over and over. The problem with pure chaos (high probabilistic complexity) is that it is so disordered that it is also uninteresting. (No meaningful patterns emerge from pure chaos. An example here is the debris strewn by a tornado or avalanche.) Rather, it’s at the edge of chaos, neatly ensconced between order and chaos, that interesting things happen. That’s where specified complexity sits. [From: William A. Dembski, The Design Revolution: Answering the Toughest Questions About Intelligent Design (Downers Grove, IL: InterVarsity Press, 2004), 81.]
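
Dembski's coin-toss illustration is easy to check numerically. In the sketch below (my own illustration), run-length encoding stands in crudely for minimum description length, since Kolmogorov complexity itself is not computable: both sequences are equally improbable under a fair-coin model, but the all-heads sequence admits a far shorter description.

```python
from itertools import groupby

def run_length_description(seq: str) -> str:
    """A crude description of a sequence: the run lengths of identical symbols."""
    return "".join(f"{len(list(run))}{symbol}" for symbol, run in groupby(seq))

for s in ("HHHHHHHHHH", "HHTHTTTHTH"):
    p = 0.5 ** len(s)                          # probability under a fair coin
    desc = run_length_description(s)
    print(f"{s}: P = 1/{int(1 / p):,}, description '{desc}' ({len(desc)} symbols)")

# Output: both sequences have probability 1/1,024, but the first is described
# by '10H' (3 symbols) while the second needs '2H1T1H3T1H1T1H' (14 symbols).
```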

THAT’S WHY I NEVER WALK IN THE FRONT!

This Far Side cartoon is an illustration Michael Behe uses in his lectures to demonstrate that we can often deduce design.  Even though there is no trapper in sight, it is obvious from the scene that the snare was intentionally designed, as such machinery does not arise by sheer happenstance.  Behe also makes the point that no religious contemplation was required to conclude that someone deliberately set this trap, even though the agent who assembled the machine is absent.

This next image is extremely graphic, but it illustrates the same point.  Here, a Blue Duiker has been trapped in a snare.

Sometimes people just can’t decide whether a formation of information is a result of happenstance or intelligence.  A perfect example of what looks like design but might not be is the monuments on Mars.   Are the monuments on Mars caused by chance or design?  Are these formations on the planet natural processes or artificial?  Here’s some more interesting images that help someone better understand CSI in a simple way.

It’s also interesting to note that those antagonists who so quickly scoff at ID because of the unfair inference of designers are automatically conceding design as a given. The teleological inference works both ways, if design points to a designer, then designers require design. Without design, a designer does not exist.

As such, if one desires to oppose ID Theory, a preferable argument would be to insist design does not appear in nature, and abandon the teleological inference.

Here’s more on Complex Specified Information (CSI):

* From Discovery Institute, http://www.ideacenter.org/contentmgr/showdetails.php/id/832

* By Dembski himself, http://www.designinference.com/documents/02.02.POISK_article.htm

William Dembski’s book, “The Design Inference” (http://www.designinference.com/desinf.htm).   The Discovery Institute has written CSI here and here.

Darwinian mechanisms (which are based upon chance) will most likely not be the cause of CSI, because CSI is by definition a small-probability event.  CSI is not zero probability; it is small probability.  There is still a possibility that Darwinian mechanisms could produce CSI, but CSI is more likely to be caused by something that replaces the element of chance.  Darwinian mechanisms are based upon chance; CSI is a low-probability ratio that exposes the absence of chance.  Whatever that absence of chance is (call it intelligence, design, artificial intervention, quantum theory, an asteroid, abiogenesis, RNA self-replication, some unknown pre-life molecular configuration, epigenetics, or whatever) is the most likely cause of CSI.  As such, ID scientists assume that CSI = design.

In another book by Dembski, "No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence," he explains why he thinks CSI is also linked with intelligence.   He further discusses his views here.

CSI is a mathematical ratio of probability that exposes a small-probability event and removes chance from the equation.  And regardless of what you substitute to fill that vacuum, ID scientists substitute design.  So, in ID Theory, whenever the word "design" appears, it means CSI.  It is therefore misguided to impose designers onto the context of ID Theory, because the ID definition of design is none other than CSI.

The point is that ID scientists define design as CSI.  Therefore, skeptics of ID Theory should cease invoking designers, because all "design" means in terms of ID Theory is the mathematical absence of chance, expressed as a low probability ratio.

CSI is an assumption, not an argument.  CSI is an axiom postulated up front, based upon mathematical theorems; it is all couched in math.  Unless the small-probability ratio reaches zero, no one working out the calculations is going to say "cannot."  CSI is assumed to be design, and it is assumed that natural causes don't generate CSI, because CSI is by definition a small-probability event that favors the absence of chance.

We cannot be certain the source is an intelligent agency.  CSI is based upon probabilities.  There are many who credit Darwinian evolution as the source of complexity.  That is illogical when running the design-theorem calculations, but it is not impossible.  As Richard Dawkins has noted, design can be illusory.  The hypothesis that Darwinian evolution is the cause of some small-probability event SP(E) could be correct, but it is highly improbable according to the math.

One common demonstration to help people understand how CSI works is to take a letter sequence. This can be done with anything, but the common example is this pattern:

METHINKS•IT•IS•LIKE•A•WEASEL

This letter arrangement is used most often to describe CSI because the math has already been worked out. The bullets represent spaces. There are 27 possibilities at each location in a symbol string 28 characters in length. If the search were entirely random, the odds of hitting the target would be about 1 in 10^40 (10 to the 40th power, or 1 followed by 40 zeroes). That is a small probability. However, natural selection (NS) is smarter than that, and Richard Dawkins has shown with his simulation how cumulative selection reaches the target in an impressive 43 generations (http://evoinfo.org/papers/ConsInfo_NoN.pdf).

In this example, the improbability was only about 1 in 10^40; CSI requires an even smaller probability than that. If you take a pattern or model such as METHINKS•IT•IS•LIKE•A•WEASEL and keep adding information, you soon reach probabilities that fall within the domain of CSI.
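
To put numbers on this, the sketch below (my own toy implementation, not Dawkins' program) computes the size of the 27^28 search space and runs a minimal cumulative-selection search toward the same target, written with plain spaces instead of bullets. The population size and mutation rate are assumed parameters, so the generation count will differ somewhat from the 43 reported for Dawkins' run.

```python
import random
import string
from math import log10

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "        # 27 symbols per position

# Blind random search: one chance in 27^28 of hitting the target per attempt.
space = len(ALPHABET) ** len(TARGET)
print(f"search space is roughly 10^{log10(space):.1f}")   # about 10^40

def weasel(pop_size=100, mut_rate=0.05, seed=0):
    """Cumulative selection: keep the best of many mutated copies each generation."""
    rng = random.Random(seed)
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while parent != TARGET:
        generation += 1
        children = [
            "".join(c if rng.random() > mut_rate else rng.choice(ALPHABET)
                    for c in parent)
            for _ in range(pop_size)
        ]
        # Fitness = number of positions matching the target.
        parent = max(children, key=lambda s: sum(a == b for a, b in zip(s, TARGET)))
    return generation

print(f"cumulative selection reached the target in {weasel()} generations")
```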

For more of Dembski’s applications using his theorems, you might like to reference these papers:

http://evoinfo.org/papers/2010_TheSearchForASearch.pdf

http://marksmannet.com/RobertMarks/REPRINTS/2010-EfficientPerQueryInformationExtraction.pdf

This is a continuation of Claude Shannon’s work. One of the most important contributors to ID Theory is American mathematician Claude Shannon (http://en.wikipedia.org/wiki/Claude_Shannon), who is considered to be the father of Information Theory (http://en.wikipedia.org/wiki/Information_Theory). Essentially, ID Theory is a sub-theory of Information Theory in the field of Bioinformatics. This is one of Dembski’s areas of expertise, http://evoinfo.org/publications/.

When Robert Deyes wrote a review of Stephen Meyer's "Signature In The Cell," he noted, "When talking about 'information' and its relevance to biological design, Intelligent Design theorists have a particular definition in mind."   Stephen Meyer explained in "Signature In The Cell" that information is "the attribute inherent in and communicated by alternative sequences or arrangements of something that produce specific effects" (p. 86).

Shannon was instrumental in the development of computer science. He built Theseus, an early maze-solving robotic mouse, and wrote one of the first papers on programming a computer to play chess. When Shannon unveiled his theory for quantifying information, it included several axioms, one of which is that the information carried by an outcome is inversely related to its probability. Similarly, design can be contrasted with chance.
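
For concreteness, the basic Shannon quantity at work here is the self-information (surprisal) of an outcome, -log2 of its probability: the less probable the outcome, the more bits it carries. A minimal illustration with example values of my own choosing:

```python
from math import log2

def surprisal_bits(p: float) -> float:
    """Shannon self-information of an outcome with probability p, in bits."""
    return -log2(p)

print(surprisal_bits(0.5))          # 1.0 bit   (a fair coin flip)
print(surprisal_bits(1 / 52))       # ~5.7 bits (one particular card from a deck)
print(surprisal_bits(1 / 649_740))  # ~19.3 bits (a Royal Flush deal)
```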

It was Peter Medawar who referred to these theorems as "The Law of Conservation of Information." Dembski's critics have accused his applications of being too heavily modified to be associated with the Law of Conservation of Information. There is no dispute that Dembski applied modifications; he modified the ideas in order to apply them to biology.

FROM UNCOMMON DESCENT GLOSSARY:

The Uncommon Descent blog further notes the following regarding CSI:

The concept of complex specified information helps us understand the difference between (a) the highly informational, highly contingent aperiodic functional macromolecules of life and (b) regular crystals formed through forces of mechanical necessity, or (c) random polymer strings. In so doing, they identified a very familiar concept — at least to those of us with hardware or software engineering design and development or troubleshooting experience and knowledge. Furthermore, on massive experience, such CSI reliably points to intelligent design when we see it in cases where we independently know the origin story.

What Dembski did with the CSI concept starting in the 1990s was to:

(i) recognize CSI’s significance as a reliable, empirically observable sign of intelligence,

(ii) point out the general applicability of the concept, and

(iii) provide a probability and information theory based explicitly formal model for quantifying CSI.

(iv) In the current formulation, as at 2005, his metric for CSI, χ (chi), is:

χ = –log2[10^120 · ϕS(T) · P(T|H)]

P(T|H) is the probability of being in a given target zone in a search space, on a relevant chance hypothesis (e.g., the probability of a hand of 13 spades from a shuffled standard deck of cards).

ϕS(T) is a multiplier based on the number of similarly simply and independently specifiable targets (e.g., having hands that are all Hearts, all Diamonds, all Clubs, or all Spades).

10^120 is the Seth Lloyd estimate for the maximum number of elementary bit-based operations possible in our observed universe, serving as a reasonable upper limit on the number of search operations.

log2[...] converts the modified probability into a measure of information in binary digits, i.e., specified bits. When this value is at least +1, we may reasonably infer the presence of design from the evidence of CSI alone. (For the example being discussed, χ = –361; i.e., odds of 1 in 635 billion are insufficient to confidently infer design on the gamut of the universe as a whole. But on the gamut of a card game here on Earth, that would be a very different story.) http://www.uncommondescent.com/glossary/
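
The glossary's card example can be reproduced directly from the formula above. The snippet below is a straightforward transcription on my part, not an official calculator; ϕS(T) is taken as 4 for the four all-one-suit hands.

```python
from math import comb, log2

def chi(phi_s, p_t_given_h, ops_bound=10 ** 120):
    """The 2005 CSI metric quoted above: chi = -log2[10^120 * phiS(T) * P(T|H)]."""
    return -log2(ops_bound * phi_s * p_t_given_h)

# Target T: a dealt hand of 13 spades; chance hypothesis H: a fair shuffle.
p_all_spades = 1 / comb(52, 13)     # 1 in ~635 billion
phi = 4                             # all-Spades, all-Hearts, all-Diamonds, all-Clubs
print(f"P(T|H) = 1 in {comb(52, 13):,}")
print(f"chi    = {chi(phi, p_all_spades):.0f} bits")   # about -361: no design inference
```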

FSCI, "functionally specified complex information" (or "function-specifying complex information" or, rarely, "functionally complex, specified information" [FCSI]), is commonplace in engineered systems: complex functional entities that are based on specific target-zone configurations and operations of multiple parts, with large configuration spaces equivalent to at least 500-1,000 bits; i.e., well beyond the Dembski-type universal probability bound.

In the UD context, it is often seen as a descriptive term for a useful subset of CSI first identified by origin-of-life researchers in the 1970s and 80s. As Thaxton et al. summed up in their 1984 technical work that launched the design-theory movement, The Mystery of Life's Origin:

. . . "order" is a statistical concept referring to regularity such as might characterize a series of digits in a number, or the ions of an inorganic crystal. On the other hand, "organization" refers to physical systems and the specific set of spatio-temporal and functional relationships among their parts. Yockey and Wickens note that informational macromolecules have a low degree of order but a high degree of specified complexity. [TMLO (FTE, 1984), Ch 8, p. 130.]

So, since in the cases of known origin such are invariably the result of design, it is confidently but provisionally inferred that FSCI is a reliable sign of intelligent design. http://www.uncommondescent.com/glossary/
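
One way to make the 500-1,000 bit figure concrete is to measure a configuration space in bits and compare it against that threshold. The sketch below is my own framing of the glossary's numbers, not an official FSCI calculation; the 250-residue protein is an assumed example.

```python
from math import log2

def configuration_bits(num_positions: int, symbols_per_position: int) -> float:
    """Information capacity of a configuration space, in bits: log2(symbols^positions)."""
    return num_positions * log2(symbols_per_position)

FSCI_THRESHOLD_BITS = 500   # lower end of the 500-1,000 bit range quoted above

# Example: a 250-residue protein chain drawn from the 20 standard amino acids.
bits = configuration_bits(250, 20)
print(f"{bits:.0f} bits of configuration space; exceeds the 500-bit threshold: "
      f"{bits > FSCI_THRESHOLD_BITS}")
```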


One Response to COMPLEX SPECIFIED INFORMATION (CSI) – An Explanation of Specified Complexity

  1. I quote:

    CSI is based upon statistical probability.

    Probabilities only work for repeating events, like throwing a die, picking a card, or meteors hitting the Earth. A unique event has no defined probability.
