UNIVERSAL COMMON DESCENT

THE SCIENTIFIC METHOD IS BASED UPON TESTING A FALSIFIABLE HYPOTHESIS:

This essay sets out a case in favor of the scientific theory of universal common ancestry.  One of the most dubious challenges to universal common descent I have reviewed is Takahiro Yonezawa and Masami Hasegawa, “Some Problems in Proving the Existence of the Universal Common Ancestor of Life on Earth,” The Scientific World Journal, 2011.  While there is nothing wrong with the data and points raised in this article, it is not the objective of science to “prove” a theory.  Moreover, identifying the universal common ancestor is not the focus of the theory of universal common descent.

The scientific method is based upon testing a falsifiable hypothesis.  In science, researchers do not experiment to “prove” theories; they test a hypothesis in order to try to falsify its predictions.  All we can do is continue to test gravity to determine whether Einstein’s predictions hold. We can never “prove” Einstein was right, because his equations might not work everywhere in the universe, such as inside a black hole.

When an experiment fails to falsify the hypothesis, all we can conclude is that the theory has been confirmed one more time. The theory is never ultimately proven. If it were possible to prove a theory to be ultimately true, like a law of physics, then it would not be a scientific theory, because a theory or hypothesis must be falsifiable.

The theory of universal common descent (UCD) is challenged with formal research by multiple biology and biochemistry departments around the world, and there is a substantial amount of scientific literature in this area of research.  The fact that, after all this time, the proposition of UCD has not been falsified makes a persuasive case that the claim has merit.  That is all science can do.

I make this point because, when we explore controversial topics, far too often some individuals raise erroneous objections, such as demanding empirical data to “prove” some conjecture.  That is not how science works.  All the scientific method can do is demonstrate that a prediction is false; science can never prove a theory to be absolutely true.

Having said that, there are scientists who nevertheless attempt to construct a complete Tree of Life.  This is done in an ambitious attempt to “prove” the theory is true, even to the point of the fanciful hope of identifying the actual universal common ancestor.   Many of the attacks on the theory of common descent are criticisms noting the incompleteness of the data.  But an incomplete tree does not falsify the theory.

This is important to understand because there is no attempt being made here to prove universal common descent.  All that is going to be shown here is that UCD as a scientific theory has not been falsified, and it remains an entirely solid theory regardless of whether UCD is actually true or not.

IS UNIVERSAL COMMON ANCESTRY FALSIFIABLE?

What would it take to prove universal common descent false?

Common ancestry would be falsified if we discovered a form of life that was not related to all other life forms. For example, finding a life form that does not have the nucleic acids (DNA and RNA) would falsify the theory. Other ways to falsify universal common descent would be:

• If someone found a unicorn, that would falsify universal common descent.

• If someone found a Precambrian rabbit, that would likely falsify universal common descent.

• If it could be shown that mutations are not inherited by successive generations, that would falsify universal common descent.

One common misunderstanding people have about science is the idea that science somehow proves certain predictions to be correct.

All life forms fall within a nested hierarchy. Of the hundreds of thousands of specimens that have been tested, every single one falls within the nested hierarchy, or else its phylogeny has not yet been sequenced and remains unknown.

SCIENCE PAPERS THAT SUPPORT UNIVERSAL COMMON DESCENT:

Here is just the tip of the iceberg of scientific papers that indicate the validity of UCD:

• Steel, Mike; Penny, David (2010). “Origins of life: Common ancestry put to the test“. Nature 465 (7295): 168–9.

• Theobald, Douglas L. (13 May 2010). “A formal test of the theory of universal common ancestry.” Nature 465 (7295): 219–222.

• Glansdorff, N; Xu, Y; Labedan, B (2008). “The last universal common ancestor: emergence, constitution and genetic legacy of an elusive forerunner.” Biology Direct 3 (1): 29.

• Brochier, Céline; Bapteste, Eric; Moreira, David; Philippe, Hervé (2002). “Eubacterial phylogeny based on translational apparatus proteins.” Trends in Genetics 18 (1).

• Baldauf, S. L., Roger, A. J., Wenk-Siefert, I., and Doolittle, W. F. (2000) “A kingdom-level phylogeny of eukaryotes based on combined protein data.” Science 290: 972-7.

• Brown, J. R., Douady, C. J., Italia, M. J., Marshall, W. E., and Stanhope, M. J. (2001) “Universal trees based on large combined protein sequence data sets.” Nature Genetics 28: 281-285.

The above papers are often cited in support of universal common descent. Anyone suggesting they have been overturned or are outdated needs to provide documentation.

Darwin’s First Sketch of a Cladogram

NESTED HIERARCHIES AND BASIC PHYLOGENETICS:

A logical prediction inspired by common descent is that the history of all biological development will resemble a tree, which is called the Tree of Life. Evolution will then generate unique, nested, hierarchical patterns through a branching scheme. Most existing species can be organized rather easily into a nested hierarchical classification.

Figure 1. Parts of a Phylogenetic Tree

Figure 1 displays the various parts of a phylogenetic tree.  Nodes are where branches meet, and they represent the common ancestor of all taxa beyond the node. Any life form that has reproduced has a node that fits properly onto the phylogenetic tree. If two taxa share a closer node than either shares with a third taxon, then those two share a more recent common ancestor with each other than with the third.
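To make the node concept concrete, here is a minimal sketch in Python (my own illustration, not taken from any of the literature discussed here); the taxon names and the toy topology are hypothetical.

class Node:
    """One node in a rooted phylogenetic tree; the root has parent=None."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

def path_to_root(node):
    """Collect the nodes from a taxon up to the root."""
    path = []
    while node is not None:
        path.append(node)
        node = node.parent
    return path

def most_recent_common_ancestor(a, b):
    """The first node shared by the two root-ward paths."""
    ancestors_of_a = {id(n) for n in path_to_root(a)}
    for n in path_to_root(b):
        if id(n) in ancestors_of_a:
            return n
    return None

# Toy topology: ((human, chimp), mouse)
root = Node("root")
primates = Node("primates", parent=root)
human = Node("human", parent=primates)
chimp = Node("chimp", parent=primates)
mouse = Node("mouse", parent=root)

print(most_recent_common_ancestor(human, chimp).name)  # primates
print(most_recent_common_ancestor(human, mouse).name)  # root

Human and chimp share a closer node (primates) than either shares with mouse, so they share a more recent common ancestor with each other than with mouse, which is exactly the reading of a tree described above.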

Falsifying Common Descent:

It would be very problematic if many species were found that combined characteristics of different nested groupings. Some nonvascular plants could have seeds or flowers, like vascular plants, but they do not. Gymnosperms (e.g., conifers or pines) could occasionally be found with flowers, but they never are. Non-seed plants, like ferns, could be found with woody stems, but living ferns are not woody; among living plants, woody stems are found only in seed plants (gymnosperms and some angiosperms).

Conceivably, some birds could have mammary glands or hair; some mammals could have feathers (they are an excellent means of insulation). Certain fish or amphibians could have differentiated or cusped teeth, but these are characteristic only of mammals.

A mix and match of characters would make it extremely difficult to objectively organize species into nested hierarchies. Unlike organisms, cars do have a mix and match of characters, and this is precisely why a nested hierarchy does not flow naturally from the classification of cars.

Figure 2. Sample Cladogram

In Figure 2, we see a sample phylogenetic tree. To falsify the theory, all a scientist has to do is find a life form that does not fit the hierarchical scheme in proper order. We can reasonably expect that yeasts will not secrete maple syrup.  This model gives us a logical basis to predict that reptiles will not have mammary-like glands.  Plants won’t grow eyes or other animal-like organs. Crocs won’t grow beaver-like teeth. Humans will not have gills or tails.

Reptiles will not have external skeletons. Monkeys will not have a marsupial-like pouch. Amphibian legs will not grow octopus-like suction cups. Lizards will not produce apple-like seeds. Iguanas will not exhibit bird feathers, and on it goes.

The phylogenetic tree provides a basis to falsify common descent if, for example, rose bushes grow peach-like fuzz or sponges display millipede-like legs.  We will not find any unicorns or “crocoducks.”  We should never find genetic sequences in a starfish that would produce spider-like fangs.  An event such as a whale developing shark-like fins would falsify common descent.

While these are all ludicrous examples, in the sense that such phenomena would seemingly be impossible, the point is that any life form found with even the slightest cross-phylum, cross-family, or cross-genus body type would instantly falsify common descent. And it doesn’t have to be one of the known physical characteristics I just listed. It could be a skeletal change in the number of digits or ribs, or in their configuration.  The possibilities are countless; if such an unclassifiable life form were found, the theory of universal common descent would be falsified.

The falsification doesn’t have to be anything as dramatic as these examples. It could be something like when NASA thought it had discovered a new form of life in what was believed to be an arsenic-based bacterium at California’s Mono Lake. That would have been a good candidate for checking whether a life form had entirely changed its genetic code. Another example: according to UCD, none of the thousands of new and previously unknown insects that are constantly being discovered will have non-nucleic-acid genomes.

Certainly, if UCD is invalid, there must be life forms that acquire their characteristics from something other than their parents, and if this is so, their DNA will expose the anomaly. It is very clear when reviewing phylogenies that there is an unmistakable hierarchical structure indicating ancestral lineage. And all phylogenies are like this, without exception. All I ask is for someone to submit a single phylogeny showing a life form that has no parents, or whose offspring did not inherit its traits.  If such were the case, then there should be evidence of it.

METHODOLOGY OF FALSIFICATION:

The mathematics used to determine nested hierarchies today is complicated, in order to ensure that the results are accurate.  As a discipline, phylogenetics is being transformed by a flood of molecular data. These data allow broad questions to be asked about the history of life, but they also present difficult statistical and computational problems. Bayesian inference of phylogeny brings a new perspective to a number of outstanding issues in evolutionary biology, including the analysis of large phylogenetic trees, complex evolutionary models, and the detection of the footprint of natural selection in DNA sequences.
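As a rough illustration of the Bayesian logic involved, here is a minimal sketch (my own toy example, not drawn from the literature cited here). The two competing topologies and their likelihood values are hypothetical placeholders; real software such as MrBayes computes likelihoods from sequence data under an explicit substitution model and explores tree space with Markov chain Monte Carlo.

# Toy Bayesian comparison of two competing tree topologies.
priors = {"((A,B),C)": 0.5, "((A,C),B)": 0.5}            # equal prior belief
likelihoods = {"((A,B),C)": 1e-42, "((A,C),B)": 1e-45}   # P(data | topology), hypothetical

evidence = sum(priors[t] * likelihoods[t] for t in priors)            # P(data)
posteriors = {t: priors[t] * likelihoods[t] / evidence for t in priors}

for topology, p in posteriors.items():
    print(f"{topology}: posterior probability = {p:.4f}")
# The topology that explains the data better receives nearly all of the
# posterior probability; this is how support for one nested-hierarchy
# arrangement over another is quantified.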

As this discipline continues to be applied to molecular phylogenies, the prediction is continually confirmed, not falsified. All it would take is a single instance of the mix-and-match problem, a sequence that falls out of order and outside any nested hierarchy, and evolutionary theory would be falsified.

“ALL SCIENTIFIC THEORIES ARE SUPPOSED TO BE CHALLENGED”

Of course Charles Darwin’s hypothesis of UCD has been questioned.  All scientific predictions are supposed to be challenged. There’s a name for it: an experiment. The object is to falsify the hypothesis by testing it. If the hypothesis holds up, then it is confirmed, but never proven. The best science gives you is falsification. UCD has not been falsified; instead it remains extremely reliable.

When a hypothesis is confirmed after repeated experimentation, the scientific community might upgrade it to the status of a scientific theory.   A scientific theory is a hypothesis that has been continuously affirmed by substantial, repeated experiments and that has significant explanatory power for understanding phenomena.

Here’s another paper in support of UCD: Schenk, MF; Szendro, IG; Krug, J; de Visser, JA (Jun 2012), “Quantifying the adaptive potential of an antibiotic resistance enzyme.”  Many human diseases are not static phenomena; the viruses, bacteria, fungi, and cancers behind them are constantly evolving. These pathogens evolve resistance to host immune defences as well as to pharmaceutical drugs. (A similar problem occurs in agriculture with pesticides.)

The Schenk 2012 paper analyzes whether pathogens are evolving faster than available antibiotics, and it attempts to make better predictions of the evolvability of human pathogens in order to devise strategies to slow or circumvent destructive change at the molecular level. Success in this field of study is expected to save lives.

Antibiotics are an example of why phylogenetics must be applied in order to develop medical treatments and manufacture pharmaceutical products. Another application is testing for irreducible complexity. That is done by studying homologies across different phylogenies to determine whether two systems share a common ancestor. If one has no evolutionary pathway to a common ancestor, then it might be a case of irreducible complexity.

Another application is forensic science, where DNA is used to solve crimes. One case involved a murder suspect who was found guilty because he had parked his truck under a tree. A witness saw the truck at the time the crime took place. The suspect was linked to the crime scene because DNA from seeds that fell from that tree into the bed of the truck identified that particular tree to the exclusion of every other tree in the world.

DNA allows us to positively determine ancestors, and the margin for error is infinitesimally small.

TWIN NESTED HIERARCHY:

The term “nested” refers to confirmation that the specimen being examined is properly placed in the hierarchy on both sides of reproduction, that is, in relation both to its ancestors and to its progeny.  The term “twin” refers to the fact that nested hierarchy can be determined by both (1) genotype (molecular and genome-sequencing analysis) and (2) phenotype (visible morphological variation).

We can ask these four questions:

1. Does the specimen fit in a phenotype hierarchy on the ancestral side? Yes or no?

2. Does the specimen fit in a phenotype hierarchy relative to its offspring? Yes or no?

If both answers to 1 and 2 are yes, then nested hierarchy re phenotype is established.

3. Does the specimen fit in a genotype hierarchy on the ancestral side? Yes or no?

4. Does the specimen fit in a genotype hierarchy relative to its offspring? Yes or no?

If both answers to 3 and 4 are yes, then nested hierarchy re genotype is established.

All four answers should be yes every time, without exception. But the key is genotype (molecular), because the DNA doesn’t lie. We cannot be certain from visible morphological (phenotype) traits alone; once we sequence the genome, however, little uncertainty remains.
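The four-question check above can be summarized in a few lines of code. This is only my own sketch of the bookkeeping; the specimen and its yes/no answers are hypothetical.

from dataclasses import dataclass

@dataclass
class Specimen:
    phenotype_fits_ancestors: bool   # question 1
    phenotype_fits_offspring: bool   # question 2
    genotype_fits_ancestors: bool    # question 3
    genotype_fits_offspring: bool    # question 4

def twin_nested_hierarchy(s: Specimen) -> dict:
    """Report whether nested hierarchy is established for phenotype and genotype."""
    return {
        "phenotype_nested": s.phenotype_fits_ancestors and s.phenotype_fits_offspring,
        "genotype_nested": s.genotype_fits_ancestors and s.genotype_fits_offspring,
    }

# A specimen that answers yes on all four counts, as UCD predicts every specimen will:
print(twin_nested_hierarchy(Specimen(True, True, True, True)))
# A single specimen answering "no" on the genotype side would be the kind of
# counterexample that would falsify the prediction.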


CLADES AND TAXA:

A clade is essentially a complete branch of the analogous tree (for common descent, that tree is the Tree of Life): it starts where a branch leaves the trunk and includes everything that branch carries, from limbs to stems to every leaf at the extremities (each leaf representing a species). A taxon is a category or group. The trunk would be a taxon; the lower branches are a taxon; the higher limbs are a different taxon. It’s a rough analogy, but that’s the gist of it.

THE METHODOLOGY USED TO FALSIFY COMMON DESCENT IS BASED UPON NESTED HIERARCHY:

Remember that nucleic acids (DNA and RNA) are the same for all life forms, and that alone is a case that common descent goes all the way back to a single cell.

Mere similarity between organisms is not enough to support UCD. A nested classification pattern produced by a branching evolutionary process is much more specific than simple similarity.  A friend of mine recently showed me her challenge to UCD using a picture of a “phylogeny” of sports equipment:

Cladogram of sports balls

I pointed out to her that her argument is a false analogy: classifying physical items will not result in an objective nested hierarchy.

For example, it is impossible to classify objectively into nested hierarchies the elements of the Periodic Table, the planets in our Solar System, books in a library, cars, boats, furniture, buildings, or any other inanimate objects. Non-life forms do not reproduce, and therefore do not pass forward inherited traits from ancestors.

The point of using balls from popular sports is to argue that it is trivial to classify anything hierarchically in some subjective manner.  The illustration of the sports balls showed only that such a classification is entirely subjective. But this is not true of biological heredity. We KNOW from DNA whether or not one life form is the parent of another!

Inanimate objects, like cars, could be classified hierarchically, but the classification would be subjective, not objective. Perhaps the cars would be organized by color and then by manufacturer; another way would be to classify them by model year or size, and then color. So non-living items cannot be classified into an objective hierarchy, because any such system is entirely subjective. But life forms and languages are different.

In contrast, human languages do have common ancestors and are derived by descent with modification.  Nobody would reasonably argue that Spanish should be categorized with German instead of with Portuguese. Like life forms, languages fall into objective nested hierarchies.  A cladistic analysis of sports equipment, on the other hand, will not produce a unique, consistent, well-supported tree that displays nested hierarchies.

Carl Linnaeus, the famous Swedish botanist, physician, and zoologist, laid the foundations for the modern biological naming scheme of binomial nomenclature. When Linnaeus invented the classification system for biology, he discovered the objective hierarchical classification of living organisms.   He is often called the father of taxonomy.  Linnaeus also tried to classify rocks and minerals hierarchically, but those efforts failed because any nested hierarchy of non-biological items is entirely subjective.

“DNA doesn’t lie.”

Hierarchical classifications for inanimate objects don’t work for the very reason that, unlike organisms, rocks and minerals do not evolve by descent with modification from common ancestors. It is this inheritance of traits that provides an objective way to classify life forms, and it is nearly impossible for the results to be corrupted by humans, because DNA doesn’t lie.

To be clear: testing nested hierarchy for life forms works, and it confirms common descent. There is a great deal of scientific literature on this topic, and it all supports common descent and Darwin’s predictions. There is no design-inspired prediction for why life forms all conform to nested hierarchy. There is only one reason why they do: universal common ancestry.

The point about languages is that they can be classified objectively into nested hierarchies because they are inherited and passed on by descent with modification. No one is claiming that languages have a universal common ancestor; even if they do, it is beside the point.

In Kiyotaka Takishita et al. (2011), “Lateral transfer of tetrahymanol-synthesizing genes has allowed multiple diverse eukaryote lineages to independently adapt to environments without oxygen,” published in Biology Direct, the phylogenies of unicellular eukaryotes are examined to ascertain how they acquired sterol-related genes from bacteria in low-oxygen environments. In order to answer the question, the researchers had to construct a detailed cladogram for their analysis. My point here is that DNA doesn’t lie. All life forms fall within a nested hierarchy, and there is no paper in the scientific literature reporting a life form that does not conform to a nested hierarchy.

The prediction in this instance is that if evolution (as first observed by Charles Darwin) occurs, then all life may have descended from a common ancestor. This is not only a hypothesis; it is the basis for the scientific theory of universal common descent.

There is only one way I know of to falsify the theory of UCD, and that is to produce a life form that does not conform to nested hierarchy. All it takes is one.

DOES A COMB JELLY FALSIFY COMMON DESCENT?

One person I recently spoke to regarding this issue suggested that a comb jelly appears to defy common descent.  He presented me with a paper published in Nature in support of his view, entitled “The ctenophore genome and the evolutionary origins of neural systems” (Leonid L. Moroz et al., 2014).  Comb jellies might appear to be misclassified and not to conform to a hierarchy, but phylogenetically they fit just fine.

Going back to the early Cambrian period, there does seem to be an illusion that the phenotypes of life forms do not fall within a nested hierarchy. But their genotypes still do. The extremely different body types that emerge in the Cambrian might visually suggest they do not conform to a nested hierarchy, but the molecular analysis tells a much different story and confirms that they do.

To oppose my position, all that is necessary is for someone to produce one solitary paper published in a science journal that shows the claim of UCD to be false. Once a molecular analysis is done and the phylogenies are charted on a cladogram, all life forms, I repeat, all life forms, conform to nested hierarchies, and there is not one single exception. If there is, I am not aware of the paper.

Regarding the comb jelly discussed in Moroz (2014): if someone wants to argue that the comb jelly does not fit within a nested hierarchy, there is no content in this paper that supports that view.

For example, from Figure 3 in the article:

“Predicted scope of gene loss (blue numbers; for example, −4,952 in Placozoa) from the common metazoan ancestor. Red and green numbers indicate genes shared between bilaterians and ctenophores (7,771), as well as between ctenophores and other eukaryotic lineages sister to animals, respectively. Text on tree indicates emergence of complex animal traits and gene families.”

The authors conclude common ancestry and ascribe their surprise regarding the comb jelly to convergence, which is a separate issue that has no bearing on common ancestry.

The article refers to and assumes common metazoan ancestry; the common ancestry of the comb jelly is never once questioned in the paper.  The article only ascribes the new, so-called genetic blueprint to convergence.  The paper even draws up a cladogram, based upon common descent, for our convenience in understanding the comb jelly’s phylogeny.

The paper repeatedly affirms the common ancestry of the comb jelly, and only promotes a case for convergent evolution. It is an excellent study of the phylogeny of the comb jelly. There is nothing about the comb jelly that defies nested hierarchy. If there were, common descent would be falsified.

Universal common descent is the scientific theory that all life forms descended from a single common ancestor.  The theory would be falsified by demonstrating that the node (Figure 1) of some life form, upon examination of its phylogeny, does not fit within an objective nested hierarchy based upon inheritance of traits from one generation to the next via successive modifications. If someone desires to falsify UCD, all they need to do is present the paper that identifies such a life form. Of course, if such a paper existed, its author would be famous.

Any other evidence, regardless of how much merit it might have in indicating serious issues with UCD, does nothing to falsify UCD. If this claim is challenged, please (a) explain to me why, and (b) show me the scientific literature that confirms the assertion.

OTHER CHALLENGES TO, AND PROBLEMS WITH, UCD DO NOT FALSIFY DARWIN’S PREDICTION AS A SCIENTIFIC THEORY:

One paper that is often cited is W. Ford Doolittle, “Phylogenetic Classification and the Universal Tree,” Science, 25 June 1999; this is Doolittle (1999). I already cited Baldauf, S. L., Roger, A. J., Wenk-Siefert, I., and Doolittle, W. F. (2000) above. Doolittle is very optimistic about common descent, and does nothing to discourage its falsification. In fact, the whole point of Doolittle’s work is to improve the methodology so that future experimentation increases the reliability of the results.

In Figure 3 of the paper, Doolittle presents a drawing of the problems that arise during the early stages of the emergence of life:

In Doolittle 1999, the problems regarding lateral gene transfer (LGT), and how it distorts the earlier history of life, are fully discussed.  But once LGT is accounted for, the rest of the tree branches off as would be expected.

Alongside lateral gene transfer, taxonomists have identified 25 variant genetic codes, each with its own “operating system,” so to speak, among various groups of life. Many of them are mitochondrial codes, and they are non-standard relative to the code used by other clades in the phylogenetic tree of life.

The question is: do any of these 25 non-standard codes weaken the claim of a common ancestor for all life on Earth? The answer is no, because the existence of non-standard codes offers no support for a ‘multiple origins’ view of life on Earth.

Lineages that exhibit these 25 “variants,” as they are also often called, are clearly and unambiguously related to organisms that use the original universal code that traces back to the hypothetical LUCA. The 25 variant branches are distributed as small ‘twigs’ within the evolutionary tree of life, arising very early in its history. There is a diagram of this in my essay; I will provide it below for your convenience.

Anyone is welcome to disagree, but to do so requires the inference that, for example, certain groups of ciliates evolved entirely separately from the rest of life, including other types of ciliates. The hypothesis that the 25 variant codes arose independently, apart from a LUCA, is purely hypothetical, and I am aware of no paper that supports this conjecture. There are common-descent-denying creationists who argue this is so, but the claim is untenable and absent from the scientific literature.

Although correct, the criticism that the data break down the tree does nothing to falsify universal common descent.  In order to falsify UCD, one must show that a life form exists that does not conform to a nested hierarchy.

The fact that there are gaps in the tree, or that the tree is incomplete, or that phylogenetic information is missing, or that there are other methodological problems to be solved, does not change the fact that the theory remains falsifiable. And I have already submitted the simple criterion for falsification, and it has nothing to do with how completely one can construct the Tree of Life.

The abstract provides an optimistic summary of the findings in Doolittle 1999:

“Molecular phylogeneticists will have failed to find the “true tree,” not because their methods are inadequate or because they have chosen the wrong genes, but because the history of life cannot properly be represented as a tree. However, taxonomies based on molecular sequences will remain indispensable, and understanding of the evolutionary process will ultimately be enriched, not impoverished.”

There are many challenges to universal common descent, but to date no life form has been found that defies nested hierarchy.  Some of the challenges to common descent relate to the period when life first emerged, such as this 2006 paper published in Genome Biology, authored by Tal Dagan and William Martin, entitled “The Tree of One Percent.”

Similar problems are addressed in Doolittle 2006. The paper reads:

“However, there is no independent evidence that the natural order is an inclusive hierarchy, and incorporation of prokaryotes into the TOL is especially problematic. The only data sets from which we might construct a universal hierarchy including prokaryotes, the sequences of genes, often disagree and can seldom be proven to agree. Hierarchical structure can always be imposed on or extracted from such data sets by algorithms designed to do so, but at its base the universal TOL rests on an unproven assumption about pattern that, given what we know about process, is unlikely to be broadly true”

That paper does discuss hierarchy at length, but there is nothing in it that indicates its findings falsify common descent.  The article essentially makes the same points I made above when I explained the difference between a subjective nested hierarchy and an objective nested hierarchy in reference to the hierarchy of sports equipment.   This paper actually supports common descent.

CONCLUSION:

As a scientific theory, UCD is tested because that is what we are supposed to do in science: test theories. Of course UCD is going to be tested. Of course UCD is going to be challenged. Of course UCD is going to have some serious issues that are researched, analyzed, and discussed in the scientific literature. But that does not mean that UCD has been falsified.

This information should not alarm anyone who favors the scientific theory of intelligent design (ID).  ID scientists like Michael Behe accept common descent. I have no problem with it, and it really doesn’t have much bearing on ID one way or the other. Since the paleontologists, taxonomists, and molecular biologists who specialize in studying phylogenies accept universal common descent as confirmed, not falsified, I have very little difficulty concurring. That doesn’t mean I am not aware of some of the weaknesses of the conjecture of common descent.


ARTIFICIAL INTERVENTION

Intelligent Design is defined by the Discovery Institute as:

“THE THEORY OF INTELLIGENT DESIGN HOLDS THAT CERTAIN FEATURES OF THE UNIVERSE AND OF LIVING THINGS ARE BEST EXPLAINED BY AN INTELLIGENT CAUSE, NOT AN UNDIRECTED PROCESS SUCH AS NATURAL SELECTION” (http://www.intelligentdesign.org/).

The classic definition of ID Theory employs the term “intelligent cause.” Upon studying William Dembski’s work, which defines the ID Theory understanding of “intelligent cause” using Information Theory and mathematical theorems, I rephrased the term “intelligent cause” as “artificial intervention,” and I have written extensively about why it is a better term.

The two terms are synonymous; however, phrasing it the way I do helps the reader more readily understand the theory of intelligent design in the context of scientific reasoning.  In his book “The Design Inference” (1998), Dembski shows how design = specified complexity = complex specified information.  In “No Free Lunch” (2002), he expands upon the role of “intelligence.”

The word “intelligence” is little more than the default term for something that is not a product of known natural processes.  Design theorists predict that additional design mechanisms remain to be discovered, mechanisms that supplement evolution and work in conjunction with it.  Another term meaning just the opposite of natural selection is artificial selection.

There are two kinds of selection: natural selection and artificial selection.


Charles Darwin, famous for his book “Origin of Species,” coined the term “natural selection” and wrote about the difference between natural selection and artificial selection in other works touching on dog breeding.  He recognized that dog breeders carefully selected dogs with certain traits to mate with certain others to enhance favorable characteristics for purposes of winning dog shows. Thirteen years after Origin of Species, Darwin also wrote a book entitled “The Expression of the Emotions in Man and Animals.” The illustrations of dogs he used can be viewed here.

I wrote an essay about Darwin’s observations concerning dog breeding here.   Essentially, artificial selection = intelligence, in that the terms are interchangeable in the context of ID Theory. I didn’t want to use either term in the definition of ID, so I chose a phrase that carries the identical meaning: “artificial intervention.”

Artificial intervention contrasts with natural selection.  The inspiration that led Charles Darwin to coin the term “natural selection” was his observation of dog breeding.  Darwin saw how breeders select specific dogs to mate in order to enhance the most favorable characteristics and win dog shows.  That is when he realized that what happens in the wild is a selection process that is entirely natural, without any other kind of discretion factored in as a variable.

The moment any artificial action interrupts or interferes with natural processes, those natural processes have been corrupted. ID Theory holds that an information leak, which we call CSI, entered into the development of the original cell via some artificial source. It could be panspermia, quantum particles, quantum biology, natural genetic engineering (NGE), or other conjectures.  This is ID Theory by the very definition of ID Theory. All processes remain natural as before, except that an artificial intervention took place, which could have been a one-time event (the front-loading conjecture) or an ongoing one (e.g., NGE).

Panspermia is an example of artificial intervention.

One example of artificial intervention would be panspermia.  The reason is that the Earth’s biosphere is a closed system.  The concept of abiogenesis is based upon life originating on Earth: the famous Stanley Miller and Harold Urey experiments attempted to replicate the conditions of the primordial world believed to have existed on Earth.  Abiogenesis is a conjecture meant to explain how life naturally arose from non-life on Earth, assuming such an event ever occurred on this planet.

Panspermia, on the other hand, is an artificial intervention that transports life to Earth from a different source.  While panspermia does not necessarily reflect intelligence, it is still intelligent-like, in that an intelligent agent might consider colonizing planet Earth by transporting life to our planet from a different location in the universe.

I have been challenged much on this reasoning, with the objection being that artificial selection was understood by Darwin to be the product of human intelligence.  I can provide many arguments indicating there are perfectly acceptable natural mechanisms, entirely non-Darwinian, which, because they are independent of natural selection, must be “artificial selection” by default even if they are not the product of human intelligence. A good example would be an extraterrestrial intervention.  So this objection doesn’t concern me.

The objection that does concern me is when someone takes the ID understanding of “intelligence” to be non-natural. This is where I agree with Richard Dawkins when he writes that the “intelligence” of ID Theory is likely entirely illusory (http://www.naturalhistorymag.com/htmlsite/1105/1105_feature1_lowres.html).

This is yet another reason I prefer the term artificial intervention: it leaves room for the conventional understanding of intelligence, yet remains open to other natural mechanisms that remain to be discovered, and it sets these in contrast to already known natural processes that are essentially Darwinian. The term “Darwinian,” of course, means development by means of gradual, step-by-step, successive modifications, one small change at a time.

“Artificial Intervention” is a term I came up with four years ago, essentially as a synonym for the Intelligent Design phrase “Intelligent Cause.” When the theory comes under critical scrutiny, ID is often ridiculed because opponents demand evidence of actual intelligence. This request misses the point.

The idea of intelligent design is not restricted to requiring actual intelligence to be behind the other processes that achieve biological specified complexity independent of natural selection. The very fact that such processes exist confirms ID Theory, by definition of the theory. ID proponents expect there to be cognitive guidance taking place, and that appears very well to be the case. But the intelligence could be illusory.

Whether the intelligence is actual or simulated, the fact that there are other processes that defy development via gradual, Darwinian, step-by-step successive modifications confirms the very underlying prediction that defines the theory of Intelligent Design.

I wrote this essay to explain why the intelligence does not have to be actual intelligence. Any selection that is not natural selection is artificial selection, which is based upon intelligence, and therefore Intelligent Design. However, the point is moot because William Dembski already showed, using his No Free Lunch theorems, that specified complexity requires intelligence. Nevertheless, this essay is an explanation for those critics who are not satisfied that ID proponents deliver when asked to provide evidence of an “intelligent cause.”

The term “artificial intervention” is not necessary in order to define the scientific theory of intelligent design.  However, I believe it is quite useful for conveying “intelligent cause” in a deeper and more meaningful way without compromising scientific reasoning.


The Wedge Document

The Wedge document is more than 15 years old, and was written by a man who has long since retired from the ID community. One man’s motives are irrelevant to the science of ID Theory. Phillip Johnson is the one who came up with the Wedge document, and it is nothing other than a summary of personal motives, which have nothing directly to do with science. Johnson is 71 years old.  Johnson’s views do not reflect the younger generation of Intelligent Design (ID) advocates who are partial to approaching biology from a design perspective.

Phillip Johnson is the original author of the Wedge Document

Some might raise the Wedge document as evidence that there has been an ulterior motive. The Discovery Institute has a response to this as well.

The motives of Phillip Johnson are not shared by me or by other ID advocates, and they do not reflect the views or position of the ID community or the Discovery Institute. This would be similar to someone criticizing evolutionary theory on the grounds that Richard Dawkins approaches science with a bias because he is an atheist and political activist.

I. THE WEDGE AND POLITICAL VIEWS OF THE DISCOVERY INSTITUTE ARE A SOCIAL AND IDEOLOGICAL ARGUMENT IRRELEVANT TO THE SCIENTIFIC METHOD.

Some critics would contend the following:

“With regards to how this is relevant, one part of the Discovery Institute’s strategy is the slogan ‘teach the controversy.’  This slogan deliberately tries to make opponents look like they are against teaching ‘all’ of science to students.”

How can such an appeal be objectionable? This is a meaningless point of contention. I don’t know whether the slogan, “teach the controversy” does indeed “deliberately” try “to make opponents look like they are against teaching ‘all’ of science to students.” That should not be the issue.

My position is this:

1. The slogan is harmless, and should be the motto of any individual or group interested in education and advancement of science. This should be a universally accepted ideal.

2. I fully believe and am entirely convinced that the mainstream scientific community does indeed engage in censorship, and presents a one-sided and therefore distorted portrayal of the facts and empirical data.

The fact remains that Intelligent Design is a scientifically fit theory that is about information, not designers.  ID is largely based upon the work of William Dembski, who introduced the concept of Complex Specified Information in 1998.  In 1996, biochemist Michael Behe championed the ID-inspired hypothesis of irreducible complexity.  It has been 17 years since Behe made the predictions of irreducible complexity in his book “Darwin’s Black Box,” and to this day the four systems he proposed to be irreducibly complex have not been falsified after thorough examination by molecular biologists.  Those four biochemical systems are the blood-clotting cascade, the bacterial flagellum, the immune system, and the cilium.


II. EXCERPTED QUOTATIONS OF THE WEDGE ARE ALSO IRRELEVANT BECAUSE THE DISCOVERY INSTITUTE HAS ALREADY PROVIDED AN UPDATED REVISION OF THE DOCUMENT.

Please keep in mind that my initial concerns about complaints regarding the Wedge document are primarily based upon relevance.  The Discovery Institute repealed and amended the Wedge, and it added extra commentary to clarify its present position.  Interestingly, when I am presented with links to the Wedge document, it is often the updated, revised draft.  This being so, it is questionable why critics continue quoting from the outdated and obsolete version.  It is a comical, obsolete argument that undermines the complainant’s credibility.  In fact, it is an exercise of the same intellectual dishonesty that ID antagonists accuse the Discovery Institute of.

If one desires to criticize the views of the Discovery Institute, then one must use the materials that the Institute claims represent the actual present position held by the Discovery Institute and ID proponents.  I would further add:

1. ID proponents repudiate the Wedge, and distance themselves from it.

2. Mr. Johnson, who authored the Wedge, is retired, and the document is obsolete.

Much about Intelligent Design theory has nothing to do with ideology or religion. When ID is treated as an applied science, “Intelligent Design” is simply another word for bio-design.  Aside from biomimicry and biomimetics, other areas of science overlap with the definition of ID Theory, such as natural genetic engineering, quantum biology, bioinformatics, bio-inspired nanotechnology, selective breeding, biotechnology, genetic engineering, synthetic biology, bionics, and prosthetic implants, to name a few.

III. THOSE WHO RELY UPON THE WEDGE AND THE MOVIE EXPELLED AS ARGUMENTS AGAINST THE MOTIVES OF THE DISCOVERY INSTITUTE FAIL TO MEET THE RELEVANCE REQUIREMENT.

ID antagonists claim:

“The very conception of ‘Intelligent Design’ entails just how ‘secular’ and ‘scientific’ the group tried to make their ‘theory’ sound.  It was created with Christian intentions in mind.”

This is circular reasoning, which is a logical fallacy.  The idea merely restates the opening thesis as the conclusion, and does nothing to support the conclusion.  It also does not overcome the relevance issue regarding the views maintained by the Discovery Institute and ID advocates today.

There is no evidence offered by those who raise the Wedge complaint to connect a religious or ideological motive to ID advocacy. ID Theory must be provided the same opportunity to make predictions and to test a repeatable and falsifiable design-inspired hypothesis.  If anyone has a problem with this, then they own the burden of proof to show why ID scientists are disqualified from performing the scientific method.  In other words, to reject ID on the sole basis of the Wedge document is essentially unjustifiable discrimination based upon a difference of ideological views.   At the end of the day, the only way to falsify a falsifiable scientific hypothesis is to run the experimentation and use the empirical data to evaluate the claim.

Intelligent Design can be expressed as a scientific theory.  Valid scientific predictions can be premised upon an ID-inspired conjecture.  The issue is whether or not ID actually conforms to the scientific method. If it does, then the objection by ID opponents is without merit and irrelevant. If ID fails in scientific reasoning, then critics simply need to demonstrate that, and they will be vindicated.  Otherwise, ID Theory remains a perfectly valid, testable, and falsifiable proposition regardless of its social issues.

So far, ID critics have not made any attempt to offer one solitary scientific argument or employ scientific reasoning as to the basis of ID Theory.


DOES EVOLUTION ALONE INCREASE INFORMATION IN A GENOME?

This is in response to the video entitled, “Evolution CAN Increase Information (Classroom Edition).”

I agree with the basic presentation of Shannon’s work in the video, along with its treatment of Information Theory, the Information Theory definition of “information,” bits, noise, and redundancy.  I also accept the fact that new genes evolve, as described in the video. So far, so good. I do, however, have some objections to the video, including its underlying premise, which I consider to be a strawman.

To illustrate how information is quantified in bits, Shannon referenced the attempt to receive a one-way radio/telephone transmission signal.

Before I outline my dissent, here is what I think the problem is. This is likely the result of creationists hijacking work done by ID scientists, in this case William Dembski, and arguing against evolution using flawed reasoning that misrepresents those scientists. I have no doubt that there are creationists who could benefit from watching this video and learning how they were mistaken in raising the argument the video refutes. But that flawed argument misinterprets Dembski’s writings.

ID Theory is grounded upon Dembski’s developments in the field of informatics, which build upon Shannon’s work. Dembski took Shannon information further and applied mathematical theorems to develop a special and unique concept of information called COMPLEX SPECIFIED INFORMATION (CSI), also known as “specified information.” I have written about CSI in several blog articles, but this one is my most thorough discussion of CSI.

I am often guilty myself of describing the weakness of evolutionary theory as its inability to increase information. In fact, my exact line, which I have probably said a hundred times over the last few years, goes like this:

“Unlike evolution, which explains diversity and inheritance, ID Theory best explains complexity, and how information increases in the genome of a population leading to greater specified complexity.”

I agree with the author of this video script that my general statement is so overly broad that it is vague, and it is easily refuted by specific instances in which new genes evolve. Of those examples, nylonase is certainly an impressive adaptation, to say the least.

But I don’t stop at that general comment to rest my case. I am ready to continue by clarifying what I mean when I talk about “information” in the context of ID Theory. The kind of “information” we are interested in is CSI, which is both complex and specified. Now, there are many instances where biological complexity is specified, but Dembski was not ready to label these “design” until the improbability reaches the Universal Probability Bound of 1 x 10^–150. Such an event is unlikely to occur by chance. This is all in Dembski’s book, “The Design Inference” (1998).
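As a quick back-of-the-envelope illustration (my own, not a worked example from Dembski), the universal probability bound can be restated in bits:

import math

upb = 1e-150
bits = -math.log2(upb)      # Shannon measure of an event this improbable
print(f"{bits:.1f} bits")   # about 498.3 bits

# Dembski rounds this threshold to 500 bits: a specified pattern whose chance
# probability is below 2**-500 (roughly 3 x 10^-151) falls under the bound.
print(2 ** -500)            # about 3.05e-151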

According to ID scientists, CSI occurs early, in that it is in the very molecular machinery required to comprise the first reproducing cell, already in existence when life originated. The first cell already has its own genome, its own genes, and enough bits of information up front, as a given, for frameshift, deletion, insertion, and duplication types of mutations to occur. The information, noise, and redundancy required to make mutations possible are part of the initial setup.

Dembski has long argued, and this is essentially the crux of the No Free Lunch theorems, that neither evolution nor genetic algorithms produce CSI.  Evolution only smuggles CSI forward. Evolution is the mechanism that includes the very mutations and processes that increase the information as demonstrated in the video. But according to ID scientists, the DNA, genes, start-up information, reproduction system, RNA replication, transcription, and protein-folding equipment were there from the very start, and the bits and materials required for the mutations to occur were front-loaded in advance. Evolution only carries it forward into fruition in the phenotype.  I discuss Dembski’s No Free Lunch more fully here.


Dembski wrote:

“Consider a spy who needs to determine the intentions of an enemy—whether that enemy intends to go to war or preserve the peace. The spy agrees with headquarters about what signal will indicate war and what signal will indicate peace. Let’s imagine that the spy will send headquarters a radio transmission and that each transmission takes the form of a bit string (i.e., a sequence of 0s and 1s). The spy and headquarters might therefore agree that 0 means war and 1 means peace. But because noise along the communication channel might flip a 0 to a 1 and vice versa, it might be good to have some redundancy in the transmission. Thus the spy and headquarters might agree that 000 represents war and 111 peace and that anything else will be regarded as a garbled transmission. Or perhaps they will agree to let 0 represent a dot and 1 a dash and let the spy communicate via Morse code in plain English whether the enemy plans to go to war or maintain peace.

“This example illustrates how information, in the sense of meaning, can remain constant whereas the vehicle for representing and transmitting this information can vary. In ordinary life we are concerned with meaning. If we are at headquarters, we want to know whether we’re going to war or staying at peace. Yet from the vantage of mathematical information theory, the only thing that’s important here is the mathematical properties of the linguistic expressions we use to represent the meaning. If we represent war with 000 as opposed to 0, we require three times as many bits to represent war, and so from the vantage of mathematical information theory we are utilizing three times as much information. The information content of 000 is three bits whereas that of 0 is just one bit.” [Source: Information-Theoretic Design Argument]
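A small numeric companion to the spy example (my own sketch): Shannon’s measure assigns an outcome of probability p a surprisal of -log2(p) bits.

import math

def bits(p):
    """Shannon information (surprisal) of an outcome with probability p."""
    return -math.log2(p)

# One equiprobable binary symbol ("0" means war, "1" means peace):
print(bits(0.5))        # 1.0 bit

# The redundant three-symbol signal "000", each symbol equiprobable and independent:
print(bits(0.5 ** 3))   # 3.0 bits, three times as much Shannon information,
                        # even though the meaning ("war") is unchanged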

My main objection to the script comes toward the end, where the narrator, Shane Killian, states that if anyone has a different understanding of the definition of information, and prefers to challenge the strict definition that “information” is a reduction in uncertainty, their rebuttal should be dismissed outright. I personally agree with Shannon, so I don’t have a problem with it, but there are other applications in computer science, bioinformatics, electrical engineering, and a host of other academic disciplines that have their own definitions of information, emphasizing different aspects than Shannon did.

Shannon made huge contributions to these fields, but his one-way radio/telephone transmission analogy is not the only way to understand the concept of information.  Shannon discusses these concepts in his 1948 paper on information theory.  Moreover, even though Shannon’s work was the basis of Dembski’s work, ID Theory concerns the complexity and specificity of information, not just the quantification of “information” per se.

Claude Shannon is credited as the father and discoverer of Information Theory.


MICHAEL BEHE ON THE WITNESS STAND

As most people are aware, Michael Behe championed the design-inspired ID Theory hypothesis of Irreducible Complexity.  Michael Behe testified as an expert witness in Kitzmiller v. Dover (2005). 


Transcripts of all the testimony and proceedings of the Dover trial are available here. While under oath, he testified that his argument was:

“[T]hat the [scientific] literature has no detailed rigorous explanations for how complex biochemical systems could arise by a random mutation or natural selection.”

Behe was specifically referencing the origin of life and molecular and cellular machinery. The cases in point were specifically the bacterial flagellum, cilia, the blood-clotting cascade, and the immune system, because those are what Behe wrote about in his book “Darwin’s Black Box” (1996).

The attorneys piled up a stack of publications regarding the evolution of the immune system just in front of Behe on the witness stand while he was under oath. Behe is criticized by anti-ID antagonists for dismissing the books.

Michael Behe testifies as an expert witness in Kitzmiller v. Dover. Illustration is by Steve Brodner, published in The New Yorker on Dec. 5, 2005.

The books were essentially about how the immune system developed in vertebrates.  But that isn’t what Intelligent Design theory is based upon. ID Theory is based upon the complexity appearing at the outset when life first arose, and the complexity that appears during the Cambrian Explosion.

The biochemical structures Behe predicted to be irreducibly complex (the bacterial flagellum, cilium, blood-clotting cascade, and immune system) arose during the development of the first cell.  These biochemical systems occur at the molecular level in unicellular eukaryotic organisms, as evidenced by the fact that retroviruses are in the DNA of these most primitive life forms.  They are complex, highly conserved, and irreducibly complex.  You can stack a mountain of books and scientific literature on top of this regarding how these biochemical systems changed from that juncture forward in time, but that has nothing to do with the irreducible complexity of the original molecular machinery.

The issue regarding irreducible complexity is the source of the original information that produced the irreducibly complex system in the first place.  The scientific literature on the immune system only addresses changes in the immune system after the system already existed and was in place.  For example, the Type III Secretion System injector (T3SS) is often used to refute the irreducible complexity of the bacterial flagellum.  But the T3SS is not an evolutionary precursor of the bacterial flagellum; it was derived subsequently and is evidence of a decrease in information.

The examining attorney, Eric Rothschild, stacked up those books one on top of the other for courtroom theatrics.

Behe testified:

“These articles are excellent articles I assume. However, they do not address the question that I am posing. So it’s not that they aren’t good enough. It’s simply that they are addressed to a different subject.”

Those who reject ID Theory and dislike Michael Behe emphasize that since Behe is the one claiming that the immune system is irreducibly complex, he owns the burden of keeping up with what other scientists write on the subject.  It should be noted that there has indeed been a wealth of research on the immune system, and the collective whole of the published papers gives us a picture of how the immune system evolved. But the point Behe made was that there is very little knowledge available, if any, as to how the immune system first arose.

The burden was on the ACLU attorneys representing Kitzmiller to cure the defects of foundation and relevance, but they never did. Somehow, anti-ID antagonists spin this around to make it look like Behe was in the wrong here, which is entirely unfounded.  Michael Behe responded to the Dover opinion written by John E. Jones III here. One comment in particular that Behe made is this:

“I said in my testimony that the studies may have been fine as far as they went, but that they certainly did not present detailed, rigorous explanations for the evolution of the immune system by random mutation and natural selection — if they had, that knowledge would be reflected in more recent studies that I had had a chance to read.”

In a live PowerPoint presentation, Behe made additional comments about how the opinion of Judge John E. Jones III was not authored by the judge at all, but by an ACLU attorney.  You can see that lecture here.


Piling up a stack of books in front of a witness without notice, and without providing a chance to review the literature before commenting, has no value other than courtroom theatrics.

The subject was clear: the issue was biological complexity appearing suddenly at the dawn of life. Behe had no burden to go on a fishing expedition through that material. It was up to the examining attorney to direct Behe’s attention to the specific topic and ask direct questions. But the attorney never did that.

One of the members of the opposition for Kitzmiller is Nicholas Matzke, who is employed by the NCSE.  The NCSE was called upon early by the Kitzmiller plaintiffs, and the ACLU was later retained to represent Kitzmiller.  Nick Matzke had been handling the evolution curriculum conflict at Dover as early as the summer of 2004.  Matzke tells the story of how he worked with Barbara Forrest on the history of ID, and with Kenneth Miller, their anti-Behe expert.  Matzke writes,

“Eric Rothschild and I knew that defense expert Michael Behe was the scientific centerpoint of the whole case — if Behe was found to be credible, then the defense had at least a chance of prevailing. But if we could debunk Behe and the “irreducible complexity” argument — the best argument that ID had — then the defense’s positive case would be sunk.”

Matzke offered guidance on the deposition questions for Michael Behe and Scott Minnich, and was present when Behe and Minnich were deposed.  When Eric Rothschild, the attorney who cross-examined Behe in the trial, flew out to Berkeley for Kevin Padian’s deposition, the NCSE discussed with Rothschild how to deal with Behe.  Matzke describes their strategy:

“One key result was convincing Rothschild that Behe’s biggest weakness was the evolution of the immune system. This developed into the “immune system episode” of the Behe cross-examination at trial, where we stacked up books and articles on the evolution of the immune system on Behe’s witness stand, and he dismissed them all with a wave of his hand.”

It should be noted that, as detailed and involved as the topic of the evolution of the vertebrate immune system is, the fact remains that to this day Michael Behe’s 1996 prediction that the immune system is irreducibly complex has not been falsified, even though it is very much falsifiable.  I had the opportunity to personally debate Nick Matzke on this very issue.  The Facebook thread in which this discussion took place is here, in the ID group called Intelligent Design – Official Page.

Again, to repeat the point I made above regarding the courtroom theatrics of stacking the pile of books in front of Behe: the burden was not on Behe to sift through the material to find evidence that would support Kitzmiller.  It was up to the ACLU attorneys to direct Behe’s attention to those places in the books and publications where complex biochemical life and the immune system first arose, and then ask questions specific to that topic.  But since Behe was correct that the material was not responsive to the issue in the examination, there was nothing left for the attorneys to do except engage in theatrics.

There is also a related Facebook discussion thread regarding this topic.


Response to Claim That ID Theory Is An Argument from Incredulity

The Contention That Intelligent Design Theory Succumbs To A Logic Fallacy:

It is argued by those who object to the validity of ID Theory that the proposition of design in nature is an argument from ignorance.  There is no validity to this claim because design in nature is well established by the work of William Dembski.  For example, here is a database of Dembski’s writings: http://designinference.com/dembski-on-intelligent-design/dembski-writings/.  Not only are Dembski’s writings peer-reviewed and published, but so are rebuttals written in response to his work.  Dembski coined the phrase Complex Specified Information (CSI) and argues that it is convincing evidence for design in nature.

Informal Fallacy

The Alleged Gap Argument Problem With Irreducible Complexity:

The argument-from-ignorance allegation against ID Theory is based upon the design-inspired hypothesis championed by Michael Behe, known as irreducible complexity.  It is erroneous to claim ID is based upon an argument from incredulity* because ID Theory makes no appeals to the unobservable, supernatural, paranormal, or anything that is metaphysical or outside the scope of science.  However, the assertion that the irreducible complexity hypothesis is a gap argument is an objection that deserves a closer look to determine whether the criticism is valid.

An irreducibly complex system is one in which (a) the removal of a protein renders the molecular machine inoperable, and (b) the biochemical structure has no stepwise evolutionary pathway.

Here’s how one would set up the examination using gene knockout, reverse engineering, the study of homology, and genome sequencing:

I. To CONFIRM Irreducible Complexity:

Show:

1. The molecular machine fails to operate upon the removal of a protein.

AND,

2. The biochemical structure has no evolutionary precursor.

II. To FALSIFY Irreducible Complexity:

Show:

1. The molecular machine still functions upon loss of a protein.

OR,

2. The biochemical structure DOES have an evolutionary pathway.

The 2 qualifiers make falsification easier, and confirmation more difficult.
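To make the two-pronged test concrete, here is a minimal sketch in Python of the decision logic above.  It is purely illustrative; the two boolean flags are hypothetical stand-ins for the outcomes of the laboratory procedures, not the procedures themselves:

def confirms_irreducible_complexity(fails_without_any_part, has_evolutionary_precursor):
    """Both qualifiers must hold to confirm irreducible complexity (IC)."""
    return fails_without_any_part and not has_evolutionary_precursor

def falsifies_irreducible_complexity(fails_without_any_part, has_evolutionary_precursor):
    """Either qualifier alone is enough to falsify IC."""
    return (not fails_without_any_part) or has_evolutionary_precursor

# Example: a system that still functions after losing a protein is not IC,
# regardless of whether an evolutionary precursor has been identified.
print(falsifies_irreducible_complexity(fails_without_any_part=False,
                                       has_evolutionary_precursor=False))   # True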

Those who object to irreducible complexity often raise the argument that the irreducible complexity hypothesis is based upon there being gaps or negative evidence.   Such critics claim that irreducible complexity is not based upon affirmative evidence, but on a lack of evidence, and as such, irreducible complexity is a gap argument, also known as an argument from ignorance.  However, this assertion that irreducible complexity is nothing other than a gap argument is false.

According to the definition of irreducible complexity, the hypothesis can be falsified EITHER way, by (a) demonstrating the biochemical system still performs its original function upon the removal of any gene that makes up its parts, or (b) showing that there are missing mutations that were skipped, i.e., there is no stepwise evolutionary pathway or precursor.  Irreducible complexity can still be falsified even if no evolutionary precursor is found because of the functionality qualifier.   In other words, the mere fact that there is no stepwise evolutionary pathway does not automatically mean that the system is irreducibly complex.  To confirm irreducible complexity, BOTH qualifiers must be satisfied.  But, it only takes one of the qualifiers to falsify irreducible complexity.  As such, the claim that irreducible complexity is fatally tied to a gap argument is without merit.

It is true that there is a legitimate logic fallacy known as proving a negative; the question is whether there is such a thing as proving nonexistence.  While it is impossible to prove a universal negative or provide negative proof, it is logically valid to limit a search for a target to a reasonable search space and obtain a quantity of zero as a scientifically valid answer.

Solving a logic problem might be a challenge, but there is a methodical procedure that will lead to success.  The cure to a logic fallacy is simply to correct the error and solve the problem.

The reason the irreducible complexity hypothesis is logically valid is that there is no attempt to base the prediction that certain biochemical molecular machines are irreducibly complex upon an absence of evidence.  If that were so, then the critics would be correct.  But this is not the case.  Instead, the irreducible complexity hypothesis requires research, using such procedures in molecular biology as (a) gene knockout, (b) reverse engineering, (c) examining homologous systems, and (d) sequencing the genome of the biochemical structure.  The gene knockout procedure was used by Scott Minnich in 2004–2005 to show that the removal of any of the proteins of a bacterial flagellum renders the bacterium incapable of motility (it can’t swim anymore).  Michael Behe also mentions (e) yet another way that testing irreducible complexity using the gene knockout procedure might falsify the hypothesis here.

When the hypothesis of irreducible complexity is tested in the lab using any of the procedures noted above, a thorough investigation is conducted that produces evidence of absence.  There is a huge difference between absence of evidence and evidence of absence.  One is a logic fallacy, while the other is an empirically generated result, a scientifically valid quantity concluded upon thorough examination.  So, depending upon the analysis, you can prove a negative.

Evidence of Absence

Here’s an excellent example as to why irreducible complexity is logically valid, and not an argument from ignorance.  If I were to ask you if you had change for a dollar, you could say, “Sorry, I don’t have any change.” If you make a diligent search in your pockets to discover there are indeed no coins anywhere to be found on your person, then you have affirmatively proven a negative that your pockets were empty of any loose change. Confirming that you had no change in your pockets was not an argument from ignorance because you conducted a thorough examination and found it to be an affirmatively true statement.

The term irreducible complexity was coined by Michael Behe in his book, “Darwin’s Black Box” (1996).  In that book, Behe predicted that certain biochemical systems would be found to be irreducibly complex.  Those specific systems were (a) the bacterial flagellum, (b) the cilium, (c) the blood-clotting cascade, and (d) the immune system.  It’s now 2013 at the time of writing this essay.  For 17 years the research has been conducted, and the flagellum has been shown to be irreducibly complex.  It’s been thoroughly researched, reverse engineered, and its genome sequenced.  It is a scientific fact that the flagellum has no precursor.  That’s not a guess.  It is not ignorance stated from taking some wild, uneducated guess.  It’s not tossing one’s hands up in the air and saying, “I give up.”  It is a scientific conclusion based upon thorough examination.

Logic Fallacies

Logic fallacies, such as circular reasoning, argument from ignorance, red herring, strawman argument, special pleading, and others, are based upon philosophy and rhetoric.  While they might bear on the merit of a scientific conclusion, it is up to the peer-review process to determine the validity of a scientific hypothesis.

Again, suppose you were asked how much change you have in your pockets.  You can put your hand in your pocket and look to see how many coins are there.  If there is no loose change, it is NOT an argument from ignorance to state, “Sorry, I don’t have any spare change.”  You didn’t guess.  You stuck your hands in your pockets, looked, and scientifically deduced the quantity to be zero.  The same is true with irreducible complexity.  After the search has taken place, the prediction that the biochemical system is irreducibly complex is upheld and verified.  Hence, there is no argument from ignorance.

The accusation that irreducible complexity is an argument from ignorance essentially suggests a surrender and abandonment of ever attempting to empirically determine whether the prediction is scientifically correct.  It’s absurd for anyone to suggest that ID scientists are not interested in finding Darwinian mechanisms responsible for the evolution of an irreducibly complex biochemical structure.  If you lost money from your wallet, it would be ridiculous for someone to accuse you of having no interest in recovering your money.  That’s essentially what is being claimed when someone levels the argument-from-ignorance accusation.  The fact is you know you did look (you might have turned your house upside down looking), and you know for a fact that the money is missing.  That doesn’t mean you might not still find it someday (the premise is still falsifiable).  But a thorough examination took place, and you determined the money is gone.

Consider Mysterious Roving Rocks:

On a sun-scorched plateau known as Racetrack Playa in Death Valley, California, rocks of all sizes glide across the desert floor.  Some of the rocks accompany each other in pairs, creating parallel trails, even when turning corners, so that the tracks left behind resemble those of an automobile.  Other rocks travel solo for hundreds of meters back and forth along the same track.  Sometimes these paths lead to the stone that made them, while other trails lead nowhere, as the marking instrument has vanished.

Roving Rocks

Some of these rocks weigh several hundred pounds. That makes the question: “How do they move?” a very challenging one.  The truth is no one knows just exactly how these rocks move.   No one has ever seen them in motion.  So, how is this phenomenon explained?

A few people have reported seeing Racetrack Playa covered by a thin layer of ice.  One idea is that water freezes around the rocks and then wind, blowing across the top of the ice, drags the ice sheet with its embedded rocks across the surface of the playa.  Some researchers have found highly congruent trails on multiple rocks that strongly support this movement theory.  Others suggest wind alone to be the energy source behind the movement of the roving rocks.

The point is that anyone’s guess, prediction, or speculation is as good as that of anyone else.  All these predictions are testable and falsifiable by simply setting up instrumentation to monitor the movements of the rocks.  Are any of these predictions an argument from ignorance?  No.  As long as the inquisitive examiner makes an effort to determine the answer, this is a perfectly valid scientific endeavor.

The argument from ignorance would only apply when someone gives up, and just draws a conclusion without any further attempt to gain empirical data.  It is not a logic fallacy in and of itself on the sole basis that there is a gap of knowledge as to how the rocks moved from Point A to Point B.  The only logic fallacy would be to draw a conclusion while resisting further examination.  Such is not the case with irreducible complexity.  The hypothesis has endured 17 years of laboratory research by molecular biologists, and the research continues to this very day.

The Logic Fallacy Has No Bearing On Falsifiability:

Here’s yet another example of why irreducible complexity is scientifically falsifiable, and therefore not an argument-from-ignorance logic fallacy.  If someone were correct in asserting the argument-from-incredulity fallacy here, then they would have eliminated all science.  By that reasoning, Newton’s law of gravity was an argument from ignorance because he didn’t know anything more than what he had discovered.  It was later falsified by Einstein.  So, according to this flawed logic, Einstein’s theory of relativity is an argument from ignorance because there might be someone in the future who will falsify it with a Theory of Everything.

Whether or not a hypothesis passes the argument-from-ignorance logic criterion is an entirely philosophical question, much like how a mathematical argument might be asserted.  If the argument from ignorance were applied in peer review to all science papers submitted for publication, the science journals would be nearly empty of any documents to reference.  Science is not based upon philosophical objections and arguments.  Science is based upon the definition of science: observation, a falsifiable hypothesis, experimentation, results, and conclusion.  It is the fact that these methodical elements are in place that makes science what it is supposed to be, and that is empiricism.

Scientific Method

Whether a scientific hypothesis is falsifiable is not affected by philosophical arguments based upon logic fallacies.  Irreducible complexity is very much falsifiable based upon its definition.  The argument from ignorance only attacks the significance of the results and conclusions of research on irreducible complexity; it doesn’t prevent irreducible complexity from being falsifiable.  In fact, the argument-from-ignorance objection actually emphasizes just the opposite: that irreducible complexity might be falsified tomorrow, because it inherently argues the optimism that it’s just a matter of time before an evolutionary pathway is discovered in future research.  This is not a bad thing; the fact that irreducible complexity is falsifiable is a good thing.  That testability and attainable goalpost is what you want in a scientific hypothesis.

ID Theory Is Much More Than Just The One Hypothesis of Irreducible Complexity:

ID Theory is also an applied science; click here for examples in biomimicry.  Intelligent design is applied in areas of bioengineering, nanotechnology, selective breeding, and bioinformatics, to name a few.  ID Theory is a study of information and design in nature.  And there are design-inspired conjectures as to where the source of information originates, such as the rapidly growing field of quantum biology, Natural Genetic Engineering, and front-loading via panspermia.

In conclusion, the prediction that certain biochemical systems exist which are irreducibly complex is not a gaps argument.  The definition of irreducible complexity is stated above, and it is very much a testable, repeatable, and falsifiable hypothesis.  It is a prediction that certain molecular machinery will not operate upon the removal of a part and has no stepwise evolutionary precursor.  This was predicted by Behe 17 years ago and still holds, as evidenced by the bacterial flagellum, for example.

*  Even though these two are technically distinguishable logic fallacies, the argument from incredulity is so similar to the argument from ignorance that for purposes of discussion I treat the terms as synonymous.


RESPONSE TO THE MARK PERAKH CRITIQUE, “THERE IS A FREE LUNCH AFTER ALL: WILLIAM DEMBSKI’S WRONG ANSWERS TO IRRELEVANT QUESTIONS”

I. INTRODUCTION

This essay is a reply to chapter 11 of the book authored by Mark Perakh entitled, Why Intelligent Design Fails: A Scientific Critique of the New Creationism (2004).  The chapter can be reviewed here.  Chapter 11, “There is a Free Lunch After All: William Dembski’s Wrong Answers to Irrelevant Questions,” is a rebuttal to the book written by William Dembski entitled, No Free Lunch (2002).  Mark Perakh also authored another anti-ID book, “Unintelligent Design.”  The Discovery Institute replied to Perakh’s work here.

The book by William Dembski, No Free Lunch (2002) is a sequel to his classic, The Design Inference (1998). The Design Inference used mathematical theorems to define design in terms of a chance and statistical improbability.  In The Design Inference, Dembski explains complexity, and demonstrated that when complex information is specified, it determines design.  Simply put, Complex Specified Information (CSI) = design.  It’s CSI that is the technical term that mathematicians, information theorists, and ID scientists can work with to determine whether some phenomenon or complex pattern is designed.

One of the most important contributors to ID Theory is American mathematician Claude Shannon, who is considered to be the father of Information Theory. Essentially, ID Theory is a sub-theory of Information Theory in the field of Bioinformatics. This is one of Dembski’s areas of expertise.

Claude Shannon is seen here with Theseus, his magnetic mouse. The mouse was designed to search through the corridors until it found the target.

Claude Shannon pioneered the foundations of modern Information Theory.  His quantifiable units of information, applied in fields such as computer science, are still called Shannon information to this day.

Shannon invented a mouse that was programmed to navigate through a maze to search for a target, concepts that are integral to Dembski’s mathematical theorems of which are based upon Information Theory.  Once the mouse solved the maze it could be placed anywhere it had been before and use its prior experience to go directly to the target. If placed in unfamiliar territory, the mouse would continue the search until it reached a known location and then proceed to the target.  The ability of the device to add new knowledge to its memory is believed to be the first occurrence of artificial learning.

In 1950 Shannon published a paper on computer chess entitled Programming a Computer for Playing Chess.  It describes how a machine or computer could be made to play a reasonable game of chess.  His process for having the computer decide on which move to make is a minimax procedure, based on an evaluation function of a given chess position.  Shannon gave a rough example of an evaluation function in which the value of the black position was subtracted from that of the white position.  Material was counted according to the usual relative chess piece values.  (http://en.wikipedia.org/wiki/Claude_Shannon).
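As a rough illustration of the kind of procedure Shannon described, here is a minimal Python sketch of a material-count evaluation function and a depth-limited minimax search.  This is not Shannon’s program; the piece values follow the usual convention, and the toy game used to exercise the minimax routine is my own illustrative stand-in for a real chess move generator:

# Material values in a Shannon-style evaluation (pawn=1, knight/bishop=3, rook=5, queen=9).
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(white_pieces, black_pieces):
    """Static evaluation: white material minus black material."""
    return (sum(PIECE_VALUES[p] for p in white_pieces)
            - sum(PIECE_VALUES[p] for p in black_pieces))

def minimax(state, depth, maximizing, moves, apply_move, static_eval):
    """Generic depth-limited minimax over a game tree supplied through callables."""
    legal = moves(state)
    if depth == 0 or not legal:
        return static_eval(state)
    values = [minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, static_eval) for m in legal]
    return max(values) if maximizing else min(values)

print(evaluate(["Q", "R", "P"], ["R", "P", "P"]))   # 8: white is ahead by a queen minus a pawn

# Toy demonstration of the search itself: "states" are integers, each move adds 1 or 2,
# and the static value of a state is the state itself (the maximizer wants a large number).
toy_moves = lambda s: [1, 2] if s < 4 else []
toy_apply = lambda s, m: s + m
print(minimax(0, 3, True, toy_moves, toy_apply, lambda s: s))   # 4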

Shannon’s work obviously involved applying what he knew at the time so that the computer program could scan the possibilities for any given configuration on the chess board and determine the optimum move to make.  As you will see, this application of a search through any given phase space for a target, one fitness function among many, as characterized in computer chess, is exactly what the debate over Dembski’s No Free Lunch (NFL) theorems is about.

When Robert Deyes wrote a review on Stephen Meyer’s “Signature In The Cell,” he noted “When talking about ‘information’ and its relevance to biological design, Intelligent Design theorists have a particular definition in mind.”  Stephen Meyer explained in “Signature In The Cell” that information is: “the attribute inherent in and communicated by alternative sequences or arrangements of something that produce specific effects” (p.86).

When Shannon unveiled his theory for quantifying information, it included several axioms, one of which is that information is inversely proportional to uncertainty.  Similarly, design can be contrasted with chance.
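One standard way this relationship is quantified is Shannon’s self-information: the number of bits of prior uncertainty resolved by observing an outcome of probability p.  A minimal sketch (the helper name is my own):

import math

def self_information_bits(p):
    """Shannon self-information: bits of uncertainty resolved by an outcome of probability p."""
    return -math.log2(p)

print(self_information_bits(0.5))      # 1.0 bit (the outcome of a fair coin flip)
print(self_information_bits(1 / 27))   # ~4.75 bits (one symbol drawn from a 27-character alphabet)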

II. COMPLEX SPECIFIED INFORMATION (CSI):

CSI is based upon the theorem:

sp(E) and SP(E) → D(E)

When a small-probability (SP) event (E) is complex, and

SP(E) = [P(E|I) < the Universal Probability Bound].  Or, in English, we know an event E is a small-probability event when the probability of event E given I is less than the Universal Probability Bound.  I = all relevant side information and all stochastic hypotheses.  This is all in Dembski’s book, The Design Inference.

An event E is specified by a pattern independent of E, expressed mathematically as sp(E).  Upper-case SP(E) denotes the small-probability event we are attempting to determine is CSI, or designed.  Lower-case sp(E) is a prediction that we will discover the SP(E).  Therefore, if sp(E) and SP(E), then D(E).  D(E) means the event E is not only a small-probability event, but we can conclude it is designed.

Dembski’s Universal Probability Bound = 0.5 × 10^-150, or 0.5 times 10 to the negative 150th power.  This is the magic number at which one is scientifically justified in invoking design.  It’s been said that, using Dembski’s formula, the probability that must be matched in order to ascribe design is like announcing in advance of the dealing that you are going to be dealt 24 Royal Flushes in a row, and then the event plays out exactly as forecast.  In other words, just as intelligence might be entirely illusory, so likewise CSI is nothing other than a mathematical ratio that might have nothing at all to do with actual design.

The odds against dealing a Royal Flush, given all combinations of 5 cards randomly drawn from a full deck of 52 without replacement, are 649,739 to 1.  According to Dembski, if someone were dealt a Royal Flush 24 times in a row after an advance announcement predicting that such a happening would take place, his contention would be that it was so improbable that someone cheated, or “design” would have had to have been involved.

The odds against being dealt a Royal Flush, given all combinations of 5 cards randomly drawn from a full deck of 52 without replacement, are 649,739 to 1.
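The 649,739-to-1 figure can be checked directly: there are four Royal Flushes (one per suit) among all C(52, 5) possible five-card hands.  A quick sketch of the arithmetic:

from math import comb

hands = comb(52, 5)            # 2,598,960 possible five-card hands
royal_flushes = 4              # one per suit
p = royal_flushes / hands      # probability of a Royal Flush on a single deal (~1.54e-06)
print(hands, p)
print(f"odds against: {hands / royal_flushes - 1:,.0f} to 1")   # 649,739 to 1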

I’m oversimplifying CSI just for the sake of making the point that we have already imposed upon the term “design” a technical definition that requires no intelligence or design as we understand the everyday use of the words.  What’s important is that, just as improbable as it is to be dealt a Royal Flush, so likewise is the level of difficulty natural selection is up against to produce what appears to be designed in nature.  And when CSI is observed in nature, which occurs occasionally, that not only confirms ID predictions and defies Darwinian gradualism, but also tips off a scientist that such might be evidence of additional ID-related mechanisms at work.

It is true that William Dembski’s theorems are based upon an assumption that we can quantify everything in the universe; no argument there.  But he only used that logic to derive his Universal Probability Bound, which is a nearly infinitesimally small number: 0.5 × 10^-150, or 0.5 times 10 to the negative 150th power.  Do you not think that when a probability is this low it is a safe bet to invoke corruption of natural processes by an intelligent agency?  The number is a useful number.

I wrote two essays on CSI to provide a better understanding of the specified complexity introduced in Dembski’s book, The Design Inference.  In this book, Dembski introduces and expands on the meaning of CSI, and then presents reasoning as to why CSI infers design.  The first essay I wrote on CSI, here, is an elementary introduction to the overall concept.  I wrote a second essay, here, that provides a more advanced discussion of CSI.

CSI does show up in nature.  That’s the whole point of the No Free Lunch principle: there is no way evolution can take credit for the occurrences when CSI shows up in nature.

III. NO FREE LUNCH

Basically, the book “No Free Lunch” is a sequel to the earlier work, The Design Inference.  While we get more calculations that confirm and verify Dembski’s earlier work, we also get new assertions made by Dembski.  It is very important to note that ID Theory is based upon the CSI that is established in The Design Inference.  The main benefit of the second book, “No Free Lunch,” is that it further validates and verifies CSI, which was established in The Design Inference.  The importance of this fact cannot be overemphasized.  Additionally, “No Free Lunch” further confirms the validity of the assertion that design is inseparable from intelligence.

Before “No Free Lunch,” there was little effort demonstrating that CSI is connected to intelligence. That’s a problem because CSI = design. So, if CSI = design, it should be demonstrable that CSI correlates and is directly proportional to intelligence. This is the thesis of what the book, “No Free Lunch” sets out to do. If “No Free Lunch” fails to successfully support the thesis that CSI correlates to intelligence, that would not necessarily impair ID Theory, but if Dembski succeeds, then it would all the more lend credibility to ID Theory and certainly all of Dembski’s work as well.

IV. PERAKH’S ARGUMENT

The outline of Perakh’s critique of Dembski’s No Free Lunch theorems is as follows:

1.    Methinks It Is like a Weasel—Again
2.    Is Specified Complexity Smuggled into Evolutionary Algorithms?
3.    Targetless Evolutionary Algorithms
4.    The No Free Lunch Theorems
5.    The NFL Theorems—Still with No Mathematics
6.    The No Free Lunch Theorems—A Little Mathematics
7.    The Displacement Problem
8.    The Irrelevance of the NFL Theorems
9.    The Displacement “Problem”

1.  METHINKS IT IS LIKE A WEASEL – AGAIN

One common demonstration to help people understand how CSI works is to take a letter sequence. This can be done with anything, but the common example is this pattern:

METHINKS•IT•IS•LIKE•A•WEASEL

This letter arrangement is used most often to describe CSI because the math has already been worked out.  The bullets represent spaces.  There are 27 possibilities at each location in a symbol string 28 characters in length.  If natural selection were an entirely random search, it would take on the order of 1 × 10^40 tries (that’s 10 to the 40th power, or a 1 followed by 40 zeroes).  It’s a small probability.  However, natural selection (NS) is smarter than that, and Richard Dawkins has shown how a cumulative-selection algorithm modeled on NS can get the answer correct in an impressive 43 attempts, as Dembski notes here.

In this example, the odds were only 1 in 10^40.  CSI requires an even greater improbability than that.  If you take a pattern or model, such as METHINKS•IT•IS•LIKE•A•WEASEL, and you keep adding information, you soon reach improbabilities that are within the domain of CSI.

Dembski’s explanation to the target sequence of METHINKS•IT•IS•LIKE•A•WEASEL is as follows:

“Thus, in place of 10^40 tries on average for pure chance to produce the target sequence, by employing the Darwinian mechanism it now takes on average less than 100 tries to produce it. In short, a search effectively impossible for pure chance becomes eminently feasible for the Darwinian mechanism.

“So does Dawkins’s evolutionary algorithm demonstrate the power of the Darwinian mechanism to create biological information? No. Clearly, the algorithm was stacked to produce the outcome Dawkins was after. Indeed, because the algorithm was constantly gauging the degree of difference between the current sequence from the target sequence, the very thing that the algorithm was supposed to create (i.e., the target sequence METHINKS•IT•IS•LIKE•A•WEASEL) was in fact smuggled into the algorithm from the start. The Darwinian mechanism, if it is to possess the power to create biological information, cannot merely veil and then unveil existing information. Rather, it must create novel information from scratch. Clearly, Dawkins’s algorithm does nothing of the sort.

“Ironically, though Dawkins uses a targeted search to illustrate the power of the Darwinian mechanism, he denies that this mechanism, as it operates in biological evolution (and thus outside a computer simulation), constitutes a targeted search. Thus, after giving his METHINKS•IT•IS•LIKE•A•WEASEL illustration, he immediately adds: “Life isn’t like that.  Evolution has no long-term goal. There is no long-distant target, no final perfection to serve as a criterion for selection.” [Footnote] Dawkins here fails to distinguish two equally valid and relevant ways of understanding targets: (i) targets as humanly constructed patterns that we arbitrarily impose on things in light of our needs and interests and (ii) targets as patterns that exist independently of us and therefore regardless of our needs and interests. In other words, targets can be extrinsic (i.e., imposed on things from outside) or intrinsic (i.e., inherent in things as such).

“In the field of evolutionary computing (to which Dawkins’s METHINKS•IT•IS•LIKE•A•WEASEL example belongs), targets are given extrinsically by programmers who attempt to solve problems of their choice and preference. Yet in biology, living forms have come about without our choice or preference. No human has imposed biological targets on nature. But the fact that things can be alive and functional in only certain ways and not in others indicates that nature sets her own targets. The targets of biology, we might say, are “natural kinds” (to borrow a term from philosophy). There are only so many ways that matter can be configured to be alive and, once alive, only so many ways it can be configured to serve different biological functions. Most of the ways open to evolution (chemical as well as biological evolution) are dead ends. Evolution may therefore be characterized as the search for alternative “live ends.” In other words, viability and functionality, by facilitating survival and reproduction, set the targets of evolutionary biology. Evolution, despite Dawkins’s denials, is therefore a targeted search after all.” (http://evoinfo.org/papers/ConsInfo_NoN.pdf).

Weasel Graph

This graph was presented by a blogger who ran just one run of the weasel algorithm for Fitness of “best match” for n = 100 and u = 0.2.
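For readers who want to see the kind of algorithm under discussion, here is a minimal weasel-style sketch in Python.  It is an illustration of cumulative selection toward an explicit target, not Dawkins’s code or the run graphed above; the population size of 100 echoes the caption’s n, while the per-character mutation rate of 0.05 is my own illustrative choice, since the meaning of u = 0.2 in that run is not spelled out here:

import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "       # 27 symbols: 26 letters plus the space

def score(candidate):
    """Fitness: the number of positions that already match the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(parent, rate):
    """Copy the parent, changing each character with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def weasel(pop_size=100, rate=0.05, seed=0):
    """Cumulative selection toward an explicit target; returns the generations used."""
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while parent != TARGET:
        generation += 1
        children = [mutate(parent, rate) for _ in range(pop_size)]
        best = max(children, key=score)
        if score(best) >= score(parent):    # keep a child only if it is at least as fit as the parent
            parent = best
    return generation

print(weasel())      # converges in a modest number of generations
print(27 ** 28)      # ~1.2 x 10^40 strings a purely random search would have to sift through

The contrast between the two printed numbers is the very thing both sides are debating: the target phrase is reached quickly only because the fitness function already measures distance to that particular target.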

Perakh doesn’t make any argument here, but introduces the METHINKS IT IS LIKE A WEASEL configuration as the initial focus of what is to follow.  The only derogatory comment he makes about Dembski is to charge that Dembski is “inconsistent.”  But there is no basis to accuse Dembski of any contradiction.  Perakh states himself, “Evolutionary algorithms may be both targeted and targetless” (page 2).  He also admits that Dembski was correct in that “Searching for a target IS teleological” (page 2).  Yet Perakh faults Dembski for simply noting the teleological inference, and falsely accuses Dembski of contradicting himself on this issue when there is no contradiction.  All Dembski did was acknowledge that teleology should be noted and taken into account when discussing the subject.

Perakh also states on page 3 that Dembski lamented over the observation made by Dawkins.  This is unfounded rhetoric and ad hominem that does nothing to support Perakh’s claims.  There is no basis to assert or benefit to gain by suggesting that Dembski was emotionally dismayed because of the observations made by Dawkins.  The issue is a talking point for discussion.

Perakh correctly represents the fact, “While the meaningful sequence METHINKSITISLIKEAWEASEL is both complex and specified, a sequence NDEIRUABFDMOJHRINKE of the same length, which is gibberish, is complex but not specified” (page 4).  And, then he correctly reasons the following,

“If, though, the target sequence is meaningless, then, according to the above quotation from Behe, it possesses no SC. If the target phrase possesses no SC, then obviously no SC had to be “smuggled” into the algorithm.” Hence, if we follow Dembski’s ideas consistently, we have to conclude that the same algorithm “smuggles” SC if the target is meaningful but does not smuggle it if the target is gibberish.” (Emphasis in original, page 4)

Perakh then arrives at the illogical conclusion that such reasoning is “preposterous because algorithms are indifferent to the distinction between meaningful and gibberish targets.”  Perakh is correct that algorithms are indifferent to teleology and making distinctions.  But, he has no basis to criticize Dembski on this point.

Completed Jigsaw Puzzle

This 40-piece jigsaw puzzle is more complex than the Weasel problem, which consists of only the letters M, E, T, H, I, N, K, S, L, A, W, plus a space.

In the Weasel problem submitted by Richard Dawkins, the solution (target) was provided to the computer up front.  The solution to the puzzle was embedded in the letters provided to the computer to arrange into an intelligible sentence.  The same analogy applies to a jigsaw puzzle.  There is only one end-result picture into which the puzzle pieces can be assembled.  The information of the picture is embedded in the pieces and is not lost merely by cutting the image into pieces.  One can still solve the puzzle while blinded up front from seeing what the target looks like.  There is only one solution to the Weasel problem, so it is a matter of deduction, and not a blind search as Perakh maintains.  The task the Weasel algorithm had to perform was to unscramble the letters and rearrange them in the correct sequence.

In the METHINKS•IT•IS•LIKE•A•WEASEL example, the fitness function was given up front, and the target was intentionally designed CSI to begin with.  It’s a matter of the definition of specified complexity (SC).  If information is both complex and specified, then it is CSI by definition, and CSI = SC.  They are two ways to express the same identical concept.  Perakh is correct that the algorithm has nothing in and of itself to do with the specified complexity of the target phrase.  The reason the target phrase is specified complexity is that the complex pattern was specified up front to be the target in the first place, all of which is independent of the algorithm.  So far, then, Perakh has not made a point of argument yet.

Dembski makes subsequent comments about the weasel math here and here.

2.  IS SPECIFIED COMPLEXITY SMUGGLED INTO EVOLUTIONARY ALGORITHMS?

Perakh asserts on page 4 that “Dembski’s modified algorithm is as teleological as Dawkins’s original algorithm.”  So what?  This is a pointless red herring that Perakh continues to work for no benefit and in support of no argument against Dembski.  It is essentially a non-argument.  All sides (Dembski, Dawkins, and Perakh himself) have conceded up front that discussion of this topic is difficult without stumbling over anthropomorphism.  Dembski noted it up front, which is commendable; but somehow Perakh wrongfully tags this as some fallacy Dembski is committing.

Personifying the algorithms as having teleological behavior was a fallacy noted up front.  So there is no basis for Perakh to allege that Dembski is somehow misapplying any logic in his discussion.  The point was acknowledged by all participants in the discussion from the very beginning.  Perakh is not introducing anything new here; he is merely an annoyance in raising a point that was already noted.  And Perakh has still not raised any actual argument.

Dembski wrote in No Free Lunch (194–196) that evolutionary algorithms do not generate CSI, but can only “smuggle” it from a “higher order phase space.”  CSI is also called specified complexity (SC).  Perakh makes the ridiculous claim on page 4 that this point is irrelevant to biological evolution, but offers no reasoning as to why.  To support his challenge against Dembski, Perakh states, “since biological evolution has no long-term target, it requires no injection of SC.”

The question is whether it’s possible a biological algorithm caused the existence of the CSI.  Dembski says yes, and his theorems established in The Design Inference are enough to satisfy the claim.  But, Perakh is arguing here that the genetic algorithm is capable of generating the CSI.  Perakh states that natural selection is unaware of its result (page 4), which is true.  Then he says Dembski must, “offer evidence that extraneous information must be injected into the natural selection algorithm apart from that supplied by the fitness functions that arise naturally in the biosphere.”  Dembski shows this in “Life’s Conservation Law – Why Darwinian Evolution Cannot Create Biological Information.”

3.  TARGETLESS EVOLUTIONARY ALGORITHMS

Biomorphs

Next, Perakh raises the example made by Richard Dawkins in “The Blind Watchmaker,” in which Dawkins uses what he calls “biomorphs” as an argument against artificial selection.  While Dawkins exhibits an imaginative jab to ridicule ID Theory, raising the subject again, as Perakh does, is pointless.  Dawkins used the illustration of biomorphs to contrast natural selection with the artificial selection upon which ID Theory is based.  It’s an excellent example.  I commend Dawkins for coming up with these biomorph algorithms.  They are very unique and original.  You can see color examples of them here.

The biomorphs created by Dawkins are actually different intersecting lines of various degrees of complexity, and they resemble the Rorschach figures often used by psychologists and psychiatrists.  Biomorphs depict both inanimate objects, like a cradle and a lamp, and biological forms, such as a scorpion, spider, and bat.  It is an entire departure from evolution, as it is impossible to make any logical connection as to how a fox would evolve into a lunar lander, or how a tree frog would morph into a precision balance scale.  Since the idea departs from evolutionary logic of any kind, because no rationale connecting any of the forms is provided, it would be seemingly impossible to devise an algorithm that fits biomorphs.

Essentially, Dawkins used these biomorphs to propose a metaphysical conjecture.  The intent of Dawkins is to suggest ID Theory is a metaphysical contemplation while natural selection is entirely logical reality.  Dawkins explains the point in raising the idea of biomorphs is:

“… when we are prevented from making a journey in reality, the imagination is not a bad substitute. For those, like me, who are not mathematicians, the computer can be a powerful friend to the imagination. Like mathematics, it doesn’t only stretch the imagination. It also disciplines and controls it.”

Biomorphs submitted by Richard Dawkins from The Blind Watchmaker, figure 5 p. 61

This is an excellent point and well-taken. The idea Dawkins had to reference biomorphs in the discussion was brilliant.  Biomorphs are an excellent means to assist in helping someone distinguish the difference between natural selection verses artificial selection.  This is exactly the same point design theorists make when protesting the personification of natural selection to achieve reality-defying accomplishments.  What we can conclude is that scientists, regardless of whether they accept or reject ID Theory, dislike the invention of fiction to fill in unknown gaps of phenomena.

In the case of ID Theory, yes the theory of intelligent design is based upon artificial selection, just as Dawkins notes with his biomorphs.  But, unlike biomorphs and the claim of Dawkins, ID Theory still is based upon fully natural scientific conjectures.

4.  THE NO FREE LUNCH THEOREMS

In this section of the argument, Perakh doesn’t provide an argument.  He’s more interested in talking about his hobby, which is mountain climbing.

The premise offered by Dembski that Perakh seeks to refute is the statement in No Free Lunch, which reads, “The No Free Lunch theorems show that for evolutionary algorithms to output CSI they had first to receive a prior input of CSI.” (No Free Lunch, page 223).  Somehow, Perakh believes he can prove Dembski’s theorems false.  To accomplish that task, one would have to analyze Dembski’s theorems.  First of all, Dembski’s theorems take into account all the possible factors and variables that might apply, not just the algorithms.  Perakh doesn’t make anything close to such an evaluation.  Instead, Perakh does nothing but use the mountain climbing analogy to demonstrate that we cannot know exactly which algorithms natural selection will promote and which it will overlook.  This fact is a given up front and not in dispute.  As such, Perakh presents a non-argument here that does nothing to challenge Dembski’s theorems in the slightest.  Perakh doesn’t even discuss the theorems, let alone refute them.

The whole idea of the No Free Lunch theorems is to demonstrate how CSI is smuggled across many generations and then shows up visibly in a phenotype of a life form countless generations later.  Many factors must be contemplated in this process, including evolutionary algorithms.  Dembski’s book, No Free Lunch, is about demonstrating how CSI is smuggled through, which is where the book’s name comes from.  If CSI is not manufactured by evolutionary processes, including genetic algorithms, then it was displaced from the time it was initially front-loaded.  Hence, there’s no free lunch.

Front-Loading could be achieved several ways, one of which is via panspermia.

But, Perakh makes no attempt to discuss the theorems in this section, much less refute Dembski’s work.  I’ll discuss front-loading in the Conclusion.

5.  THE NO FREE LUNCH THEOREMS—STILL WITH NO MATHEMATICS

Perakh finally makes a valid point here.  He highlights a weakness in Dembski’s book: the calculations provided do little to account for the average performance of multiple algorithms operating at the same time.

Referencing his mountain climbing analogy from the previous section, Perakh explains the fitness function is the height of peaks in a specific mountainous region.  In his example he designates the target of the search to be a specific peak P of height 6,000 meters above sea level.

“In this case the number n of iterations required to reach the predefined height of 6,000 meters may be chosen as the performance measure.  Then algorithm a1 performs better than algorithm a2 if a1 converges on the target in fewer steps than a2. If two algorithms generated the same sample after m iterations, then they would have found the target—peak P—after the same number n of iterations. The first NFL theorem tells us that the average probabilities of reaching peak P in m steps are the same for any two algorithms” (Emphasis in the original, page 10).

Since any two algorithms will have an equal average performance when all possible fitness landscapes are included, then the average number n of iterations required to locate the target is the same for any two algorithms if the averaging is done over all possible mountainous landscapes.

Therefore, Perakh concludes the no free lunch theorems of Dembski do not say anything  about the relative performance of algorithms a2 and a1 on a specific landscape. On a specific landscape, either a2 or a1 may happen to be much better than its competitor.  Perakh goes on to apply the same logic in a targetless context as well.
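Perakh’s point about specific landscapes is easy to illustrate numerically.  In the toy sketch below (the landscape, the starting point, and the two searchers are my own illustrative choices, not Perakh’s), a greedy hill climber that happens to start near the single smooth peak vastly outperforms blind sampling on that one landscape, even though the two tie when averaged over all landscapes:

import random

SIZE = 1000                               # integer search space 0..999

def smooth_peak(x):
    """A toy 'mountainside': a single smooth peak at x = 10, sloping away on both sides."""
    return -abs(x - 10)

def hill_climb(f):
    """Greedy local search starting at x = 0; counts evaluations until the peak is reached."""
    x, evaluations = 0, 0
    while True:
        evaluations += 1
        neighbors = [n for n in (x - 1, x + 1) if 0 <= n < SIZE]
        best = max(neighbors, key=f)
        if f(best) <= f(x):
            return evaluations            # no neighbor is better: the peak has been reached
        x = best

def blind_search(f, seed=3):
    """Uniform random sampling; counts evaluations until the peak value is sampled."""
    random.seed(seed)
    peak = max(f(x) for x in range(SIZE))   # used only as the stopping criterion
    evaluations = 0
    while True:
        evaluations += 1
        if f(random.randrange(SIZE)) == peak:
            return evaluations

print(hill_climb(smooth_peak))    # 11 evaluations on this particular landscape
print(blind_search(smooth_peak))  # on the order of 1,000 evaluations for blind sampling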

These points Perakh raises are well taken.  Subsequent to the writing of Perakh’s book in 2004, Dembski ultimately provided the supplemental math to cure these issues in his paper entitled, “Searching Large Spaces: Displacement and the No Free Lunch Regress” (March 2005), which is available for review here.  It should also be noted that Perakh concludes this section of chapter 11 by admitting that the No Free Lunch theorems “are certainly valid for evolutionary algorithms.”  If that is so, then there is no dispute.

6.  THE NO FREE LUNCH THEOREMS—A LITTLE MATHEMATICS

It is noted that Dembski’s first no free lunch theorem is correct. It is based upon any given algorithm performed m times. The result will be a time-ordered sample set d comprised of m measured values of f within the range Y. Let P be the conditional probability of having obtained a given sample after m iterations, for given f, Y, and m.

Then, the first equation is

Σf P(d | f, m, a1) = Σf P(d | f, m, a2)

where the summation runs over all possible fitness functions f, and a1 and a2 are two different algorithms.

Perakh emphasizes this summation is performed over “all possible fitness functions.”   In other words, Dembski’s first theorem proves that when algorithms are averaged over all possible fitness landscapes the results of a given search are the same for any pair of algorithms.  This is the most basic of Dembski’s theorems, but the most limited for application purposes.
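The first theorem can even be checked by brute force on a tiny toy space.  In the sketch below (the four-point space, the binary fitness values, and the two fixed query orders are assumptions chosen purely for illustration), two different non-repeating search strategies are averaged over every possible fitness function f: X → Y, and their average performance comes out identical:

from itertools import product

X = range(4)                          # a tiny search space of four points
ORDER_A = [0, 1, 2, 3]                # algorithm a1: query the points left to right
ORDER_B = [3, 1, 0, 2]                # algorithm a2: a different fixed query order

def queries_to_find_max(f, order):
    """Number of evaluations a non-repeating searcher needs to hit f's maximum."""
    best = max(f)
    for i, x in enumerate(order, start=1):
        if f[x] == best:
            return i

def average_performance(order):
    """Average over every possible fitness function f: X -> {0, 1}."""
    all_functions = list(product([0, 1], repeat=len(X)))   # 2**4 = 16 landscapes
    return sum(queries_to_find_max(f, order) for f in all_functions) / len(all_functions)

print(average_performance(ORDER_A), average_performance(ORDER_B))  # both print 1.6875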

The second equation applies the first one for time-dependent landscapes.  Perakh notes several difficulties in the no free lunch theorems including the fact that evolution is a “coevolutionary” process.  In other words, Dembski’s theorems apply to ecosystems that involve a set of genomes all searching for the same fixed fitness function.  But, Perakh argues that in the real biological world, the search space changes after each new generation.  The genome of any given population slightly evolves from one generation to the next.  Hence, the search space that the genomes are searching is modified with each new generation.

Chess

The game of Chess is played one successive procedural (evolutionary) step at a time. With each successive move (mutation) on the chessboard, the chess-playing algorithm must search for a different and new board configuration as to the next move the computer program (natural selection) should select for.

The no free lunch models discussed here are comparable to the computer chess game mentioned above.   With each slight modification (Darwinian gradualism) in the step by step process of the chess game, the pieces end up in different locations on the chessboard so that the search process starts all over again with a new and different search for a new target than the preceding search.

There is one optimum move that is better than others, which might be a preferred target.  Any other reasonable move on the chessboard is a fitness function.  But, the problem in evolution is not as clear. Natural selection is not only blind, and therefore conducts a blind search, but does not know what the target should be either.

Where Perakh is leading with this foundation is that, given a target up front, as the chess-solving algorithm has, there might be enough information in the description of the target itself to assist the algorithm in at least locating a fitness function.  Whether Perakh is correct can be tested by applying the math.

As aforementioned, subsequent to the publication of Perakh’s book, Dembski provided the supplemental math to cure these issues in his paper entitled, “Searching Large Spaces: Displacement and the No Free Lunch Regress” (March 2005), which is available for review here.  It should also be noted that Perakh concludes this section of the chapter by admitting that the No Free Lunch theorems “are certainly valid for evolutionary algorithms.”

7.  THE DISPLACEMENT PROBLEM

As already mentioned, the no free lunch theorems show that for evolutionary algorithms to output CSI they must first have received a prior input of CSI.  There’s a term to describe this: it’s called displacement.  Dembski wrote in a paper entitled “Evolution’s Logic of Credulity: An Unfettered Response to Allen Orr” (2002) that the key point of No Free Lunch concerns displacement.  The “NFL theorems merely exemplify one instance not the general case.”

Dembski continues to explain displacement,

“The basic idea behind displacement is this: Suppose you need to search a space of possibilities. The space is so large and the possibilities individually so improbable that an exhaustive search is not feasible and a random search is highly unlikely to conclude the search successfully. As a consequence, you need some constraints on the search – some information to help guide the search to a solution (think of an Easter egg hunt where you either have to go it cold or where someone guides you by saying ‘warm’ and ‘warmer’). All such information that assists your search, however, resides in a search space of its own – an informational space. So the search of the original space gets displaced to a search of an informational space in which the crucial information that constrains the search of the original space resides” (Emphasis in the original, http://tinyurl.com/b3vhkt4).
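The “warm/warmer” picture in the quotation can be made concrete with a toy sketch (the search space, the hidden target, and the higher/lower oracle below are all my own illustrative choices).  Guidance makes the original search easy, but only because the guiding oracle already carries information about where the target sits, which is the displacement being described:

import random

SPACE = range(1000)                      # the original search space
TARGET = 677                             # the hidden target (illustrative)

def blind_search(seed=0):
    """Uniform random guessing with no guidance; count queries until the target is hit."""
    random.seed(seed)
    queries = 0
    while True:
        queries += 1
        if random.choice(SPACE) == TARGET:
            return queries

def guided_search():
    """Bisection driven by an oracle that answers 'higher' or 'lower' for each guess."""
    low, high, queries = 0, len(SPACE) - 1, 0
    while True:
        guess = (low + high) // 2
        queries += 1
        if guess == TARGET:
            return queries
        if guess < TARGET:               # the oracle's hint narrows the remaining interval
            low = guess + 1
        else:
            high = guess - 1

print(blind_search())    # on the order of the size of the space (about a thousand guesses on average)
print(guided_search())   # about 10 queries, because each hint carries information about the target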

8.  THE IRRELEVANCE OF THE NFL THEOREMS

In the conclusion of his paper, Searching Large Spaces: Displacement and the No Free Lunch Regress (2005), Dembski writes:

“To appreciate the significance of the No Free Lunch Regress in this latter sense, consider the case of evolutionary biology. Evolutionary biology holds that various (stochastic) evolutionary mechanisms operating in nature facilitate the formation of biological structures and functions. These include preeminently the Darwinian mechanism of natural selection and random variation, but also others (e.g., genetic drift, lateral gene transfer, and symbiogenesis). There is a growing debate whether the mechanisms currently proposed by evolutionary biology are adequate to account for biological structures and functions (see, for example, Depew and Weber 1995, Behe 1996, and Dembski and Ruse 2004). Suppose they are. Suppose the evolutionary searches taking place in the biological world are highly effective assisted searches qua stochastic mechanisms that successfully locate biological structures and functions. Regardless, that success says nothing about whether stochastic mechanisms are in turn responsible for bringing about those assisted searches.” (http://www.designinference.com/documents/2005.03.Searching_Large_Spaces.pdf).

Up until this juncture, Perakh admits, “Within the scope of their legitimate interpretation—when the conditions assumed for their derivation hold—the NFL theorems certainly apply” to evolutionary algorithms.  His only objection up to this point has been that the NFL theorems do not hold in the case of coevolution.  However, subsequent to this critique, Dembski resolved those issues.

Here, Perakh reasons that even if the NFL theorems were valid for coevolution, he still rejects Dembski’s work because they are irrelevant.  According to Perakh, if evolutionary algorithms can outperform random sampling, aka a “blind search,” then the NFL theorems are meaningless.  Perakh bases this assertion on the statement by Dembski on page 212 of No Free Lunch, which provides, “The No Free Lunch theorems show that evolutionary algorithms, apart from careful fine-tuning by a programmer, are no better than blind search and thus no better than pure chance.”

Therefore, for Perakh, if evolutionary algorithms refute this comment by Dembski by outperforming a blind search, then this is evidence the algorithms are capable of generating CSI.  If evolutionary algorithms generate CSI, then Dembski’s NFL theorems have been soundly falsified, along with ID Theory as well.  If such were the case, then Perakh would be correct, the NFL theorems would indeed be irrelevant.

Perakh rejects the intelligent design “careful fine-tuning by a programmer” terminology in favor of a premise he considers just as reasonable:

“If, though, a programmer can design an evolutionary algorithm which is fine-tuned to ascend certain fitness landscapes, what can prohibit a naturally arising evolutionary algorithm to fit in with the kinds of landscape it faces?” (Page 19)

Perakh explains how his thesis can be illustrated:

“Naturally arising fitness landscapes will frequently have a central peak topping relatively smooth slopes. If a certain property of an organism, such as its size, affects the organism’s survivability, then there must be a single value of the size most favorable to the organism’s fitness. If the organism is either too small or too large, its survival is at risk. If there is an optimal size that ensures the highest fitness, then the relevant fitness landscape must contain a single peak of the highest fitness surrounded by relatively smooth slopes” (Page 20).

The graphs in Fig. 11.1 schematically illustrate Perakh’s thesis:

Fitness Function

This is Figure 11.1 in Perakh’s book – Fitness as a function of some characteristic, in this case the size of an animal. Solid curve – the schematic presentation of a naturally arising fitness function, wherein the maximum fitness is achieved for a certain single-valued optimal animal’s size. Dashed curve – an imaginary rugged fitness function, which hardly can be encountered in the existing biosphere.

Subsequent to Perakh’s book, published in 2004, Dembski did indeed address the issue raised here in his paper, “Conservation of Information in Search: Measuring the Cost of Success” (Sept. 2009), http://evoinfo.org/papers/2009_ConservationOfInformationInSearch.pdf.  Dembski’s “Conservation of Information” paper starts from the foundation that laws of information have already been discovered, and that ideas such as Perakh’s thesis were falsified back in 1956 by Leon Brillouin, a pioneer in information theory.  Brillouin wrote, “The [computing] machine does not create any new information, but it performs a very valuable transformation of known information” (L. Brillouin, Science and Information Theory. New York: Academic, 1956).

In his paper, “Conservation of Information,” Dembski and his coauthor, Robert Marks, go on to demonstrate how laws of conservation of information render evolutionary algorithms incapable of generating CSI as Perakh had hoped for.  Throughout this chapter, Perakh continually cited the various works of information theorists, Wolpert and Macready.  On page 1051 in “Conservation of Information” (2009), Dembski and Marks also quote Wolpert and Macready:

“The no free lunch theorem (NFLT) likewise establishes the need for specific information about the search target to improve the chances of a successful search.  ‘[U]nless you can make prior assumptions about the . . . [problems] you are working on, then no search strategy, no matter how sophisticated, can be expected to perform better than any other.’  Search can be improved only by “incorporating problem-specific knowledge into the behavior of the [optimization or search] algorithm” (D. Wolpert and W. G. Macready, ‘No free lunch theorems for optimization,’ IEEE Trans. Evol. Comput., vol. 1, no. 1, pp. 67–82, Apr. 1997).”

In “Conservation of Information” (2009), Dembski and Marks demonstrate how conservation of information theorems indicate that even a moderately sized search requires problem-specific information to be successful.  The paper proves that any search algorithm performs, on average, no better than random search without replacement unless it takes advantage of problem-specific information about the search target or the search-space structure.
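The averaging claim is easy to see in a toy simulation.  The sketch below is my own, not from the paper: it draws many random fitness functions and compares a blind sampler against a greedy neighbor-climbing strategy, both restricted to the same number of distinct queries.  Averaged over the random problems, neither does better than the other, which is the no-free-lunch point that problem-specific information is what makes the difference.

import random

def random_fitness(n):
    # One "problem": independent random fitness values on n points.
    return [random.random() for _ in range(n)]

def blind_search(f, k):
    # Best value seen after k distinct queries chosen blindly.
    return max(f[i] for i in random.sample(range(len(f)), k))

def greedy_neighbor_search(f, k):
    # An "assisted-looking" climber: probe the next unvisited neighbor
    # of the best point found so far, k distinct queries in total.
    n = len(f)
    current = random.randrange(n)
    visited = {current}
    best_i, best_v = current, f[current]
    while len(visited) < k:
        nxt = (best_i + 1) % n
        while nxt in visited:
            nxt = (nxt + 1) % n
        visited.add(nxt)
        if f[nxt] > best_v:
            best_i, best_v = nxt, f[nxt]
    return best_v

N, K, TRIALS = 100, 10, 5000
avg_blind = sum(blind_search(random_fitness(N), K) for _ in range(TRIALS)) / TRIALS
avg_greedy = sum(greedy_neighbor_search(random_fitness(N), K) for _ in range(TRIALS)) / TRIALS
print(avg_blind, avg_greedy)   # essentially identical when averaged over random problems

On any one structured problem the climber can win handily; averaged over problems it knows nothing about, it cannot.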

Throughout “Conservation of Information” (2009), Dembski and Marks discuss evolutionary algorithms at length:

“Christensen and Oppacher note the ‘sometimes outrageous claims that had been made of specific optimization algorithms.’ Their concern is well founded. In computer simulations of evolutionary search, researchers often construct a complicated computational software environment and then evolve a group of agents in that environment. When subjected to rounds of selection and variation, the agents can demonstrate remarkable success at resolving the problem in question.  Often, the claim is made, or implied, that the search algorithm deserves full credit for this remarkable success. Such claims, however, are often made as follows: 1) without numerically or analytically assessing the endogenous information that gauges the difficulty of the problem to be solved and 2) without acknowledging, much less estimating, the active information that is folded into the simulation for the search to reach a solution.” (Conservation of information, page 1058).

Dembski and Marks remind us that Perakh’s suggestion that evolutionary algorithms can outperform a blind search is the same scenario as the analogy of the proverbial monkeys typing at keyboards.

The monkeys-at-typewriters scenario is a classic analogy for the odds of a random evolutionary search achieving specified complexity, and a good illustration of the viability of such a search.  Dembski and Marks run the calculations for good measure, using a 27-character alphabet (26 letters plus a space) and a 28-character message.  The answer is 1.59 × 10^42, which is more than the mass of 800 million suns in grams.
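As a back-of-envelope sketch of the kind of calculation involved (mine, not a reproduction of the paper’s exact figures), the size of such a sequence space and its difficulty in bits can be tallied like this:

import math

ALPHABET = 27          # 26 letters plus a space
MESSAGE_LENGTH = 28    # a 28-character target message

search_space = ALPHABET ** MESSAGE_LENGTH            # number of possible messages
p_blind = 1 / search_space                           # chance one random attempt hits the target
endogenous_bits = MESSAGE_LENGTH * math.log2(ALPHABET)

print(f"possible messages : {search_space:.2e}")
print(f"blind-hit odds    : {p_blind:.2e}")
print(f"search difficulty : {endogenous_bits:.0f} bits of endogenous information")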

In their Conclusion, Dembski and Marks state:

 “Endogenous information represents the inherent difficulty of a search problem in relation to a random-search baseline. If any search algorithm is to perform better than random search, active information must be resident. If the active information is inaccurate (negative), the search can perform worse than random. Computers, despite their speed in performing queries, are thus, in the absence of active information, inadequate for resolving even moderately sized search problems. Accordingly, attempts to characterize evolutionary algorithms as creators of novel information are inappropriate.” (Conservation of information, page 1059).
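As I read the paper’s definitions, the bookkeeping behind those terms is straightforward: endogenous information measures the difficulty of the blind search, exogenous information the difficulty remaining once an assisted search is in hand, and active information the difference, i.e., what the assisted search must already have built into it.  A small sketch with made-up numbers:

import math

def endogenous_information(p):
    # Difficulty of the problem under the blind (uniform) baseline: -log2 p.
    return -math.log2(p)

def exogenous_information(q):
    # Difficulty remaining for the assisted search with success probability q.
    return -math.log2(q)

def active_information(p, q):
    # Information the assisted search contributes: log2(q / p).
    return math.log2(q / p)

p, q = 1e-12, 0.5     # toy numbers: a blind search vs. a fairly reliable assisted search
print(endogenous_information(p))     # ~39.9 bits of difficulty
print(active_information(p, q))      # ~38.9 bits that had to come from somewhere
print(math.isclose(active_information(p, q),
                   endogenous_information(p) - exogenous_information(q)))  # I_active = I_endogenous - I_exogenous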

9.  THE DISPLACEMENT “PROBLEM”

This argument is based upon the claim by Dembski on page 202 of his book, “No Free Lunch,” in which he states, “The significance of the NFL theorems is that an information-resource space J does not, and indeed cannot, privilege a target T.”  However, Perakh sees a problem with Dembski’s statement, because the NFL theorems say nothing about any “information-resource space.”  If Dembski wanted to introduce this concept within the framework of the NFL theorems, then he should have at least shown what role an “information-resource space” plays in view of the “black-box” nature of the algorithms in question.

On page 203 of No Free Lunch, Dembski introduces the displacement problem:

“… the problem of finding a given target has been displaced to the new problem of finding the information j capable of locating that target. Our original problem was finding a certain target within phase space. Our new problem is finding a certain j within the information-resource space J.”

Perakh adds that the NFL theorems are indifferent to the presence or absence of a target in a search, which leaves the “displacement problem,” with its constant references to targets, hanging in the air.

Dembski’s response is as follows:

“What is the significance of the Displacement Theorem? It is this. Blind search for small targets in large spaces is highly unlikely to succeed. For a search to succeed, it therefore needs to be an assisted search. Such a search, however, resides in a target of its own. And a blind search for this new target is even less likely to succeed than a blind search for the original target (the Displacement Theorem puts precise numbers to this). Of course, this new target can be successfully searched by replacing blind search with a new assisted search. But this new assisted search for this new target resides in a still higher-order search space, which is then subject to another blind search, more difficult than all those that preceded it, and in need of being replaced by still another assisted search.  And so on. This regress, which I call the No Free Lunch Regress, is the upshot of this paper. It shows that stochastic mechanisms cannot explain the success of assisted searches.

“This last statement contains an intentional ambiguity. In one sense, stochastic mechanisms fully explain the success of assisted searches because these searches themselves constitute stochastic mechanisms that, with high probability, locate small targets in large search spaces. Yet, in another sense, for stochastic mechanisms to explain the success of assisted searches means that such mechanisms have to explain how those assisted searches, which are so effective at locating small targets in large spaces, themselves arose with high probability.  It’s in this latter sense that the No Free Lunch Regress asserts that stochastic mechanisms cannot explain the success of assisted searches.” [Searching Large Spaces: Displacement and the No Free Lunch Regress (2005)].

Perakh makes some valid claims.  Several years after the publication of Perakh’s book, Dembski provided updated calculations for the NFL theorems and his application of the math to the displacement problem.  These are available for review in his paper, “The Search for a Search: Measuring the Information Cost of Higher Level Search” (2010).

Perakh discusses the comments made by Dembski to support the assertion that CSI must necessarily be “smuggled” or “front-loaded” into evolutionary algorithms.  Perakh outright rejects Dembski’s claims and dismisses his work on very weak grounds: what appears to be a hand-wave that begs the question of how the CSI was generated in the first place, and circular reasoning overall.

Remember that the basis of the NFL theorems is to show that when CSI shows up in nature, it is only because it originated earlier in the evolutionary history of that population and was smuggled into the genome by ordinary evolution.  The CSI might have been front-loaded millions of years earlier in the biological ancestry, possibly in higher taxa.  Regardless of where the CSI originated, Dembski’s claim is that the CSI is visible now because it was inserted earlier, since evolutionary processes do not generate CSI.

This smuggling forward of CSI in the genome is called displacement.  The reason the alleged law of nature called displacement occurs is that, when Information Theory is applied to identify CSI, the target of the search theorems is the CSI itself.  Dembski explains,

“So the search of the original space gets displaced to a search of an informational space in which the crucial information that constrains the search of the original space resides. I then argue that this higher-order informational space (‘higher’ with respect to the original search space) is always at least as big and hard to search as the original space.” (Evolution’s Logic of Credulity: An Unfettered Response to Allen Orr, 2002.)

It is important to understand what Dembski means by displacement here because Perakh distorts displacement to mean something different in this section.  Perakh asserts:

“An algorithm needs no information about the fitness function. That is how the ‘black-box’ algorithms start a search. To continue the search, an algorithm needs information from the fitness function. However, no search of the space of all possible fitness function is needed. In the course of a search, the algorithm extracts the necessary information from the landscape it is exploring. The fitness landscape is always given, and automatically supplies sufficient information to continue and to complete the search.” (Page 24)

To support these contentions, Perakh references Dawkins’s weasel algorithm for comparison.  The weasel algorithm, says Perakh, “explores the available phrases and selects from them using the comparison of the intermediate phrases with the target.” Perakh then argues that, in the weasel example, the fitness function has built into it the information necessary to perform the comparison.  He concludes,

“This fitness function is given to the search algorithm; to provide this information to the algorithm, no search of a space of all possible fitness functions is needed and therefore is not performed.” (Emphasis in original, Page 24)
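For readers who have not seen it, the weasel algorithm is short enough to sketch in full.  The version below is my own minimal rendering of the scheme described above, with conventional parameter choices (100 offspring per generation, a 5% per-character mutation rate, and the parent retained so fitness never regresses), not necessarily Dawkins’s original settings.  Note that the fitness function score() works only because the target phrase is written into it up front:

import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def score(phrase):
    # Fitness = number of positions already matching the known target.
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(phrase, rate=0.05):
    # Copy the phrase, changing each character with probability `rate`.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in phrase)

def weasel(offspring=100, rate=0.05, seed=1):
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while parent != TARGET:
        generation += 1
        # Cumulative selection: keep whichever candidate (parent included)
        # sits closest to the target.
        candidates = [parent] + [mutate(parent, rate) for _ in range(offspring)]
        parent = max(candidates, key=score)
    return generation

print(weasel())   # typically converges in a few dozen generations

The comparison against TARGET inside score() is exactly the built-in information Perakh concedes the fitness function must carry.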

If Perakh is right, then the same is true for natural evolutionary algorithms.  Having bought into his own circular reasoning, he then declares that this argument renders Dembski’s “displacement problem” “a phantom.” (Page 24)

One of the problems with this argument is that Perakh admits that there is CSI, yet offers no explanation as to how it originates and increases in the genome of a population so as to produce greater complexity.  Perakh is begging the question.  He offers no math, no algorithm, no calculations, no example.  He merely imposes his own properties on displacement, a strawman, and then shoots that version down.  There is no attempt to derive how the algorithm ever finds the target in the first place, which is disappointing given that Dembski provides the math to support his own claims.

Perakh appears to be convinced that evolutionary algorithmic searches taking place in the biological world are highly effective assisted searches that successfully locate target biological structures and functions.  And, as such, he is satisfied that these evolutionary algorithms can generate CSI. What Perakh needs to remember is that a genuine evolutionary algorithm is still a stochastic mechanism. The hypothetical success of the evolutionary algorithm says nothing about whether stochastic mechanisms are in turn responsible for bringing about those assisted searches.  Dembski explains,

“Evolving biological systems invariably reside in larger environments that subsume the search space in which those systems evolve. Moreover, these larger environments are capable of dramatically changing the probabilities associated with evolution as occurring in those search spaces. Take an evolving protein or an evolving strand of DNA. The search spaces for these are quite simple, comprising sequences that at each position select respectively from either twenty amino acids or four nucleotide bases. But these search spaces embed in incredibly complex cellular contexts. And the cells that supply these contexts themselves reside in still higher-level environments.” [Searching Large Spaces: Displacement and the No Free Lunch Regress (2005), pp. 31-32]

Dembski argues that the uniform probability on the search space almost never characterizes the system’s evolution; instead, it is a nonuniform probability that brings the search to a successful conclusion, and it is the larger environment that supplies this nonuniform probability.  Dembski notes that Richard Dawkins made the same point as Perakh in Climbing Mount Improbable (1996).  In that book, Dawkins argued that biological structures that at first appear impossible with respect to uniform probability, blind search, pure randomness, and so on, become probable when the probabilities are reset by evolutionary mechanisms.

[Figure: propagation of active information through two levels of the probability hierarchy.]

The kind of search Perakh presents is also addressed in “The Search for a Search: Measuring the Information Cost of Higher Level Search” (2010).  The blind search Perakh complains of is one of uniform probability.  In this kind of problem, given any probability measure on Ω, Dembski’s calculations indicate that the active entropy for any partition with respect to a uniform probability baseline will be nonpositive (The Search for a Search, page 477).  We have no information available about the search in Perakh’s example.  All Perakh gives us is that the fitness function provides the evolutionary algorithm clues so that the search is narrowed, but we do not know what that information is.  Perakh is simply speculating that, given enough attempts, the evolutionary algorithm will get lucky and outperform the blind search.  Again, this describes uniform probability.

According to Dembski’s far more detailed mathematical analysis, if no information about a search exists, so that the underlying measure is uniform, which matches Perakh’s example, “then, on average, any other assumed measure will result in negative active information, thereby rendering the search performance worse than random search.” (The Search for a Search, page 477).
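The direction of that claim can be checked with a toy calculation (mine, not the paper’s theorem): average the active information of any “assumed” measure over single-point targets drawn uniformly at random, and the result is never positive, a consequence of Jensen’s inequality, with zero reached only when the assumed measure is itself uniform.

import math
import random

def average_active_information(measure):
    # Active information of `measure` against a uniform baseline, averaged over
    # single-point targets chosen uniformly at random.  The per-target value is
    # log2(measure[i] * n), since the uniform baseline assigns each point 1/n.
    n = len(measure)
    return sum(math.log2(measure[i] * n) for i in range(n)) / n

def random_measure(n, seed=None):
    # An arbitrary nonuniform "assumed" probability measure on n points.
    rng = random.Random(seed)
    weights = [rng.random() + 1e-9 for _ in range(n)]
    total = sum(weights)
    return [w / total for w in weights]

n = 1000
uniform = [1.0 / n] * n
print(average_active_information(uniform))                    # exactly 0.0
print(average_active_information(random_measure(n, seed=7)))  # strictly negative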

Dembski expands on the scenario:

“Presumably this nonuniform probability, which is defined over the search space in question, splinters off from richer probabilistic structures defined over the larger environment. We can, for instance, imagine the search space being embedded in the larger environment, and such richer probabilistic structures inducing a nonuniform probability (qua assisted search) on this search space, perhaps by conditioning on a subspace or by factorizing a product space. But, if the larger environment is capable of inducing such probabilities, what exactly are the structures of the larger environment that endow it with this capacity? Are any canonical probabilities defined over this larger environment (e.g., a uniform probability)? Do any of these higher level probabilities induce the nonuniform probability that characterizes effective search of the original search space? What stochastic mechanisms might induce such higher-level probabilities?  For any interesting instances of biological evolution, we don’t know the answer to these questions. But suppose we could answer these questions. As soon as we could, the No Free Lunch Regress would kick in, applying to the larger environment once its probabilistic structure becomes evident.” [Searching Large Spaces: Displacement and the No Free Lunch Regress (2005), pp. 32]

The probabilistic structure would itself require explanation in terms of stochastic mechanisms.  And, the No Free Lunch Regress blocks any ability to account for assisted searches in terms of stochastic mechanisms [“Searching Large Spaces: Displacement and the No Free Lunch Regress” (2005)].

Dembski has since updated his theorems, supplying additional mathematics and analysis.  The NFL theorems are now analyzed along both vertical and horizontal lines, with the proofs presented geometrically in three-dimensional space.

[Figure: a 3-D geometric application of the NFL theorems, showing a three-dimensional simplex in {ω1, ω2, ω3} with a1 = a2 = a3 = 1. The two congruent triangles in the box illustrate a geometric approach to proving the Strict Vertical No Free Lunch Theorem.]

In “The Search for a Search: Measuring the Information Cost of Higher Level Search” (2010), the NFL theorems are analyzed both horizontally and vertically.  The Horizontal NFL Theorem shows that the average relative performance of searches never exceeds that of unassisted (blind) searches.  The Vertical NFL Theorem shows that the difficulty of searching for a successful search increases exponentially with respect to the minimum allowable active information being sought.

This leads to the displacement principle, which holds that “the search for a good search is at least as difficult as a given search.”   Perakh may have raised a good point, but Dembski has since done the math, provided the proofs, and shown his work, confirming his theorems.  Perakh, on the other hand, offered an argument that was nothing more than unverified speculation, with no calculations to validate his point.

V.  CONCLUSION

In the final section of the chapter, Perakh reiterates his main points.  He begins by saying,

“Dembski’s critique of Dawkins’s ‘targeted’ evolutionary algorithm fails to repudiate the illustrative value of Dawkins’s example, which demonstrates how supplementing random changes with a suitable law increases the rate of evolution by many orders of magnitude.” (Page 25)

No, this is a strawman; Perakh submitted nothing to establish such a conclusion.  Neither Dembski nor the Discovery Institute has any dispute with Darwinian mechanisms of evolution.  The issue is whether ONLY such mechanisms are responsible for specified complexity (CSI).  Intelligent Design proponents do not challenge that “supplementing random changes with a suitable law increases the rate of evolution by many orders of magnitude.”

Next, Perakh claims, “Dembski ignores Dawkins’s ‘targetless’ evolutionary algorithm, which successfully illustrates spontaneous increase of complexity in an evolutionary process.” (Page 25).

No, this isn’t true.  First, Dembski did not ignore Dawkins’s weasel algorithm.  Second, the weasel algorithm isn’t targetless: we’re given the target up front and know exactly what it is.  Third, the weasel algorithm did not show any increase in specified complexity; all the letters in the sequence already existed.  When evolution operates in the real biological world, the genome of the population is reshuffled from one generation to the next.  No new information is added that leads to greater complexity; the morphology results from the same information being rearranged.

In the case of the weasel example, the target was already embedded in the original problem, just as one and only one full picture can be assembled from the pieces of a jigsaw puzzle.  When the puzzle is completed, no piece should be missing (unless one was lost) and there should be no pieces left over.  The CSI was the original picture that was cut up into pieces to be reassembled.  The weasel example is actually a better illustration of front-loading: all the algorithm had to do was figure out how to arrange the letters back into the proper intelligible sequence.

The CSI was specified up front in the target, or fitness function, to begin with.  The point of the NFL theorems is that if the weasel algorithm were a real-life evolutionary example, then that complex specified information (CSI) would have been input into the genome of the population in advance.  But the analogy quickly breaks down for many reasons.

Perakh then asserts, “Contrary to Dembski’s assertions, evolutionary algorithms routinely outperform a random search.”  (Page 25). This is false.  Perakh speculated that this was a possibility, and Dembski not only refuted it but demonstrated that, absent problem-specific active information, evolutionary algorithms do not, on average, outperform a random search.

Perakh next maintains:

“Contrary to Dembski assertion, the NFL theorems do not make Darwinian evolution impossible. Dembski’s attempt to invoke the NFL theorems to prove otherwise ignores the fact that these theorems assert the equal performance of all algorithms only if averaged over all fitness functions.” (Page 25).

No, there is no such assertion by Dembski; this is nonsense.  Intelligent Design proponents do not assert any false dichotomy.  ID Theory supplements evolution, providing the conjecture needed to explain specified complexity.  Darwinian evolution still occurs, but it only explains inheritance and diversity; it is ID Theory that explains complexity.  As for the claim that the NFL theorems’ equal-performance result for algorithms averaged over all fitness functions defeats Dembski’s argument, Perakh never established it.

Perakh also claims:

“Dembski’s constant references to targets when he discusses optimization searches are based on his misinterpretation of the NFL theorems, which entail no concept of a target. Moreover, his discourse is irrelevant to Darwinian evolution, which is a targetless process.” (Page 25).

No, Dembski did not misinterpret the NFL theorems; the person who misunderstands and misrepresents them is Perakh.  Statements like this make one doubt whether Perakh understands what CSI is, either.  Notice the trend in his writing: when Perakh looked for support for an argument, he referenced those who have authored rebuttals in opposition to Dembski’s work, but when he looked for an authority to explain the meaning of Dembski’s work, he nearly always cited Dembski himself.  Perakh never performs any math to support his own challenges.  Finally, Perakh never established anywhere that Dembski misunderstood or misapplied any principle of Information Theory.

Finally, Perakh ends the chapter with this gem:

“The arguments showing that the anthropic coincidences do not require the hypothesis of a supernatural intelligence also answer the questions about the compatibility of fitness functions and evolutionary algorithms.” (Page 25).

This is a strawman.  ID Theory has nothing to do with the supernatural; if it did, it would not be a scientific theory by the definition of science, which is based upon empiricism.   As should be obvious from this debate, Intelligent Design theory is more closely aligned with Information Theory than most sciences are.  ID Theory is not about teleology; it is more about front-loading.

William Dembski’s work is based upon pitting “design” against chance. In his book The Design Inference, he used mathematical theorems and formulas to devise a definition of design based upon mathematical probability. It is an empirical way to work with improbable, complex information patterns and sequences, called specified complexity, also known as complex specified information (CSI). There is no contemplation of the source of the information other than its being front-loaded.  ID Theory involves only a study of the information (CSI) itself. Design = CSI. We can study CSI because it is observable.

There is absolutely no speculation of any kind that the source of the information is extraterrestrial beings or any other kind of designer, natural or non-natural. The study concerns only the information (CSI) itself, nothing else. There are several non-Darwinian conjectures as to how the information could develop without the need for designers, including panspermia, natural genetic engineering, and what is called “front-loading.”

In ID, “design” does not require designers. It can be treated as something derived from “intelligence,” as in William Dembski’s book, “No Free Lunch,” but Dembski uses mathematics to support his work, not metaphysics. The intelligence could be illusory; all the theorems detect is extreme improbability, because that is all the math can do. It is called Complex Specified Information, and it is the information that ID Theory is about. There is no speculation into the nature of the intelligent source, assuming Dembski was right in determining that the source is intelligent in the first place. All that is really required is a transporter of the information, which could be an asteroid colliding with Earth and carrying complex DNA in the genome of some unicellular organism. You don’t need a designer to validate ID Theory; ID has nothing to do with designers except for engineers and intelligent agents that are actually observable.
