Is Consciousness Quantifiable and Computable?

Map of Neural Circuits in the Human Brain


This image is an actual photograph. It is not a printout of the artwork your 9th grader brought home from their Apple computer software class. It is a map of neural circuits in the human brain produced by the Human Connectome Project. You can view their fascinating work here.

This was a classic study confirming that Intelligent Design Theory works when put to the test, as ID Theory has always withstood fierce scrutiny and testing in the past. ID Theory held up to all four of Michael Behe’s predictions of irreducible complexity in Darwin’s Black Box (1996), and was confirmed many times after that. The most recent memorable occasion was when the predictions made by Jonathan Wells in his book, The Myth of Junk DNA, were confirmed by the findings of the ENCODE Project in September 2012.

This time, it has not only been determined that information is quantifiable, which extends the research program William Dembski began with “Complex Specified Information,” but the ability to quantify consciousness itself has now been realized. It is an exciting era in the history of biology, and we get to see it unfold in our lifetime.

Wired Science covered this story here. It reads,

“There’s a theory, called Integrated Information Theory, developed by Giulio Tononi at the University of Wisconsin, that assigns to any one brain, or any complex system, a number — denoted by the Greek symbol of Φ — that tells you how integrated a system is, how much more the system is than the union of its parts. Φ gives you an information-theoretical measure of consciousness. Any system with integrated information different from zero has consciousness. Any integration feels like something.

It’s not that any physical system has consciousness. A black hole, a heap of sand, a bunch of isolated neurons in a dish, they’re not integrated. They have no consciousness. But complex systems do. And how much consciousness they have depends on how many connections they have and how they’re wired up.”
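The intuition in the quoted passage can be illustrated with a toy calculation. To be clear, this is not Tononi's actual Φ, which requires minimizing over all partitions of a system's cause-effect structure; here, the mutual information between the two halves of a hypothetical two-unit system stands in as a crude proxy for how much more a system is than the union of its parts:

```python
import math

def mutual_information(joint):
    """Mutual information I(X;Y) in bits, from a joint distribution
    given as a dict {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    mi = 0.0
    for (x, y), p in joint.items():
        if p > 0:
            mi += p * math.log2(p / (px[x] * py[y]))
    return mi

# "Integrated" toy system: two units that always agree.
integrated = {(0, 0): 0.5, (1, 1): 0.5}
# "Disintegrated" toy system: two independent coin-flip units.
independent = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}

print(mutual_information(integrated))   # 1.0 bit shared between the parts
print(mutual_information(independent))  # 0.0 bits beyond the parts
```

In the quote's terms, the first toy system is "more than the union of its parts," while the second is not; real Φ generalizes this idea by searching for the partition of the system that loses the least information.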


Neuroscientist Giulio Tononi’s research leads the way in efforts to quantify consciousness and to apply calculations that factor consciousness into computations.

Giulio Tononi is a neuroscientist and psychiatrist who holds the David P. White Chair in Sleep Medicine, as well as a Distinguished Chair in Consciousness Science, at the University of Wisconsin.

More on Integrated Information Theory,

Research paper on the subject,

This takes William Dembski’s work to a new level of research. The breakthrough of this study is that the ability to quantify consciousness into measurable units gives us yet one more way to compare the specified complexity of two different examples of design.

Quantifying specified complexity is important because it allows design theorists to compare one sample of design, in terms of CSI, with other samples. Evolution is a process in which a genome (one dataset sample) increases in specified complexity to a more sophisticated configuration, or degree of design. The only scientists who seem to care about this quantifying ability are those who work in fields related to synthetic biology, bioinformatics, and intelligent design.

With respect to ID, the ease of testing CSI improves dramatically once different quantities of complex specified information can be compared. This was a drawback in the earlier years of Dembski’s career.
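The basic quantifying step can be sketched in a few lines. This covers only the Shannon self-information component, −log2(p), not Dembski's full specified-complexity measure, and the sequence example is hypothetical:

```python
import math

def information_bits(p):
    """Shannon self-information of an outcome with probability p, in bits."""
    return -math.log2(p)

# Hypothetical example: one specific 100-symbol sequence drawn from a
# 20-symbol alphabet, assuming all sequences are equally likely.
p = (1 / 20) ** 100
print(information_bits(p))  # about 432.2 bits
```

For comparison, Dembski’s universal probability bound of 1 in 10^150 corresponds to roughly 500 bits; the specification side of CSI is a separate question this sketch does not address.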

Just recently, a quantum physicist by the name of Daegene Song commented on his research related to this topic.  Song asserts that strong Artificial Intelligence is key to answering fundamental brain science questions.  I agree with him.  However, Song goes on to conclude that consciousness does not compute, and never will.

Daegene Song (PRNewsFoto/Daegene Song)


Daegene Song obtained his Ph.D. in physics from the University of Oxford and now works at Chungbuk National University in Korea as an assistant professor. To learn more about Song’s research, see his published work: D. Song, Non-computability of Consciousness, NeuroQuantology, Volume 5, pages 382–391 (2007).

This latest report about Song’s comment that consciousness does not compute is misleading. Song is a quantum physicist doing research on quantum computing. The expertise required to comment on consciousness lies outside these fields; neuroscience is the field that researches consciousness. Song is not qualified to make the assertions he made, and I am not at all satisfied that he knows what consciousness is, or what he is talking about.

I have confidence in Song’s specialties, but knowledge regarding consciousness is not one of his strong suits. Until these areas of science include neuroscientists as part of the research team, the conclusions are meaningless.

Song cited only books, and most of them are outdated. The most recent source he referenced is from 2004. Song is the sole author of this paper. There are only nine bibliographical references, and only one of them is a neuroscience paper.

I grant Song credit that he at least referenced Giulio Tononi’s work, but the only actual science paper of Tononi’s he referenced is from 1998. I can understand needing to go back to 1998 to cite a paper published in the prestigious journal Science, but when this is the only neuroscience documentation in Song’s final analysis, it is not just poor scholarship, but a lack of relevant research altogether.

This is very substandard scholarship even for a paper that dates back to 2008, the last revision of his 2007 paper. We know far more about this subject now than we did in 2008.

Tononi does lead the way in quantifying consciousness, and in inspiring research in the field of neuroscience to actually apply the math and calculations. Based upon this paper by Song, we have no idea whether the unit of consciousness represented by Φ will apply to quantum computing or not. The paper provides no information beyond what we already knew; it is outdated.

Even if Song is correct, his opinion applies only to quantum computing. Quantum computing is more closely related to computer science than to quantum physics. Song’s unqualified, layman’s concept of what he thinks consciousness might be is inapplicable to actual research on the topic in neuroscience.

Until the correct team does this research right, with a more comprehensive study of updated sources, we will never know the correct conclusions of this report.

Without at least one neuroscientist on the research team, the study is a sham, and it would be highly unlikely to pass peer review in any reputable science journal, which this paper never did. NeuroQuantology is a very low-impact journal. According to the 2013 Journal Citation Reports, NeuroQuantology had an impact factor of 0.439, ranking it 240th out of 251 journals in the category “Neuroscience.”

The problem is that you cannot arrive at the conclusion that the MATH says NEUROSCIENCE will NEVER be able to achieve its GOAL, because that is a determination to be made by neuroscientists, not quantum physicists. It is irrelevant that consciousness is not produced by the brain. I hold that intelligence is not limited to being produced by a brain either. The approach Song used is not consistent with Tononi’s work, and is based upon outdated research and unnecessary variables.

To give you an idea of how futile Song’s opinion is, consider the Miller-Urey experiment. Song put in the ingredients of what he thinks consciousness would look like expressed mathematically, ran the math, and the numbers failed to compute. That means nothing.

We did that all the time in first-year engineering school. We kept trying to write programs until we finally got them to run. Song made an attempt in 2007, and then arrived at a false conclusion. Moreover, the conclusion he arrived at is so far removed from the actual advances being made in neuroscience today that it is nonsense to compare them. It would be like calling a dolphin a fish because it looks like one.

If the math doesn’t work, you keep running calcs until it does. That’s what Paul Davies is doing with origin-of-life studies at Arizona State University, based upon information theory and bioinformatics, the same field as William Dembski’s work. That’s what the string theorists are doing, too. You don’t quit and say it will never happen; that is not how science works.

It should be noted that Song never set out to vindicate efforts to factor consciousness into math calculations. He approached the task from an artificial intelligence application in quantum computing. The result Song desired was to determine whether it is possible to develop some kind of AI software that inputs artificial consciousness into such programming. His work ran into what he perceives to be a dead end. Maybe it is a dead end; we don’t know yet.

It is a dead end as far as Song is concerned, but he based this conclusion on incomplete data from the field of neuroscience. I don’t criticize him for this; neuroscience is not Song’s area of expertise.

Consciousness is not Song’s specialty. He had to reference work done in neuroscience by expert researcher G. Tononi, who still leads the work in quantifying consciousness to this very day. Song’s research was in 2008. There have been new discoveries and breakthroughs by G. Tononi since then. As I mentioned above, this research is far from over and complete. And, the most important point of all is that these two lines of research are so vastly different they cannot be compared.

Song found his efforts ran into limits, but those limits apply only to quantum computing. That doesn’t stop new predictions from being modified and researched. And nothing about Song falsified the subsequent research by Tononi, which contradicts the sensational title of the article, that “Consciousness does not compute and never will.” The title appears to be a typical journalistic gimmick to capture the attention of readers and draw them to the article. I doubt Song intended to represent the extreme opinion conveyed in that title.

I have several illustrations about math computations to make my point: the math that supports the Big Bang, the math that supports string theory, the math that supports origin-of-life predictions, and the math that supports Einstein’s theory of gravity.

It took decades for the math to be resolved in support of the Big Bang. While string theory has been around for many years, the theorists are still working on the math to this very day.

Origin-of-life conjectures have also been around for decades, and scientists still have not been able to get the math to work, even though there most likely is a solution. If the math doesn’t run, then no mechanism is discoverable. If the math eventually does work for one of the models, then that would finally yield evidence that a mechanism exists and is discoverable.

Newton’s math regarding gravity was falsified by Einstein. It is possible Einstein’s math might be falsified yet again in the future. We don’t know yet whether those calcs apply inside black holes.

Regardless of which analogy I use to compare the math calcs supporting a scientific conjecture, each instance is the same: it took scores of years to perfect the math, and working the computations is endless, because new scientific discoveries add more variables to the equations. These are rigorous computations, often involving several pages of proofs, theorems, etc. Not all the formulas and theorems are absolute.

Song appears to oversimplify these vastly different research areas, involving entirely different mathematical approaches and calcs, to make it sound as if there is one and only one possible short and sweet computation to solve. His comments could be taken to suggest that the moment this one solution to the math problem is obtained, there is nothing left to do, as if it were some ultimate, invincible, irrefutable conclusion. That is a science stopper. That is not how science works. Scientists make new predictions, and investigate new methods to surmount barriers and hurdles.

If the arch bridge cannot be built, then try beams. If beams don’t work, then try cables. If a cable bridge cannot be built, then try a trestle bridge. If that doesn’t work, perhaps a suspension bridge will work. Each bridge has different engineering calcs; each approach changes the math entirely.

Someone who takes Song’s comments to falsify Tononi makes it sound as if I am saying that if you add 2 + 2 + 3 long enough, then maybe someday, after the millionth try, you might get a different answer than 7. I never represented any absurdity of the kind. There are hundreds if not thousands of ways to express consciousness mathematically. When one treatment doesn’t fit, try a different approach, which has an entirely different application and computation to resolve. I never suggested solving the same identical math problem over and over.

Song’s paper was last revised in 2008. Most of Tononi’s breakthrough discoveries in quantifying consciousness are more recent research. I made this point several times above already. You can’t have an older paper refute a more recent paper; it’s the other way around.

Although I already linked to it above, here is the Tononi paper again. Keep in mind that the only neuroscience paper Song cited was Tononi’s work from 1998. And, I repeat, Song’s paper was published in 2007. This paper from G. Tononi is from 2012.

While some people might conclude otherwise, Mr. Song in no way ran into a roadblock for computing consciousness. His computation was one of an infinite number of approaches to consciousness in the field of artificial intelligence. Moreover, his research has nothing to do with the advances made in neuroscience. This is an argument about apples and oranges. These different fields have different goals, objectives, methodologies, and areas of expertise.

Song’s work attempted to perform a computation, one of an infinite number of approaches to MIMIC consciousness in the field of artificial intelligence. That objective is an ENTIRELY DIFFERENT GOAL than the areas of research in neuroscience that investigate the role of neurons and many other topics related to consciousness and intelligence.

On ID-Theoretical Biology, hardly a day goes by without posts connected to these subjects. Yes, AI is an extremely important field of study of interest to Intelligent Design, but AI is an applied science. The theories it is based upon come from other academic disciplines such as computer science.

Song’s focus was to achieve some watershed breakthrough for AI in the area of quantum computing. That is one isolated direction among countless leads to advance these study areas. To choose one road and arrive at a dead end is meaningless. Just turn around, go back to the intersection, and follow a new direction of research. That is how science works.


I see ID Theory, as described and defined by the Discovery Institute, to be an entirely fit scientific theory which, although falsifiable, has not been falsified in 19 years.

I could assert several plausible arguments for why I hold ID Theory to be legitimate science. But the relevant argument I adhere to, specifically, is the DESIGN-inspired prediction, based upon ID Theory, that an INTELLIGENT CAUSE is a BETTER EXPLANATION for the origin and diversity of life than natural selection. Is that true or false? Is it testable or not? Many conclude intelligent design is not testable. I say yes, it is testable. I assert this prediction based upon advances being made in neuroscience, where a workable and testable definition of intelligence is surfacing in the SCIENTIFIC LITERATURE thanks to that field.

I insist that these processes of natural genetic engineering and cell cognition are intelligent-like, and that it is just a matter of time before studies in fields related to neuroscience become the ultimate deciding factor in determining whether the multiple coordinated mutation events that change allele frequencies in biological populations, and the processes that cause them, are being performed by the work of an identifiable and observable intelligent agency. This prediction remains to be seen. More research is required.

Just as INTELLIGENCE itself is a study of interest to design theorists so likewise is consciousness. These are not the only target topics of study. Add to the list other properties associated with intelligence aside from consciousness. Add decision-making, communications networks, problem-solving, self-awareness, cognitive activity, and the list goes on. 

It is the field of neuroscience that leads the way in these study areas. There are about 250 science journals related to the field of neuroscience. It is a significant and growing area of scientific research.

I just took one prediction out of countless thousands. I simply chose as an example the work of Giulio Tononi, who has made promising gains in the area of quantifying consciousness.

I do not have a problem with anyone being a fan of ID because they feel ID validates their personal theology. But what ID means philosophically, beyond the scope of science, has nothing to do with the contribution ID makes to actual science.

What I love about ID Theory is that there really does exist a condition called irreducible complexity, and no scientist can take credit for the discovery because Michael Behe beat them to it. There really does exist a mathematical definition of design, but no information theorist or bioinformatics expert can take credit for it because William Dembski already scooped them.


Mr. Song cited Tononi 1998 in order to support the conclusions in Song 2007, a paper published in NeuroQuantology. Tononi’s work is published through at least 2012. As such, Song never falsified Tononi. This is a very much ALIVE and ACTIVE research field.


Eight Reasons Why Intelligent Design is Science

Here is a list of eight bullet topics, each a testable line of evidence that would support Intelligent Design Theory:

1. Complex Specified Information (CSI), aka specified complexity; William Dembski’s No Free Lunch theorems.


Sand Sculpture

A sand sculpture is a good example of CSI. At some juncture, there is enough information evident from an event that we can deduce intelligent design. It is a safe conclusion that this sand formation is not the product of wind.

2. Michael Behe’s testable predictions regarding Irreducible Complexity. Molecular biologist Jonathan McLatchie also wrote an essay on this subject. An irreducibly complex system is one in which (a) the removal of a protein renders the molecular machine inoperable, and (b) the biochemical structure has no stepwise evolutionary pathway. Michael Behe further describes the condition:

“An irreducibly complex evolutionary pathway is one that contains one or more unselected steps (that is, one or more necessary-but-unselected mutations). The degree of irreducible complexity is the number of unselected steps in the pathway.” (A Response to Critics of Darwin’s Black Box, by Michael Behe, PCID, Volume 1.1, January February March, 2002.  Source:

3. Quantum Biology.  Consider this paper.

4. The work of James Shapiro. One of his discoveries is Natural Genetic Engineering (NGE). Shapiro wrote this essay to summarize NGE here. This is a more comprehensive paper on the subject.


Origin of Life

Origin of Life requires many molecular machines to be installed and running before the existence of the first living and reproducing cell.

5. Origin of Life research based upon Information Theory.  An example is the work done by Paul Davies at Arizona State University.  Here is the press release for their research.  This is their paper, “Algorithmic Origins of Life.” It is an approach to abiogenesis, but from an Information Theory approach instead of chemical evolution.  Live Science also reported on the study.

6. A special class of natural genetic engineering is cell cognition, when the genome of a population is modified by an intelligent cause instead of an unguided process like natural selection or horizontal gene transfer. Cell cognition refers to the decision-making processes that occur at the cellular level.

7. The work of William Dembski in the field of bioinformatics.


The image is a cilium. It was predicted by Michael Behe in 1996 to be irreducibly complex, and the product of multiple coordinated mutations that occurred simultaneously.

8. Design-inspired predictions based upon there being multiple simultaneous mutation events as opposed to gradual successive modifications one mutation at a time.  Design theorists call these multiple coordinated events.  These break the standard successive stepwise modifications that natural selection is based upon.  Some multiple coordinated mutation events result in irreducible complex molecular machinery.

Intelligent Design is a scientific theory, which the Discovery Institute states, “holds that certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection.”  Source:




This essay sets out a case in favor of the scientific theory of universal common ancestry. One of the most dubious challenges to universal common descent I have reviewed is Takahiro Yonezawa and Masami Hasegawa, “Some Problems in Proving the Existence of the Universal Common Ancestor of Life on Earth,” The Scientific World Journal, 2011. While there is nothing wrong with the data and points raised in this article, it is not the objective of science to “prove” a theory. Also, identifying the universal common ancestor is not the focus of the theory of universal common descent.

The scientific method is based upon testing a falsifiable hypothesis.  In science, the researchers do not experiment to “prove” theories, they test an hypothesis in order to falsify the prediction.  All we can do is continue to test gravity to determine if Einstein’s predictions were correct. We can never “prove” Einstein was right because his equations might not work everywhere in the universe, such as inside a black hole.

When an experiment fails to falsify the hypothesis, all we can conclude is that the theory is confirmed one more time. But, the theory is never ultimately proven. If it were possible to prove a theory to be ultimately true, like a law of physics, then it is not a scientific theory because a theory or hypothesis must be falsifiable.

The theory of UCD is challenged with formal research by multiple biology and biochemistry departments around the world. There is a substantial amount of scientific literature in this area of research. The fact that after all this time the proposition of UCD has not been falsified is a persuasive case that the claim has merit. That’s all science can do.

I make this point because when we explore controversial topics far too often some individuals make erroneous objections, such as requiring empirical data to “prove” some conjecture.  That is not how science works.  All the scientific method can do is demonstrate a prediction is false, but science can never prove a theory to be absolutely true.

Having said that, there are scientists who nevertheless attempt to construct a complete Tree of Life. This is done in an ambitious attempt to “prove” the theory true, even to the fanciful hope of identifying the actual universal common ancestor. Many of the attacks on the theory of common descent are criticisms noting the incompleteness of the data. But an incomplete tree does not falsify the theory.

This is important to understand because there is no attempt being made here to prove universal common descent (UCD). All that will be shown here is that UCD as a scientific theory has not been falsified, and remains an entirely solid theory regardless of whether UCD is actually true or not.


What would it take to prove universal common descent false?

Common ancestry would be falsified if we discovered a form of life that was not related to all other life forms. For example, finding a life form that does not have the nucleic acids (DNA and RNA) would falsify the theory. Other ways to falsify Universal Common Descent would be:

• If someone found a unicorn, that would falsify universal common descent.

• Finding a Precambrian rabbit would likely falsify universal common descent.

• If it could be shown that mutations are not inherited by successive generations.

One common misunderstanding people have about science is the idea that science somehow proves certain predictions to be correct.

All life forms fall within a nested hierarchy. Of the hundreds of thousands of specimens that have been tested, every single one falls within the nested hierarchy, or its phylogenetic tree is still unknown and not yet sequenced.


Here is just the tip of the iceberg of science papers that indicate the validity of UCD:

• Steel, Mike; Penny, David (2010). “Origins of life: Common ancestry put to the test“. Nature 465 (7295): 168–9.

• Theobald, Douglas L. (13 May 2010). “A formal test of the theory of universal common ancestry.” Nature 465 (7295): 219–222.

• Glansdorff, N; Xu, Y; Labedan, B (2008). “The last universal common ancestor: emergence, constitution and genetic legacy of an elusive forerunner.” Biology direct 3 (1): 29.

• Céline Brochier, Eric Bapteste, David Moreira and Hervé Philippe, “Eubacterial phylogeny based on translational apparatus proteins.” TRENDS in Genetics Vol. 18 No. 1, January 2002.

• Baldauf, S. L., Roger, A. J., Wenk-Siefert, I., and Doolittle, W. F. (2000) “A kingdom-level phylogeny of eukaryotes based on combined protein data.” Science 290: 972-7.

• Brown, J. R., Douady, C. J., Italia, M. J., Marshall, W. E., and Stanhope, M. J. (2001) “Universal trees based on large combined protein sequence data sets.” Nature Genetics 28: 281-285.

The above are often cited in support of Universal Common Descent. For anyone to suggest these papers have been overturned or are outdated requires documentation.


Darwin’s First Sketch of a Cladogram


A logical prediction inspired by common descent is that the relationships among all life forms will resemble a tree, called the Tree of Life. Evolution will then specifically generate unique, nested, hierarchical patterns in a branching scheme. Most existing species can be organized rather easily into a nested hierarchical classification.

Figure 1. Parts of a Phylogenetic Tree

Figure 1 displays the various parts of a phylogenetic tree.  Nodes are where branches meet, and represent the common ancestor of all taxa beyond the node. Any life form that has reproduced has a node that will fit properly onto the phylogenetic tree. If two taxa share a closer node than either share with a third taxon, then they share a more recent ancestor.
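The node logic just described can be sketched as a toy program. The tree and taxon names here are illustrative, not a real published phylogeny; the sketch assumes a simple rooted tree stored as parent pointers:

```python
# A toy rooted tree as parent pointers; the taxa and groupings are
# illustrative placeholders, not a published phylogeny.
parent = {
    "human": "primates", "chimp": "primates",
    "primates": "mammals", "mouse": "mammals",
    "mammals": "vertebrates", "zebrafish": "vertebrates",
    "vertebrates": None,
}

def ancestors(taxon):
    """Chain of nodes from a taxon back to the root."""
    path = []
    node = parent[taxon]
    while node is not None:
        path.append(node)
        node = parent[node]
    return path

def mrca(a, b):
    """Most recent common ancestor: the first shared node on the two paths."""
    seen = set(ancestors(a))
    for node in ancestors(b):
        if node in seen:
            return node
    return None

print(mrca("human", "chimp"))      # primates
print(mrca("human", "zebrafish"))  # vertebrates
```

This illustrates the rule in the figure: human and chimp share a closer node (primates) than either shares with zebrafish, so they share a more recent ancestor.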

Falsifying Common Descent:

It would be very problematic if many species were found that combined characteristics of different nested groupings. Some nonvascular plants could have seeds or flowers, like vascular plants, but they do not. Gymnosperms (e.g. conifers or pines) occasionally could be found with flowers, but they never are. Non-seed plants, like ferns, could be found with woody stems; however, only some angiosperms have woody stems.

Conceivably, some birds could have mammary glands or hair; some mammals could have feathers (they are an excellent means of insulation). Certain fish or amphibians could have differentiated or cusped teeth, but these are only characteristics of mammals.

A mix and match of characters would make it extremely difficult to objectively organize species into nested hierarchies. Unlike organisms, cars do have a mix and match of characters, and this is precisely why a nested hierarchy does not flow naturally from classification of cars.

Figure 2. Sample Cladogram

In Figure 2, we see a sample phylogenetic tree. All a scientist has to do is find a life form that does not fit the hierarchical scheme in proper order. We can reasonably expect that yeasts will not secrete maple syrup. This model gives us the logical basis to predict that reptiles will not have mammary-like glands. Plants won’t grow eyes or other animal-like organs. Crocs won’t grow beaver-like teeth. Humans will not have gills or tails.

Reptiles will not have external skeletons. Monkeys will not have a marsupial-like pouch. Amphibian legs will not grow octopus-like suction cups. Lizards will not produce apple-like seeds. Iguanas will not exhibit bird feathers, and on it goes.

The phylogenetic tree provides a basis to falsify common descent if, for example, rose bushes grow peach-like fuzz or sponges display millipede-like legs. We will not find any unicorns or “crocoducks.” There should never be found any genetic sequences in a starfish that would produce spider-like fangs. An event such as a whale developing shark-like fins would falsify common descent.

While these are all ludicrous examples, in the sense that such phenomena would seemingly be impossible, the point is that any life form found with even the slightest cross-phylum, cross-family, or cross-genus body type would instantly falsify common descent. And it doesn’t have to be a known physical characteristic I just listed; it could be a skeletal change in the number of digits, ribs, or configurations. There are infinite possibilities: if such a life form proved unclassifiable, the theory of universal common descent would be falsified.

The falsification doesn’t have to be anything as dramatic as these examples. It could be something like when NASA thought it had discovered a new form of life in what was thought to be an arsenic-based bacterium at California’s Mono Lake. That would have been a good candidate for checking whether the life form had entirely changed its genetic code. Another example: according to UCD, none of the thousands of new and previously unknown insects constantly being discovered will have non-nucleic-acid genomes.

Certainly, if UCD is invalid, there must be life forms that acquire their characteristics apart from their parents, and if so, their DNA will expose the anomaly. It is very clear when reviewing phylogenies that there is an unmistakable hierarchical structure indicating ancestral lineage. And all phylogenies are like this, without exception. All I ask is for one phylogeny to be submitted showing a life form that has no parents, or whose offspring did not inherit its traits. If such were the case, then there should be evidence of it.


For the methodology used to determine nested hierarchies today, the math gets complicated in order to ensure that the results are accurate. As a discipline, phylogenetics is being transformed by a flood of molecular data. This data allows broad questions to be asked about the history of life, but also presents difficult statistical and computational problems. Bayesian inference of phylogeny brings a new perspective to a number of outstanding issues in evolutionary biology, including the analysis of large phylogenetic trees, complex evolutionary models, and the detection of the footprint of natural selection in DNA sequences.

As this discipline continues to be applied to molecular phylogenies, the prediction is continually confirmed, not falsified. All it would take is one occurrence of a mix-and-match sequence falling outside a nested hierarchy, and evolutionary theory would be falsified.
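To make the idea concrete, here is a toy sketch, not the Bayesian machinery described above, of how pairwise molecular distances naturally yield a nested, tree-like grouping. The four sequences and taxon labels are invented for illustration only:

```python
# Illustrative sketch: a toy UPGMA-style clustering on made-up DNA
# sequences, showing how pairwise molecular distances produce a nested
# hierarchy. Real phylogenetics uses Bayesian/likelihood methods on
# aligned genomes; everything here is a hypothetical stand-in.

def hamming(a, b):
    """Count mismatched sites between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def upgma(seqs):
    """Greedily merge the two closest clusters until one tree remains.
    Returns a nested tuple; the nesting depth is the hierarchy."""
    clusters = {name: (name, [seq]) for name, seq in seqs.items()}

    def dist(c1, c2):
        # average pairwise distance between the two clusters' members
        pairs = [(a, b) for a in c1[1] for b in c2[1]]
        return sum(hamming(a, b) for a, b in pairs) / len(pairs)

    while len(clusters) > 1:
        names = list(clusters)
        i, j = min(((a, b) for a in names for b in names if a < b),
                   key=lambda p: dist(clusters[p[0]], clusters[p[1]]))
        a, b = clusters.pop(i), clusters.pop(j)
        clusters[f"({i},{j})"] = ((a[0], b[0]), a[1] + b[1])
    return next(iter(clusters.values()))[0]

# Hypothetical 12-base sequences for four made-up taxa:
toy = {
    "A": "ACGTACGTACGT",
    "B": "ACGTACGTACGA",   # 1 change from A
    "C": "ACGTTCGAACGA",   # farther from A and B
    "D": "TGCATCGAACGA",   # farthest from all
}
print(upgma(toy))  # → ((('A', 'B'), 'C'), 'D')
```

The nesting falls out of the distances alone: A and B group first, C joins that pair, and D attaches last, which is the kind of objective structure the text says molecular data keeps producing.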


Of course Charles Darwin’s hypothesis of UCD has been questioned. All scientific predictions are supposed to be challenged. There’s a name for it: an experiment. The object is to falsify the hypothesis by testing it. If the hypothesis holds up, then it is confirmed, but never proven. The best science gives you is falsification. UCD has not been falsified; instead, it has shown itself to be extremely reliable.

When a hypothesis is confirmed after repeated experimentation, the scientific community might upgrade it to the status of a scientific theory. A scientific theory is a hypothesis that, having been continuously affirmed by substantial repeated experiments, has significant explanatory power for understanding phenomena.

Here’s another paper in support of UCD: Schenk, MF; Szendro, IG; Krug, J; de Visser, JA (Jun 2012), “Quantifying the adaptive potential of an antibiotic resistance enzyme.” Many agents of human disease, such as viruses, bacteria, fungi, and cancers, are not static phenomena but are constantly evolving. These pathogens evolve to resist host immune defences as well as pharmaceutical drugs. (A similar problem occurs in agriculture with pesticides.)

This Schenk 2012 paper analyzes whether pathogens are evolving faster than available antibiotics, and attempts to make better predictions of the evolvability of human pathogens in order to devise strategies to slow or circumvent destructive change at the molecular level. Success in this field of study is expected to save lives.

Antibiotics are an example of the necessity to apply phylogenetics in order to implement medical treatments and manufacture pharmaceutical products. Another application is demonstrating irreducible complexity. That is established by studying homologies of different phylogenies to determine whether two systems share a common ancestor. If one has no evolutionary pathway to a common ancestor, then it might be a case for irreducible complexity.

Another application is forensic science, where DNA is used to solve crimes. One case involved a murder suspect who was found guilty because he parked his truck under a tree. A witness saw the truck at the time the crime took place. The suspect was linked to the crime scene because DNA from seeds that fell from that tree into the bed of the truck distinguished it from every other tree in the world.

DNA allows us to positively determine ancestors, and the margin for error is infinitesimally small.


The term “nested” refers to confirmation that the specimen being examined is properly placed in the hierarchy on both sides of reproduction, that is, both in relation to its ancestors and to its progeny. The term “twin” refers to the fact that nested hierarchy can be determined by both (1) genotype (molecular and genome-sequencing analysis) and (2) phenotype (visible morphological variation).

We can ask these four questions:

1. Does the specimen fit in a phenotype hierarchy on the ancestral side? Yes or no?

2. Does the specimen fit in a phenotype hierarchy relative to its offspring? Yes or no?

If both answers to 1 and 2 are yes, then nested hierarchy re phenotype is established.

3. Does the specimen fit in a genotype hierarchy on the ancestral side? Yes or no?

4. Does the specimen fit in a genotype hierarchy relative to its offspring? Yes or no?

If both answers to 3 and 4 are yes, then nested hierarchy re genotype is established.

All four (4) answers should always be yes, every time, without exception. But the key is genotype (molecular), because the DNA doesn’t lie. We cannot be certain from visible morphological phenotype traits alone. But once we sequence the genome, there is no uncertainty remaining.
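The four-question checklist above amounts to a pair of simple conjunctions. A minimal sketch, with a hypothetical Specimen record standing in for real genotype and phenotype comparisons:

```python
# Minimal sketch of the four-question checklist. The Specimen fields are
# hypothetical stand-ins for real ancestor/offspring comparisons.

from dataclasses import dataclass

@dataclass
class Specimen:
    phenotype_fits_ancestors: bool   # question 1
    phenotype_fits_offspring: bool   # question 2
    genotype_fits_ancestors: bool    # question 3
    genotype_fits_offspring: bool    # question 4

def nested_hierarchy(s):
    """Both phenotype and genotype nesting must hold on both sides."""
    phenotype = s.phenotype_fits_ancestors and s.phenotype_fits_offspring
    genotype = s.genotype_fits_ancestors and s.genotype_fits_offspring
    return {"phenotype": phenotype, "genotype": genotype,
            "conforms": phenotype and genotype}

print(nested_hierarchy(Specimen(True, True, True, True)))
# A single False on any of the four questions flags a non-conforming specimen.
```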



A clade is essentially the line that begins at the trunk of the analogous tree, which for common descent would be the Tree of Life, and works its way from branches and limbs to stems, and then to a leaf at the extremity of the branching system (representing a species). A taxon is a category or group. The trunk would be a taxon. The lower branches are a taxon. The higher limbs are a different taxon. It’s a rough analogy, but that’s the gist of it.
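The tree analogy can be made concrete by representing the branching system as nested groups; a clade is then just a node together with everything growing out of it. A sketch with placeholder taxon names, not a real phylogeny:

```python
# Sketch: the tree analogy as a nested dict, with a helper that extracts
# a clade (an ancestor together with all of its descendants). The taxon
# names are hypothetical placeholders.

tree = {
    "root": {
        "bacteria": {},
        "eukaryotes": {
            "plants": {},
            "animals": {
                "vertebrates": {},
                "invertebrates": {},
            },
        },
    },
}

def clade(node_name, subtree):
    """Return the list of taxa in the clade rooted at node_name."""
    for name, children in subtree.items():
        if name == node_name:
            members = [name]
            stack = [children]
            while stack:             # walk every descendant branch
                kids = stack.pop()
                for k, v in kids.items():
                    members.append(k)
                    stack.append(v)
            return members
        found = clade(node_name, children)
        if found:
            return found
    return None

print(clade("animals", tree))  # → ['animals', 'vertebrates', 'invertebrates']
```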


Remember that nucleic acids (DNA) are the same for all life forms, so that alone is a case for the fact that common descent goes all the way back to a single cell.

Mere similarity between organisms is not enough to support UCD. A nested classification pattern produced by a branching evolutionary process is much more specific than simple similarity. A friend of mine recently showed me her challenge against UCD using a picture of a “phylogeny” of sports equipment:

Cladogram of sports balls

I pointed out to her that her argument is a false analogy. Classifying physical items will not result in an objective nested hierarchy.

For example, it is impossible to objectively classify into nested hierarchies the elements of the Periodic Table, the planets of our Solar System, books in a library, cars, boats, furniture, buildings, or any other inanimate objects. Non-life forms do not reproduce, and therefore do not pass inherited traits forward from ancestors.

The point of using the balls from popular sports was to argue that it is trivial to classify anything subjectively in a hierarchical manner. The illustration of the sports balls showed that such classification is entirely subjective. But this is not true of biological heredity. We KNOW from DNA whether or not a life form is the parent of another life form!

With inanimate objects, like cars, hierarchical classification is possible, but it is subjective rather than objective. Perhaps the cars would be organized by color, then by manufacturer. Or they could be classified by year or size, then by color. So non-living items cannot be classified into an objective hierarchy, because any such system is entirely subjective. But life forms, and languages, are different.

In contrast to cars, human languages do have common ancestors and are derived by descent with modification. Nobody would reasonably argue that Spanish should be categorized with German instead of with Portuguese. Like life forms, languages fall into objective nested hierarchies. Because sports equipment lacks any such inheritance, a cladistic analysis of it will not produce a unique, consistent, well-supported tree that displays nested hierarchies.

Carl Linnaeus, the famous Swedish botanist, physician, and zoologist, is known as the man who laid the foundations for the modern biological naming scheme of binomial nomenclature. When Linnaeus invented the classification system for biology, he discovered the objective hierarchical classification of living organisms. He is often called the father of taxonomy. Linnaeus also tried to classify rocks and minerals hierarchically, but his efforts failed because any nested hierarchy of non-biological items is entirely subjective.

“DNA doesn’t lie.”

Hierarchical classifications for inanimate objects don’t work for the very reason that unlike organisms, rocks and minerals do not evolve by descent with modification from common ancestors. It is this inheritance of traits that provides an objective way to classify life forms, and it is nearly impossible for the results to be corrupted by humans because DNA doesn’t lie.

To be clear: testing nested hierarchy for life forms works, and it confirms common descent. There is a great deal of scientific literature on this topic, and it all supports common descent and Darwin’s predictions. Again, there is no design-inspired prediction for why life forms all conform to nested hierarchy. There is only one reason why they do: universal common ancestry.

The point with languages is that they can be classified objectively into nested hierarchies because they are inherited and passed on by descent with modification. No one is claiming that languages have a universal common ancestor; even if they do, it is beside the point.

In Kiyotaka Takishita et al. (2011), “Lateral transfer of tetrahymanol-synthesizing genes has allowed multiple diverse eukaryote lineages to independently adapt to environments without oxygen,” published in Biology Direct, the phylogenies of unicellular eukaryotes are examined to ascertain how they acquired tetrahymanol-synthesizing genes from bacteria in low-oxygen environments. In order to answer the question, the researchers had to construct a detailed cladogram for their analysis. My point here is that DNA doesn’t lie. All life forms fall within a nested hierarchy, and no paper exists in the scientific literature that found a life form that does not conform to a nested hierarchy.

Cladogram

The prediction in this instance is that if evolution (as first observed by Charles Darwin) occurs, then all life might have descended from a common ancestor. This is not only a hypothesis, but the basis for the scientific theory of Universal Common Descent (UCD).

There is only one way I know of to falsify the theory of UCD, and that is to produce a life form that does not conform to nested hierarchy. All it takes is one.


One person I recently spoke to regarding this issue suggested that a comb jelly appears to defy common descent. He presented me with a paper published in Nature in support of his view, entitled “The ctenophore genome and the evolutionary origins of neural systems” (Leonid L. Moroz et al., 2014). Comb jellies might appear to be misclassified and not conform to a hierarchy, but phylogenetically they fit just fine.

There does seem to be an illusion, going back to the early Cambrian period, that the phenotypes of life forms do not fall within a nested hierarchy. But their genotypes still do. While the extremely different body types that emerge in the Cambrian might visually suggest nonconformance to a nested hierarchy, the molecular analysis tells a much different story and confirms that they do conform.

To oppose my position, all that is necessary is for someone to produce one solitary paper published in a science journal showing the claim of UCD to be false. Once a molecular analysis is done and the phylogenies are charted on a cladogram, all life forms, I repeat, all life forms, conform to nested hierarchies, and there is not one single exception. If there is, I am not aware of the paper.

Regarding the comb jelly discussed in Moroz (2014): if someone wishes to submit that the comb jelly does not fit within a nested hierarchy, there is no content in this paper that supports that view.

For example, from Figure 3 in the article:

“Predicted scope of gene loss (blue numbers; for example, −4,952 in Placozoa) from the common metazoan ancestor. Red and green numbers indicate genes shared between bilaterians and ctenophores (7,771), as well as between ctenophores and other eukaryotic lineages sister to animals, respectively. Text on tree indicates emergence of complex animal traits and gene families.”

The authors concluded common ancestry and ascribed their surprise regarding the comb jelly to convergence, which does nothing to undermine common ancestry.

The article refers to and assumes common metazoan ancestry. The common ancestry of the comb jelly is never once questioned in the paper; the article only ascribes the new so-called genetic blueprint to convergence. The paper both refers to and assumes common ancestry several times, and even draws a cladogram for our convenience so that its phylogeny, which is based upon common descent, can be more readily understood.

The paper repeatedly affirms the common ancestry of the comb jelly, and only promotes a case for convergent evolution. It is an excellent study of phylogeny of the comb jelly. There is nothing about the comb jelly that defies nested hierarchy. If there was, common descent would be falsified.

Universal Common Descent (UCD) is the scientific theory that all life forms descended from a single common ancestor. The theory is falsified by demonstrating that the node (Figure 1) of any life form, upon examination of its phylogeny, does not fit within an objective nested hierarchy based upon inheritance of traits from one generation to the next via successive modifications. If someone desires to falsify UCD, all they need to do is present the paper that identifies such a life form. Of course, if such a paper existed, its author would be famous.

Any other evidence, regardless of how much merit it might have in indicating serious issues with UCD, does nothing to falsify UCD. If this claim is challenged, please (a) explain to me why, and (b) show me the scientific literature that confirms the assertion.


One paper that is often cited is W. Ford Doolittle, “Phylogenetic Classification and the Universal Tree,” Science, 25 June 1999, hereafter Doolittle (1999). I already cited Baldauf, S. L., Roger, A. J., Wenk-Siefert, I., and Doolittle, W. F. (2000) above. Doolittle is very optimistic about common descent, and does nothing to discourage its falsification. In fact, the whole point of Doolittle’s work is to improve the methodology so that future experimentation increases the reliability of the results.

In Figure 3 of the paper, Doolittle presents a drawing of the problems that arise during the early stages of the emergence of life:

Reticulated tree

Doolittle (1999) fully discusses the problems posed by lateral gene transfer (LGT) and how it distorts the earlier history of life. But once the LGT is accounted for, the rest of the tree branches off as would be expected.

Taxonomists have identified some 25 variant genetic codes, each with its own operating system, so to speak, distributed across the tree of life. Many of them are mitochondrial codes, and they are non-standard relative to other clades in the phylogenetic tree of life.

The question is, do any of these 25 non-standard codes weaken the claim for a common ancestor for all life on earth? The answer would be no because the existence of non-standard codes offers no support for a ‘multiple origins’ view of life on earth.

Lineages that exhibit these 25 “variants,” as they are also often called, are clearly and unambiguously related to organisms that use the original universal code, which traces back to the hypothetical LUCA. The variant lineages are distributed as small “twigs” within the evolutionary tree of life. There is a diagram of this in my essay; I will provide it below for your convenience.

Anyone is welcome to disagree, but to do so requires the inference that, for example, certain groups of ciliates evolved entirely separately from the rest of life, including other types of ciliates. The hypothesis that the 25 variant codes are originally unique and independent of a LUCA is pure conjecture, and there is no paper I am aware of that supports it. There are common-descent-denying creationists who argue this is so, but the claim is untenable and absent from the scientific literature.

Although correct, the criticism that the data breaks down the tree does nothing to falsify universal common descent.  In order to falsify UCD one must show that a life form exists that does not conform to a nested hierarchy.  

The fact that there are gaps in the tree, or that the tree is incomplete, or that there is missing phylogenetic information, or that there are other methodological problems to be solved, does not change the fact that the theory remains falsifiable. And I already submitted the simple criterion for falsification, which has nothing to do with how completely one can construct the Tree of Life.

The abstract provides an optimistic summary of the findings in Doolittle 1999:

“Molecular phylogeneticists will have failed to find the “true tree,” not because their methods are inadequate or because they have chosen the wrong genes, but because the history of life cannot properly be represented as a tree. However, taxonomies based on molecular sequences will remain indispensable, and understanding of the evolutionary process will ultimately be enriched, not impoverished.”

There are many challenges to universal common descent, but to date no life form has been found that defies conforming to nested hierarchy. Some of the challenges to common descent relate to the period when life first emerged, such as this 2006 paper published in Genome Biology, authored by Tal Dagan and William Martin, entitled “The Tree of One Percent.”

Similar problems are addressed in Doolittle 2006. The paper reads:

“However, there is no independent evidence that the natural order is an inclusive hierarchy, and incorporation of prokaryotes into the TOL is especially problematic. The only data sets from which we might construct a universal hierarchy including prokaryotes, the sequences of genes, often disagree and can seldom be proven to agree. Hierarchical structure can always be imposed on or extracted from such data sets by algorithms designed to do so, but at its base the universal TOL rests on an unproven assumption about pattern that, given what we know about process, is unlikely to be broadly true”

That paper does discuss hierarchy at length, but there is nothing in it indicating that its findings falsify common descent. The article essentially makes the same points I made above when I explained the difference between a subjective nested hierarchy and an objective nested hierarchy in reference to the hierarchy of sports equipment. This paper actually supports common descent.


As a scientific theory, UCD is tested because that is what we’re supposed to do in science. We’re supposed to test theories. Of course UCD is going to be tested. Of course UCD is going to be challenged. Of course UCD is going to have some serious issues that are researched, analyzed, and discussed in the scientific literature. But, that doesn’t mean that UCD was falsified.

This information should not alarm anyone who favors the scientific theory of intelligent design (ID). ID scientists like Michael Behe accept common descent. I have no problem with it, and it really doesn’t have much bearing on ID one way or the other. Since the paleontologists, taxonomists, and molecular biologists who specialize in studying phylogenies accept universal common descent as confirmed, not falsified, I have very little difficulty concurring. That doesn’t mean I am unaware of some of the weaknesses of the conjecture of common descent.



Intelligent Design is defined by the Discovery Institute as:


The classic definition of ID Theory employs the term, “intelligent cause.” Upon studying William Dembski’s work, which defines the ID Theory understanding of “intelligent cause” using Information Theory and mathematical theorems, I rephrased the term “intelligent cause” to be “artificial intervention,” and have written extensively on the subject for why it’s a better term.

Both terms are synonymous, however phrasing the term the way I do helps the reader to more readily understand the theory of intelligent design in the context of scientific reasoning.  In his book, “The Design Inference” (1998), Dembski shows how design = specified complexity = complex specified information.  In “No Free Lunch” (2002), he expands upon the role of “intelligence.”

The idea of “intelligence” is little more than the default word for something that is other than a product of known natural processes. Design theorists predict there are additional discoveries yet to be made of mechanisms for design that supplement and work in conjunction with evolution. Another term, meaning just the opposite of natural selection, is artificial selection.

There are two kinds of selection, natural selection and artificial selection.


Charles Darwin, famous for his book “On the Origin of Species,” wrote about the difference between natural selection and artificial selection in other literature on dog breeding. It was Darwin who coined the term “natural selection.” Observing dog breeding, he recognized that breeders carefully selected dogs with certain traits to mate with certain others in order to enhance favorable characteristics for winning dog shows. Thirteen years after Origin of Species, Darwin also wrote “The Expression of the Emotions in Man and Animals.” The illustrations he used of dogs can be viewed here.

I wrote an essay about Darwin’s observations concerning dog breeding here.   Essentially, artificial selection = intelligence in that the terms are interchangeable in the context of ID Theory. I didn’t want to use either term in the definition of ID, so I chose a phrase that carries with it the identical meaning, “artificial intervention.”

Artificial intervention contrasts with natural selection. Darwin’s inspiration for the term “natural selection” came from watching dog breeders select specific dogs to mate in order to enhance the most favorable characteristics for winning dog shows. That was the moment he realized that what happens in the wild is a selection process that is entirely natural, without any other kind of discretion factored in as a variable.

The moment any artificial action interrupts or interferes with natural processes, those processes have been corrupted. ID Theory holds that an information leak, which we call CSI, entered the development of the original cell via some artificial source. It could be panspermia, quantum particles, quantum biology, natural genetic engineering (NGE), or other conjectures. This is ID Theory, by definition. All processes remain natural as before, except that an artificial intervention took place, which could have been a one-time event (the front-loading conjecture) or ongoing (e.g., NGE).


Panspermia is an example of artificial intervention.

One example of artificial intervention would be panspermia, because the Earth’s biosphere is a closed system. The concept of abiogenesis is based upon life originating on Earth. The famous Stanley Miller and Harold Urey experiments attempted to replicate the conditions believed to have existed in the Earth’s primordial world. Abiogenesis is a conjecture to explain how life naturally arose from non-life on Earth, assuming such an event ever occurred on this planet.

Panspermia, on the other hand, is an artificial intervention that transports life to Earth from a different source. While panspermia does not necessarily reflect intelligence, it is still intelligent-like, in that an intelligent agent might consider colonizing planet Earth by transporting life here from a different location in the universe.

I have been challenged much on this reasoning, the objection being that artificial selection was understood by Darwin to be a product of human intelligence. I can provide many arguments indicating that there are perfectly acceptable natural mechanisms, entirely non-Darwinian, which, because they are independent of natural selection, must be “artificial selection” by default even if they are not products of human intelligence. A good example would be an extraterrestrial intervention. So this objection doesn’t concern me.

The objection that does concern me is when someone confuses the ID understanding of “intelligence” with something non-natural. This is where I agree with Richard Dawkins when he writes that the “intelligence” of ID Theory is likely entirely illusory.

This is yet another reason I prefer the term artificial intervention: it leaves room for the conventional understanding of intelligence, remains open to other natural mechanisms yet to be discovered, and sets these in contrast to already-known natural processes that are essentially Darwinian. The term “Darwinian” of course means development by means of gradual, step-by-step, successive modifications, one small change at a time.

“Artificial Intervention” is a term I came up with four years ago to essentially be meant as a synonym to the Intelligent Design phrase, “Intelligent Cause.” When challenging the theory under critical scrutiny, ID is often ridiculed because opponents demand evidence of actual intelligence. This request misses the point.

The idea of intelligent design is not restricted to requiring actual intelligence behind the other processes that achieve biological specified complexity independently of natural selection. The very fact that such processes exist confirms ID Theory, by the theory’s own definition. ID proponents expect cognitive guidance to take place, and that appears very well to be the case. But the intelligence could be illusory.

Whether actual intelligence or simulated, the fact that there are other processes that defy development via gradual Darwinian step-by-step successive modifications confirms the very underlying prediction that defines the theory of Intelligent Design.

I wrote this essay to explain why intelligence does not have to be actual intelligence. Any selection that is not natural selection is artificial selection, which is based upon intelligence, and therefore Intelligent Design. However, the point is moot because William Dembski already showed using his No Free Lunch theorems that specified complexity requires intelligence. Nevertheless, this essay is an explanation to those critics who are not satisfied that ID proponents deliver when asked to provide evidence of an “intelligent cause.”

The term, “artificial intervention” is not necessary in order to define the scientific theory of intelligent design.  However, I believe it is quite useful to expand upon a deeper and more meaningful way of conveying “intelligent cause” without compromising scientific reasoning.


The Wedge Document

The Wedge is more than 15 years old, and was written by a man who has long retired from the ID community. One man’s motives are irrelevant to the science of ID Theory. Phillip Johnson is the one who came up with the Wedge document, and it is nothing other than a summary of personal motives, which have nothing directly to do with science. Johnson is 71 years old. Johnson’s views do not reflect the younger generation of Intelligent Design (ID) Theory advocates who are partial to approaching biology from a design perspective.

Phillip Johnson

Phillip Johnson is the original author of the Wedge Document

Some might raise the Wedge document as evidence that there has been an ulterior motive. The Discovery Institute has a response to this as well:

The motives of Phillip Johnson are not shared by myself or other ID advocates, and do not reflect the views or position of the ID community or the Discovery Institute. This point would be similar to someone criticizing evolutionary theory because Richard Dawkins would have a biased approach to science in the fact that he is an atheist and political activist.


Some critics would contend the following:

“With regards to how this is relevant, one part of the Discovery Institute’s strategy is the slogan ‘teach the controversy.’  This slogan deliberately tries to make opponents look like they are against teaching ‘all’ of science to students.”

How can such an appeal be objectionable? This is a meaningless point of contention. I don’t know whether the slogan, “teach the controversy” does indeed “deliberately” try “to make opponents look like they are against teaching ‘all’ of science to students.” That should not be the issue.

My position is this:

1. The slogan is harmless, and should be the motto of any individual or group interested in education and advancement of science. This should be a universally accepted ideal.

2. I fully believe and am entirely convinced that the mainstream scientific community does indeed adhere to censorship, and present a one-sided and therefore distorted portrayal of the facts and empirical data.

The fact remains that Intelligent Design is a scientifically fit theory that is about information, not designers.  ID is largely based upon the work of William Dembski, in which he introduced the concept of Complex Specified Information in 1998.  In 1996, biochemist Michael Behe championed the ID-inspired hypothesis of irreducible complexity.  It’s been 17 years since Behe made the predictions of irreducible complexity in his book, “Darwin’s Black Box,” and to this day the four proposed systems to be irreducibly complex have not yet been falsified after thorough examination by molecular biologists.  Those four biochemical systems are the blood-clotting cascade, bacterial flagellum, immune system, and the cilium.

The Wedge2


Please keep in mind that my initial concerns about complaints concerning the Wedge document are primarily based upon relevance. The Discovery Institute retracted and amended the Wedge, and added extra commentary to clarify its present position. Interestingly, when I am presented links to the Wedge document, it is often the updated, revised draft. That being so, it is questionable why critics continue quoting from the outdated and obsolete version. It is an obsolete argument that undermines the complainant’s credibility. In fact, it is an exercise of the same intellectual dishonesty that ID antagonists accuse the Discovery Institute of.

If one desires to criticize the views of the Discovery Institute, then such a person must use the materials that they claim are the actual present position held by the Discovery Institute and ID proponents.  I would further add:

1. ID proponents repudiate the Wedge, and distance themselves from it.

2. Mr. Johnson who authored the Wedge is retired, and that the document is obsolete.

Much about Intelligent Design theory has nothing to do with ideology or religion, such as when ID is demonstrated as an applied science. “Intelligent Design” is simply another word for bio-design. Aside from biomimicry and biomimetics, other areas of science overlap into the definition of ID Theory, such as natural genetic engineering, quantum biology, bioinformatics, bio-inspired nanotechnology, selective breeding, biotechnology, genetic engineering, synthetic biology, bionics, and prosthetic implants, to name a few.

The Wedge

ID antagonists claim:

“The very conception of ‘Intelligent Design’ entails just how ‘secular’ and ‘scientific’ the group tried to make their ‘theory’ sound.  It was created with Christian intentions in mind.”

This is circular reasoning, which is a logic fallacy.  The idea just restates the opening thesis argument as the conclusion, and does nothing to support the conclusion.  It also does not overcome the relevance issue as to the views maintained by the Discovery Institute and ID advocates today.

There is no evidence offered by those who raise the Wedge complaint to connect a religious or ideological motive to ID advocacy. ID Theory must be afforded the same opportunity to make predictions and to test repeatable, falsifiable design-inspired hypotheses. If anyone has a problem with this, then they own the burden of proof to show why ID scientists are disqualified from performing the scientific method. In other words, to reject ID on the sole basis of the Wedge document is unjustifiable discrimination based upon a difference of ideological views. At the end of the day, the only way to falsify a falsifiable scientific hypothesis is to run the experiment and use the empirical data to evaluate the claim.

Intelligent Design can be expressed as a scientific theory, and valid scientific predictions can be premised on ID-inspired conjectures.  The issue is whether or not ID actually conforms to the scientific method. If it does, then the objection by ID opponents is without merit and irrelevant. If ID fails in scientific reasoning, then critics simply need to demonstrate that, and they will be vindicated.  Otherwise, ID Theory remains a perfectly valid testable and falsifiable proposition regardless of the social controversy surrounding it.

So far, ID critics have not made any attempt to offer one solitary scientific argument, or to employ scientific reasoning, against the basis of ID Theory.



This is in response to the video entitled, “Evolution CAN Increase Information (Classroom Edition).”

I agree with the basic presentation of Shannon’s work in the video, along with its evaluation of Information Theory, the Information Theory definition of “information,” bits, noise, and redundancy.  I also accept the fact that new genes evolve, as described in the video. So far, so good. I do, however, have some objections to the video, including its underlying premise, which I consider a strawman.

To illustrate quantifying information into bits, Shannon referenced an attempt to receive a one-way radio/telephone transmission signal.

Before I outline my dissent, here is what I think the problem is. The argument the video refutes is likely the result of creationists hijacking work done by ID scientists, in this case William Dembski, and arguing against evolution with flawed reasoning that misrepresents those scientists. I have no doubt that some creationists could benefit from watching this video and learning how they were mistaken in raising that argument. But that flawed argument misinterprets Dembski’s writings.

ID Theory is grounded upon Dembski’s development in the field of informatics, based upon Shannon’s work. Dembski took Shannon Information further, and applied mathematical theorems to develop a special and unique concept of information called COMPLEX SPECIFIED INFORMATION (CSI), aka “Specified Information.” I have written about CSI in several blog articles, but this one is my most thorough discussion on CSI.

I am often guilty myself of describing the weakness of evolutionary theory as the inability to increase information. In fact, my exact line, which I have probably said a hundred times over the last few years, goes like this:

“Unlike evolution, which explains diversity and inheritance, ID Theory best explains complexity, and how information increases in the genome of a population leading to greater specified complexity.”

I agree with the author of this video script that my general statement is so broad as to be vague, and easily refuted by specific instances in which new genes evolve. Of those examples, Nylonase is certainly an impressive adaptation, to say the least.

But I don’t stop at that general comment to rest my case. I am ready to clarify what I mean when I talk about “information” in the context of ID Theory. The kind of “information” we are interested in is CSI, which is both complex and specified. Now, there are many instances where biological complexity is specified, but Dembski was not willing to label these “design” until the improbability reaches the Universal Probability Bound of 1 x 10^-150, an event too unlikely to occur by chance. This is all in Dembski’s book, “The Design Inference” (1998).
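To make the quantity concrete: the Universal Probability Bound can be restated in Shannon's units by taking the self-information of an event with probability 1 x 10^-150, which works out to roughly 500 bits. A minimal sketch in Python (the helper name is mine, for illustration only):

```python
import math

def surprisal_bits(p):
    """Shannon self-information of an event with probability p, in bits."""
    return -math.log2(p)

# Dembski's Universal Probability Bound from "The Design Inference" (1998)
upb = 1e-150
print(round(surprisal_bits(upb), 1))  # → 498.3
```

This conversion is where the commonly cited figure of about 500 bits of specified information comes from.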

According to ID scientists, CSI occurs early: it is in the very molecular machinery required to comprise the first reproducing cell, already in existence when life originated. The first cell already had its own genome, its own genes, and enough bits of information up front for frameshift, deletion, insertion, and duplication mutations to occur. The information, noise, and redundancy required to make mutations possible were part of the initial setup.

Dembski has long argued, and this is essentially the crux of his No Free Lunch theorems, that neither evolution nor genetic algorithms produce CSI; evolution only smuggles CSI forward.  Evolution is the mechanism that includes the very mutations and processes that increase information as demonstrated in the video. But, according to ID scientists, the DNA, genes, start-up information, reproduction system, RNA replication, transcription, and protein-folding equipment were there from the very start, and the bits and materials required for mutations to occur were front-loaded in advance. Evolution only carries them forward into fruition in the phenotype.  I discuss Dembski’s No Free Lunch more fully here.

DNA binary

Dembski wrote:

“Consider a spy who needs to determine the intentions of an enemy—whether that enemy intends to go to war or preserve the peace. The spy agrees with headquarters about what signal will indicate war and what signal will indicate peace. Let’s imagine that the spy will send headquarters a radio transmission and that each transmission takes the form of a bit string (i.e., a sequence of 0s and 1s). The spy and headquarters might therefore agree that 0 means war and 1 means peace. But because noise along the communication channel might flip a 0 to a 1 and vice versa, it might be good to have some redundancy in the transmission. Thus the spy and headquarters might agree that 000 represents war and 111 peace and that anything else will be regarded as a garbled transmission. Or perhaps they will agree to let 0 represent a dot and 1 a dash and let the spy communicate via Morse code in plain English whether the enemy plans to go to war or maintain peace.

“This example illustrates how information, in the sense of meaning, can remain constant whereas the vehicle for representing and transmitting this information can vary. In ordinary life we are concerned with meaning. If we are at headquarters, we want to know whether we’re going to war or staying at peace. Yet from the vantage of mathematical information theory, the only thing that’s important here is the mathematical properties of the linguistic expressions we use to represent the meaning. If we represent war with 000 as opposed to 0, we require three times as many bits to represent war, and so from the vantage of mathematical information theory we are utilizing three times as much information. The information content of 000 is three bits whereas that of 0 is just one bit.” [Source: Information-Theoretic Design Argument]
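The 000/111 scheme in Dembski's example is a standard repetition code. A minimal sketch of how that redundancy lets the receiver recover the intended bit despite noise, assuming majority-vote decoding (the function names are mine, not Dembski's):

```python
def encode(bit, n=3):
    """Repetition code: transmit each bit n times (000 for war, 111 for peace)."""
    return bit * n

def decode(received):
    """Majority vote: recover the intended bit even if noise flipped one symbol."""
    ones = received.count("1")
    zeros = received.count("0")
    if ones == zeros:
        return None  # tie: regard the transmission as garbled
    return "1" if ones > zeros else "0"

print(encode("0"))    # → 000 (war)
print(decode("010"))  # → 0 (a single flipped bit is corrected)
```

Sending 000 instead of 0 triples the bits transmitted, exactly the three-fold information cost Dembski describes, but buys tolerance for a single bit flip.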

My main objection to the script comes toward the end, where the narrator, Shane Killian, states that anyone who has a different understanding of the definition of information, and prefers to challenge the strict definition of “information” as a reduction in uncertainty, should be outright dismissed. I personally agree with Shannon, so I don’t have a problem with that definition, but computer science, bioinformatics, electrical engineering, and a host of other academic disciplines have their own definitions of information that emphasize different dynamics than Shannon did.

Shannon made huge contributions to these fields, but his one-way radio/telephone transmission analogy is not the only way to understand the concept of information.  Shannon discusses these concepts in his 1948 paper on Information Theory.  Moreover, even though Shannon’s work was the basis of Dembski’s work, ID Theory concerns the complexity and specificity of information, not merely the quantification of “information” alone.
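Shannon's definition of information as a reduction in uncertainty, the one the video adopts, can be illustrated numerically: entropy measures a source's uncertainty in bits, and an observation that narrows the possibilities reduces it. A minimal sketch (the distributions are my own toy example, not from the video):

```python
import math

def entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Four equally likely messages: 2 bits of uncertainty.
before = entropy([0.25, 0.25, 0.25, 0.25])

# Learning one binary fact halves the possibilities: 1 bit remains.
after = entropy([0.5, 0.5])

print(before - after)  # → 1.0
```

Here the observation conveys exactly 1 bit of information: the difference between the uncertainty before and after it.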

Claude Shannon is credited as the father of Information Theory.



As most people are aware, Michael Behe championed the design-inspired hypothesis of Irreducible Complexity.  He testified as an expert witness in Kitzmiller v. Dover (2005).


Transcripts of all the testimony and proceedings of the Dover trial are available here. While under oath, he testified that his argument was:

“[T]hat the [scientific] literature has no detailed rigorous explanations for how complex biochemical systems could arise by a random mutation or natural selection.”

Behe was specifically referencing origin of life, molecular and cellular machinery. The cases in point were specifically the bacterial flagellum, cilia, blood-clotting cascade, and the immune system because that’s what Behe wrote about in his book, “Darwin’s Black Box” (1996).

The attorneys piled up a stack of publications regarding the evolution of the immune system in front of Behe on the witness stand while he was under oath. Anti-ID antagonists criticize Behe for dismissing the books.

Michael Behe testifies as an expert witness in Kitzmiller v. Dover. Illustration is by Steve Brodner, published in The New Yorker on Dec. 5, 2005.

The books were essentially about how the immune system developed in vertebrates.  But that isn’t what Intelligent Design theory is based upon. ID Theory is based upon the complexity appearing at the outset when life first arose, and the complexity that appears during the Cambrian Explosion.

The biochemical structures Behe predicted to be irreducibly complex (the bacterial flagellum, cilium, blood-clotting cascade, and immune system) arose during the development of the earliest cells.  These biochemical systems occur at the molecular level in unicellular organisms, as evidenced by the fact that retroviruses are found in the DNA of these most primitive life forms.  They are complex, highly conserved, and irreducibly complex.  You can stack a mountain of books and scientific literature on how these biochemical systems morphed from that juncture forward in time, but that has nothing to do with the irreducible complexity of the original molecular machinery.

The issue regarding irreducible complexity is the source of the original information that produced the irreducibly complex system in the first place.  The scientific literature on the immune system only addresses changes that occurred after the system already existed and was in place.  For example, the Type III Secretion System injector (T3SS) is often used to refute the irreducible complexity of the bacterial flagellum.  But the T3SS is not an evolutionary precursor of the flagellum; it was derived later and is evidence of a decrease in information.

The examining attorney, Eric Rothschild, stacked up those books one on top the other for courtroom theatrics.

Behe testified:

“These articles are excellent articles I assume. However, they do not address the question that I am posing. So it’s not that they aren’t good enough. It’s simply that they are addressed to a different subject.”

Those who reject ID Theory and dislike Michael Behe emphasize that since Behe is the one claiming the immune system is Irreducibly Complex, Behe owns the burden of maintaining a level of knowledge of what other scientists write on the subject.  It should be noted that there has indeed been a wealth of research on the immune system, and the collective whole of the published papers gives us a picture of how the immune system evolved. But the point Behe made was that there is very little knowledge available, if any, as to how the immune system first arose.

The burden was on the ACLU attorneys representing Kitzmiller to cure the defects of foundation and relevance, but they never did. Somehow, anti-ID antagonists spin this around to make it look like Behe was in the wrong, which is entirely unfounded.  Michael Behe responded to the Dover opinion written by John E. Jones III here. One comment in particular Behe had to say is this:

“I said in my testimony that the studies may have been fine as far as they went, but that they certainly did not present detailed, rigorous explanations for the evolution of the immune system by random mutation and natural selection — if they had, that knowledge would be reflected in more recent studies that I had had a chance to read.

In a live PowerPoint presentation, Behe had additional comments to make about how the opinion of judge John E. Jones III was not authored by the judge at all, but by an ACLU attorney.  You can see that lecture here.


Piling up a stack of books in front of a witness without notice, and without providing a chance to review the literature before commenting, has no value other than courtroom theatrics.

The subject was clear that the issue was biological complexity appearing suddenly at the dawn of life. Behe had no burden to go on a fishing expedition through that material. It was up to the examining attorney to direct Behe’s attention to the specific topic and ask direct questions. But, the attorney never did that.

One of the members of the opposition for Kitzmiller was Nicholas Matzke, who is employed by the NCSE. The NCSE was called upon early by the Kitzmiller plaintiffs, and the ACLU was later retained to represent Kitzmiller.  Nick Matzke had been handling the evolution-curriculum conflict at Dover as early as the summer of 2004.  Matzke tells the story of how he worked with Barbara Forrest on the history of ID, and with Kenneth Miller, their anti-Behe expert.  Matzke writes,

“Eric Rothschild and I knew that defense expert Michael Behe was the scientific centerpoint of the whole case — if Behe was found to be credible, then the defense had at least a chance of prevailing. But if we could debunk Behe and the “irreducible complexity” argument — the best argument that ID had — then the defense’s positive case would be sunk.”

Matzke offered guidance on the deposition questions for Michael Behe and Scott Minnich, and was present when Behe and Minnich were deposed.  When Eric Rothschild, the attorney who cross-examined Behe in the trial, flew out to Berkeley for Kevin Padian’s deposition, the NCSE discussed with Rothschild how to deal with Behe.  Matzke describes their strategy:

“One key result was convincing Rothschild that Behe’s biggest weakness was the evolution of the immune system. This developed into the “immune system episode” of the Behe cross-examination at trial, where we stacked up books and articles on the evolution of the immune system on Behe’s witness stand, and he dismissed them all with a wave of his hand.”

It should be noted that, as detailed and involved as the topic of the evolution of the vertebrate immune system is, the fact remains that to this day Michael Behe’s 1996 prediction that the immune system is irreducibly complex has not been falsified, even though it is very much falsifiable.

Again, to repeat the point I made above regarding the courtroom theatrics of stacking the pile of books in front of Behe: the burden was not on Behe to sift through the material to find evidence that would support Kitzmiller. It was up to the ACLU attorneys to direct Behe’s attention to the places in those books and publications where complex biochemical life and the immune system first arose, and then ask questions specific to that topic. But since Behe was correct that the material was not responsive to the issue under examination, there was nothing left for the attorneys to do except engage in theatrics.