Is Consciousness Quantifiable and Computable?

Map of Neural Circuits in the Human Brain

This image is an actual photograph. It is not a printout of the artwork your 9th grader brought home from their Apple computer software class. It is a map of neural circuits in the human brain produced by the Human Connectome Project. You can view their fascinating work here.

This was a classic study confirming that Intelligent Design Theory works when put to the test, just as ID Theory has withstood fierce scrutiny and testing in the past. ID Theory held up to all four of Michael Behe’s predictions of irreducible complexity in Darwin’s Black Box (1996), and was confirmed many times after that. The most recent memorable occasion was when the predictions made by Jonathan Wells in his book, “The Myth of Junk DNA,” were confirmed by the findings of the ENCODE Project in September 2012.

This time, not only has it been determined that information is quantifiable, which opens the way to further research begun by William Dembski with his discovery of “Complex Specified Information,” but the ability to quantify consciousness itself has now been realized. It is an exciting era in the history of biology to be alive in and see this unfold.

Wired Science covered this story here. It reads,

“There’s a theory, called Integrated Information Theory, developed by Giulio Tononi at the University of Wisconsin, that assigns to any one brain, or any complex system, a number — denoted by the Greek symbol of Φ — that tells you how integrated a system is, how much more the system is than the union of its parts. Φ gives you an information-theoretical measure of consciousness. Any system with integrated information different from zero has consciousness. Any integration feels like something.

It’s not that any physical system has consciousness. A black hole, a heap of sand, a bunch of isolated neurons in a dish, they’re not integrated. They have no consciousness. But complex systems do. And how much consciousness they have depends on how many connections they have and how they’re wired up.”
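
To make the idea of “integration as a number” concrete, here is a toy sketch in Python. It is not Tononi’s actual Φ, which is far more involved; it computes total correlation (the parts’ entropies minus the whole’s entropy) for a two-unit system, which is zero for isolated, independent units and positive for units that are wired together. The probabilities below are made up purely for illustration.

```python
# Toy "integration" measure, NOT Tononi's actual phi: total correlation of a
# two-unit binary system, i.e. the sum of the parts' entropies minus the
# entropy of the whole. Independent ("isolated") units score zero; units that
# are wired together score above zero. All probabilities below are made up.
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability array (zero entries ignored)."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def total_correlation(joint):
    """Multi-information of a 2x2 joint distribution over two binary units."""
    joint = joint / joint.sum()          # normalize, just in case
    h_whole = entropy(joint)             # entropy of the whole system
    h_a = entropy(joint.sum(axis=1))     # marginal entropy of unit A
    h_b = entropy(joint.sum(axis=0))     # marginal entropy of unit B
    return h_a + h_b - h_whole           # zero when A and B are independent

# Two isolated units (think "neurons in a dish"): independent, integration = 0.
isolated = np.outer([0.5, 0.5], [0.5, 0.5])

# Two coupled units that tend to fire together: integration > 0.
coupled = np.array([[0.45, 0.05],
                    [0.05, 0.45]])

print(round(total_correlation(isolated), 3))  # 0.0 bits
print(round(total_correlation(coupled), 3))   # 0.531 bits
```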

Tononi

Neuroscientist Giulio Tononi’s research leads the way in efforts to quantify consciousness and to factor consciousness into computations.

Giulio Tononi is a neuroscientist and psychiatrist who holds the David P. White Chair in Sleep Medicine, as well as a Distinguished Chair in Consciousness Science, at the University of Wisconsin.

More on Integrated Information Theory, http://www.nytimes.com/2010/09/21/science/21consciousness.html?pagewanted=all&_r=3&.

Research paper on the subject,
http://www.architalbiol.org/aib/article/viewFile/15056/23165867

This takes William Dembski’s work to a new level of research. The breakthrough of this study is that the ability to quantify consciousness in measurable units gives us yet one more way to compare the specified complexity of two different examples of design.

Quantifying specified complexity is important because it lets design theorists compare one sample of design with other samples in terms of CSI. Evolution is a process in which a genome (one dataset sample) increases in specified complexity to a more sophisticated configuration, or degree, of design. The only scientists who seem to care about this quantifying ability are those who work in fields related to synthetic biology, bioinformatics, and intelligent design.

With respect to ID, testing CSI becomes dramatically easier once we are able to compare different quantities of complex specified information. This was a drawback in the earlier years of Dembski’s career.
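
Here is a minimal sketch, in Python, of the kind of comparison I have in mind. It computes an information measure in the spirit of Dembski-style calculations, namely the improbability of a sequence in bits under a uniform chance hypothesis, and checks it against the roughly 500-bit universal probability bound Dembski uses. The sequences and the uniform chance model are my own illustrative assumptions; real CSI also requires an independently given specification, which this toy calculation does not assess.

```python
# A rough, Dembski-style information measure: the improbability of a specific
# sequence in bits under a uniform chance hypothesis, I = L * log2(alphabet size).
# The sequences are illustrative placeholders; real CSI would also require an
# independently given specification, which this sketch does not evaluate.
import math

def information_bits(sequence, alphabet_size):
    """Bits of improbability of this exact sequence under a uniform chance model."""
    return len(sequence) * math.log2(alphabet_size)

dna_motif_bits = information_bits("TATAAT", 4)    # 6 DNA bases, 4-letter alphabet
protein_bits = information_bits("M" * 300, 20)    # a 300-residue protein, 20 amino acids

print(f"DNA motif: {dna_motif_bits:.1f} bits")    # 12.0 bits
print(f"Protein:   {protein_bits:.1f} bits")      # about 1296.6 bits

# Dembski's universal probability bound of ~10^-150 corresponds to roughly 500 bits.
UNIVERSAL_BOUND_BITS = 500
print(protein_bits > UNIVERSAL_BOUND_BITS)        # True
print(dna_motif_bits > UNIVERSAL_BOUND_BITS)      # False
```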

Just recently, a quantum physicist by the name of Daegene Song commented on his research related to this topic.  Song asserts that strong Artificial Intelligence is key to answering fundamental brain science questions.  I agree with him.  However, Song goes on to conclude that consciousness does not compute, and never will.

Daegene Song (PRNewsFoto/Daegene Song)

Daegene Song (PRNewsFoto/Daegene Song)

Daegene Song obtained his Ph.D. in physics from the University of Oxford and now works at Chungbuk National University in Korea as an assistant professor. To learn more about Song’s research, see his published work: D. Song, “Non-computability of Consciousness,” NeuroQuantology, Volume 5, pages 382-391 (2007).

This latest report about Song’s comments that consciousness does not compute is misleading. Song is a quantum physicist doing research on quantum computing. The expertise required to comment on consciousness lies outside those fields of study; neuroscience is the field that researches consciousness. Song is not qualified to make the assertions he made, and I am not at all satisfied that he knows what consciousness is, or that he knows what he is talking about.

I have confidence in Song’s specialties, but knowledge regarding consciousness is not one of his strong suits. Until these areas of science include neuroscientists as part of the research team, the conclusions are meaningless.

Song cited mostly books, and most of them are outdated. The most recent source he referenced is from 2004. Song is the sole author of the paper. There are only nine bibliographical references, and only one neuroscience paper is cited.

I give Song credit for at least referencing Giulio Tononi’s work, but the only actual science paper of Tononi’s he referenced is from 1998. I can understand needing to go back to 1998 in order to cite a paper published in the prestigious journal Science, but when this is the only neuroscience documentation in Song’s final analysis, that is not just poor scholarship but a lack of relevant research altogether.

This is substandard scholarship even for a paper that dates back to 2008, the year of the last revision of Song’s 2007 paper. We know far more about this subject now than we did in 2008.

Tononi does lead the way in quantifying consciousness and in inspiring research in neuroscience to actually apply the math and calculations. Based upon this paper by Song, we have no idea whether the unit of consciousness he represents by theta (θ) will apply to quantum computing or not. The paper provides no new information beyond what we already knew before; it is outdated.

Even if Song is correct, his opinion only applies to quantum computing. Quantum computing is more closely related to computer science than to quantum physics. Song’s layman’s concept of what he thinks consciousness might be is inapplicable to actual research on the topic in neuroscience.

Until the right team does this research properly, with a more comprehensive study of up-to-date sources, we will never know the correct conclusions of this report.

Without at least one neuroscientist on the research team, the study is a sham, and it would be highly unlikely to pass peer review in any reputable science journal, which this paper never did. NeuroQuantology is a very low-impact journal. According to the 2013 Journal Citation Reports, NeuroQuantology had an impact factor of 0.439, ranking it 240th out of 251 journals in the category “Neuroscience.”

The problem is that you cannot conclude that the math says neuroscience will never be able to achieve its goal, because that is a determination to be made by neuroscientists, not quantum physicists. It is irrelevant that consciousness is not produced by the brain; I hold that intelligence is not limited to being produced by a brain either. The approach Song used is not consistent with Tononi’s work, and it is based upon outdated research and unnecessary variables.

To give you an idea of how futile Song’s opinion is, it is like the Miller-Urey experiment. Song put in the ingredients of what he thinks consciousness would look like expressed mathematically, ran the math, and the numbers failed to compute. That means nothing.

We did that all the time in first-year engineering school. We kept trying to write programs until we finally got them to run. Song made an attempt in 2007, and then arrived at a false conclusion. Moreover, the conclusion he arrived at is so far removed from the actual advances being made in neuroscience today on this topic that it is nonsense to compare them. It would be like calling a dolphin a fish because it looks like one.

If the math doesn’t work, you keep running calculations until it does work. That’s what Paul Davies is doing with origin-of-life studies at Arizona State University based upon information theory and bioinformatics, the same field as William Dembski’s work. That’s what the string theorists are doing, too. You don’t quit and say it will never happen; that is not how science works.

It should be noted that Song never set out to vindicate efforts to factor consciousness into math calculations. He approached the task from an artificial intelligence application in quantum computing. The result Song desired was to determine whether it is possible to develop some kind of AI software that inputs artificial consciousness into similar programming. His work ran into what he perceives to be a dead end. Maybe it is a dead end; we don’t know yet.

It is a dead end as far as Song is concerned, but he based this conclusion on incomplete data from the field of neuroscience. I don’t criticize him for this; neuroscience is not Song’s area of expertise.

Consciousness is not Song’s specialty. He had to reference work done in neuroscience by expert researcher G. Tononi, who still leads the work in quantifying consciousness to this very day. Song’s research dates to 2008, and there have been new discoveries and breakthroughs by Tononi since then. As I mentioned above, this research is far from over and complete. And the most important point of all is that these two lines of research are so vastly different they cannot be compared.

Song found that his efforts ran into limits, but those limits only apply to quantum computing. That doesn’t stop new predictions from being modified and researched. And nothing in Song’s work falsified the subsequent research done by Tononi, despite the sensational title of the article claiming that “Consciousness does not compute and never will.” That title appears to be a typical journalistic gimmick to capture the attention of readers and draw them to the article. I have my suspicions about whether Song really intended to represent the extreme opinion conveyed in the title.

I have several illustrations involving math computations to make my point: the math that supports the Big Bang, the math that supports string theory, the math behind origin-of-life predictions, and the math that supports Einstein’s theory of gravity.

It took decades for the math to be resolved in support of the Big Bang. While string theory has been around for many years, theorists are still working on the math to this very day.

Origin-of-life conjectures have also been around for decades, and scientists still have not been able to get the math to work, even though there most likely is a solution. If the math doesn’t run, then no mechanism is discoverable. If the math eventually does work for one of the models, that would finally yield evidence that a mechanism exists and is discoverable.

Newton’s math regarding gravity was falsified by Einstein. It is possible Einstein’s math might be falsified yet again in the future; we don’t know yet whether those calculations apply inside black holes.

Regardless of which analogy I use to compare the math calculations supporting a scientific conjecture, each instance was the same: it took scores of years to perfect the math, and working the computations is endless because new scientific discoveries add more variables to the equations. These are rigorous computations often involving several pages of proofs, theorems, and so on. Not all the formulas and theorems are absolute.

Song appears to oversimplify these vastly different research areas, which involve entirely different mathematical approaches and calculations, to make it sound as if there is one and only one short and sweet computation to solve. His comments could be taken to suggest that the moment this one solution to the math problem is obtained, there is nothing left to do, as if it were some ultimate, invincible, irrefutable conclusion. That is a science stopper. That is not how science works. Scientists make new predictions and investigate new methods to surmount barriers and hurdles.

If an arch bridge cannot be built, then try beams. If beams don’t work, then try cables. If a cable bridge cannot be built, then try a trestle bridge. If that doesn’t work, perhaps a suspension bridge will work. Each bridge has different engineering calculations. Each approach changes the math, because the entire computation is completely different.

Someone who takes Song’s comments to falsify Tononi makes it sound as if I am saying that if you try to add 2 + 2 + 3 long enough, then maybe someday you might get a different answer than 7 after the millionth try. I never represented any absurdity of the kind. There are hundreds if not thousands of ways to express consciousness mathematically. When one treatment doesn’t fit, you try a different approach, which has an entirely different application and computation to resolve. I never suggested trying to solve the same identical math problem as before.

Song’s paper was last revised in 2008. Most of Tononi’s breakthrough discoveries in quantifying consciousness are more recent research. I have made this point several times above already. You can’t have an older paper refute a more recent paper; it’s the other way around.

Although I already linked to it above, here is the Tononi paper again. Keep in mind that the only neuroscience paper Song referenced was Tononi’s work from 1998. And, I repeat, Song’s paper was published in 2007; this paper from G. Tononi is from 2012.

While some people might conclude otherwise, Mr. Song in no way ran into a roadblock for computing consciousness in general. His computation was one of an infinite number of approaches to consciousness in the field of artificial intelligence. Moreover, his research has nothing to do with the advances made in neuroscience. This is an argument about apples and oranges. These different fields have different goals, objectives, methodologies, and areas of expertise.

Song’s work attempted to perform a computation, one of an infinite number of approaches to MIMIC consciousness in the field of artificial intelligence. That objective is an ENTIRELY DIFFERENT GOAL than the areas of research in neuroscience that investigate the role of neurons and many other topics related to consciousness and intelligence.

On ID-Theoretical Biology, hardly a day goes by without posts connected to these subjects. Yes, AI is an extremely important field of study of interest to Intelligent Design, but AI is an applied science. The theories it is based upon come from other academic disciplines such as computer science.

Song’s focus was to achieve some watershed breakthrough for AI in the area of quantum computing. That is one isolated direction out of countless leads for advancing these study areas. To choose one road and arrive at a dead end is meaningless. Just turn around, go back to the intersection, and follow a new direction of research. That is how science works.

SUMMARY AND RELEVANCE TO INTELLIGENT DESIGN:

I see ID Theory, as described and defined by the Discovery Institute, to be an entirely fit scientific theory: although it is falsifiable, it has not been falsified after 19 years.

I could assert several plausible arguments for why I hold ID Theory to be legitimate science. But the relevant argument I adhere to, specifically, is that it is a DESIGN-inspired prediction based upon ID Theory that an INTELLIGENT CAUSE is a BETTER EXPLANATION for the origin and diversity of life than natural selection. Is that true or false? Is it testable or not? Many conclude intelligent design is not testable. I say yes, it is testable. I base this prediction upon advances being made in neuroscience, where a workable and testable definition of intelligence is surfacing in the SCIENTIFIC LITERATURE thanks to that field.

I insist that these processes of natural genetic engineering (http://www.huffingtonpost.com/james-a-shapiro/epigenetics-iii-epigeneti_b_1683713.html; http://shapiro.bsd.uchicago.edu/2006.ExeterMeeting.pdf) and cell cognition are intelligent-like, and it will be just a matter of time before studies in fields related to neuroscience become the ultimate deciding factor in determining whether the multiple coordinated mutation events that change allele frequencies in biological populations, and the processes that cause them, are the work of an identifiable and observable intelligent agency. Whether this prediction holds remains to be seen. More research is required.

Just as INTELLIGENCE itself is a study of interest to design theorists, so likewise is consciousness. And these are not the only target topics of study. Add to the list other properties associated with intelligence aside from consciousness: decision-making, communication networks, problem-solving, self-awareness, cognitive activity, and the list goes on.

It is the field of neuroscience that leads the way in these study areas. There are about 250 science journals related to the field of neuroscience. It is a significant and growing area of scientific research.

I just took one prediction out of countless thousands. I simply chose as an example the work of Giulio Tononi, who has made promising gains in the area of quantifying consciousness.

I do not have a problem with anyone being a fan of ID because they feel ID validates their personal theology. But what ID means philosophically, beyond the scope of science, has nothing to do with the contribution ID makes to actual science.

What I love about ID Theory is that there really does exist a condition called irreducible complexity, and no scientist can take credit for the discovery because Michael Behe beat them to it. There really does exist a mathematical definition of design, but no information theorist or bioinformatics expert can take credit for it because William Dembski already scooped them.

CONCLUSION:

Mr. Song cited Tononi 1998, a paper published in Science, in order to support the conclusions in Song 2007. Tononi’s work continues to be published at least through 2012. As such, Song never falsified Tononi. This is a very much ALIVE and ACTIVE research field.


Eight Reasons Why Intelligent Design is Science

Here is a list of eight topics featuring testable evidence that would support Intelligent Design Theory:

1. Complex Specified Information (CSI), aka specified complexity; William Dembski’s No Free Lunch theorems.

Sand Sculpture

A sand sculpture is a good example of CSI.  At some juncture there is enough information evident from some event that we can deduce intelligent design.  It is a safe conclusion that this sand formation is not the product of wind.

2. Michael Behe’s testable predictions regarding Irreducible Complexity. Molecular biologist Jonathan McLatchie also wrote an essay on this subject. An irreducibly complex system is one in which (a) the removal of a protein renders the molecular machine inoperable, and (b) the biochemical structure has no stepwise evolutionary pathway. Michael Behe further describes the condition:

“An irreducibly complex evolutionary pathway is one that contains one or more unselected steps (that is, one or more necessary-but-unselected mutations). The degree of irreducible complexity is the number of unselected steps in the pathway.” (A Response to Critics of Darwin’s Black Box, by Michael Behe, PCID, Volume 1.1, January February March, 2002.  Source: http://www.ideacenter.org/contentmgr/showdetails.php/id/840).

3. Quantum Biology.  Consider this paper.

4. The work of James Shapiro. One of his discoveries is Natural Genetic Engineering (NGE). Shapiro wrote this essay to summarize NGE here. This is a more comprehensive paper on the subject.

 

Origin of Life

Origin of life requires many molecular machines to be formed, installed, and running before the first living and reproducing cell can exist.

5. Origin of Life research based upon Information Theory. An example is the work done by Paul Davies at Arizona State University. Here is the press release for their research. This is their paper, “Algorithmic Origins of Life.” It approaches abiogenesis from an Information Theory perspective instead of chemical evolution. Live Science also reported on the study.

6. A special class of natural genetic engineering is cell cognition, in which the genome of a population is modified by an intelligent cause instead of an unguided process like natural selection or horizontal gene transfer. Cell cognition refers to the decision-making processes that occur at the cellular level.

7. The work of William Dembski in the field of bioinformatics.

Cilium

The image is a cilium. It was predicted by Michael Behe in 1996 to be irreducibly complex, and the product of multiple coordinated mutations that occurred simultaneously.

8. Design-inspired predictions based upon there being multiple simultaneous mutation events as opposed to gradual successive modifications one mutation at a time.  Design theorists call these multiple coordinated events.  These break the standard successive stepwise modifications that natural selection is based upon.  Some multiple coordinated mutation events result in irreducible complex molecular machinery.

Intelligent Design is a scientific theory, which the Discovery Institute states, “holds that certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection.”  Source: http://www.intelligentdesign.org/


UNIVERSAL COMMON DESCENT

THE SCIENTIFIC METHOD IS BASED UPON TESTING A FALSIFIABLE HYPOTHESIS:

This essay sets out a case in favor of the scientific theory of universal common ancestry. One of the most dubious challenges to universal common descent I have reviewed is Takahiro Yonezawa and Masami Hasegawa, “Some Problems in Proving the Existence of the Universal Common Ancestor of Life on Earth,” The Scientific World Journal, 2011. While there is nothing wrong with the data and points raised in this article, it is not the objective of science to “prove” a theory. Also, identifying the universal common ancestor is not the focus of the theory of universal common descent.

The scientific method is based upon testing a falsifiable hypothesis. In science, researchers do not experiment to “prove” theories; they test an hypothesis in order to falsify the prediction. All we can do is continue to test gravity to determine if Einstein’s predictions were correct. We can never “prove” Einstein was right because his equations might not work everywhere in the universe, such as inside a black hole.

When an experiment fails to falsify the hypothesis, all we can conclude is that the theory is confirmed one more time. But, the theory is never ultimately proven. If it were possible to prove a theory to be ultimately true, like a law of physics, then it is not a scientific theory because a theory or hypothesis must be falsifiable.

The theory of UCD is challenged with formal research by multiple biology and biochemistry departments around the world. There is a substantial amount of scientific literature on this area of research. The fact that after all this time the proposition of UCD has not been falsified is a persuasive case that the claim has merit. That’s all science can do.

I make this point because when we explore controversial topics far too often some individuals make erroneous objections, such as requiring empirical data to “prove” some conjecture.  That is not how science works.  All the scientific method can do is demonstrate a prediction is false, but science can never prove a theory to be absolutely true.

Having said that, there are scientists who nevertheless attempt to construct a complete Tree of Life. This is done in an ambitious attempt to “prove” the theory is true, even to the fanciful hope of identifying the actual universal common ancestor. Many of the attacks on the theory of common descent are criticisms noting the incompleteness of the data. But an incomplete tree does not falsify the theory.

This is important to understand because there is no attempt being made here to prove universal common descent (UCD). All that is going to be shown here is that UCD as a scientific theory has not been falsified, and it remains an entirely solid theory regardless of whether UCD is actually true or not.

IS UNIVERSAL COMMON ANCESTRY FALSIFIABLE?

What would it take to prove universal common descent false?

Common ancestry would be falsified if we discovered a form of life that was not related to all other life forms. For example, finding a life form that does not have the nucleic acids (DNA and RNA) would falsify the theory. Other ways to falsify Univ. Common Descent would be:

• If someone found a unicorn, that would falsify universal common descent.

• Finding a Precambrian rabbit would likely falsify universal common descent.

• If it could be shown that mutations are not inherited by successive generations, that would falsify universal common descent.

One common misunderstanding people have about science is the idea that science somehow proves certain predictions to be correct.

All life forms fall within a nested hierarchy. Of the hundreds of thousands of specimens that have been tested, every single one falls within a nested hierarchy, or its phylogenetic tree is still unknown and not yet sequenced.

SCIENCE PAPERS THAT SUPPORT UNIVERSAL COMMON DESCENT:

Here is just the tip of the iceberg of science papers that indicate the validity of UCD:

• Steel, Mike; Penny, David (2010). “Origins of life: Common ancestry put to the test“. Nature 465 (7295): 168–9.

• Theobald, Douglas L. (13 May 2010). “A formal test of the theory of universal common ancestry.” Nature 465 (7295): 219–222.

• Glansdorff, N; Xu, Y; Labedan, B (2008). “The last universal common ancestor: emergence, constitution and genetic legacy of an elusive forerunner.” Biology direct 3 (1): 29.

• Brochier, Céline; Bapteste, Eric; Moreira, David; Philippe, Hervé (January 2002). “Eubacterial phylogeny based on translational apparatus proteins.” Trends in Genetics 18 (1).

• Baldauf, S. L., Roger, A. J., Wenk-Siefert, I., and Doolittle, W. F. (2000) “A kingdom-level phylogeny of eukaryotes based on combined protein data.” Science 290: 972-7.

• Brown, J. R., Douady, C. J., Italia, M. J., Marshall, W. E., and Stanhope, M. J. (2001) “Universal trees based on large combined protein sequence data sets.” Nature Genetics 28: 281-285.

The above are often cited in support of universal common descent. Anyone suggesting these papers have been overturned or are outdated needs to provide documentation.

Darwin’s First Sketch of a Cladogram

NESTED HIERARCHIES AND BASIC PHYLOGENETICS:

A logical prediction inspired by common descent is that all biological development will resemble a tree, which is called the Tree of Life. Evolution will then specifically generate unique, nested, hierarchical patterns in a branching scheme. Most existing species can be organized rather easily into a nested hierarchical classification.

Figure 1. Parts of a Phylogenetic Tree

Figure 1 displays the various parts of a phylogenetic tree. Nodes are where branches meet, and they represent the common ancestor of all taxa beyond the node. Any life form that has reproduced has a node that will fit properly onto the phylogenetic tree. If two taxa share a closer node than either shares with a third taxon, then they share a more recent ancestor.
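
A small code sketch may make the Figure 1 description concrete. It models nodes with parent pointers, finds the most recent common ancestor of two taxa by walking their paths back to the root, and shows that taxa sharing a closer node share a more recent ancestor. The taxon names and tree shape are purely illustrative.

```python
# A sketch of the tree structure described in Figure 1: each node points to its
# parent, internal nodes stand for common ancestors, and the most recent common
# ancestor (MRCA) of two taxa is the shared node closest to them on the paths
# back to the root. Taxon names here are illustrative only.
class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

    def ancestors(self):
        """Return the path from this node back to the root, inclusive."""
        path, node = [], self
        while node is not None:
            path.append(node)
            node = node.parent
        return path

def mrca(a, b):
    """Most recent common ancestor of two nodes."""
    ancestors_of_a = set(a.ancestors())
    for node in b.ancestors():          # walk up from b until we hit a's lineage
        if node in ancestors_of_a:
            return node
    return None

# Toy tree:       root
#                /    \
#          mammals    reptiles
#           /   \          \
#        human  whale     lizard
root     = Node("root")
mammals  = Node("mammals", root)
reptiles = Node("reptiles", root)
human    = Node("human", mammals)
whale    = Node("whale", mammals)
lizard   = Node("lizard", reptiles)

print(mrca(human, whale).name)    # mammals (closer node => more recent ancestor)
print(mrca(human, lizard).name)   # root
```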

Falsifying Common Descent:

It would be very problematic if many species were found that combined characteristics of different nested groupings. Some nonvascular plants could have seeds or flowers, like vascular plants, but they do not. Gymnosperms (e.g., conifers or pines) could occasionally be found with flowers, but they never are. Non-seed plants, like ferns, could be found with woody stems, but they are not.

Conceivably, some birds could have mammary glands or hair; some mammals could have feathers (they are an excellent means of insulation). Certain fish or amphibians could have differentiated or cusped teeth, but these are only characteristics of mammals.

A mix and match of characters would make it extremely difficult to objectively organize species into nested hierarchies. Unlike organisms, cars do have a mix and match of characters, and this is precisely why a nested hierarchy does not flow naturally from classification of cars.

Figure 2. Sample Cladogram

In Figure 2, we see a sample phylogenetic tree. All a scientist has to do is find a life form that does not fit the hierarchical scheme in proper order. We can reasonably expect that yeasts will not secrete maple syrup. This model gives us a logical basis to predict that reptiles will not have mammary-like glands. Plants won’t grow eyes or other animal-like organs. Crocodiles won’t grow beaver-like teeth. Humans will not have gills or tails.

Reptiles will not have external skeletons. Monkeys will not have a marsupial-like pouch. Amphibian legs will not grow octopus-like suction cups. Lizards will not produce apple-like seeds. Iguanas will not exhibit bird feathers, and on it goes.

The phylogenetic tree provides a basis to falsify common descent if, for example, rose bushes grow peach-like fuzz or sponges display millipede-like legs. We will not find any unicorns or “crocoducks.” There should never be found any genetic sequences in a starfish that would produce spider-like fangs. An event such as a whale developing shark-like fins would falsify common descent.

While these are all ludicrous examples, in the sense that such phenomena would seemingly be impossible, the point is that any life form found with even the slightest cross-phylum, cross-family, or cross-genus body type would instantly falsify common descent. And it doesn’t have to be one of the physical characteristics I just listed. It could be a skeletal change in the number of digits, ribs, or their configuration. There are countless possibilities, and if such an unclassifiable life form were found, the theory of universal common descent would be falsified.

The falsification doesn’t have to be anything as dramatic as these examples. It could be something like when NASA thought it had discovered a new form of life in what was believed to be an arsenic-based bacterium at California’s Mono Lake. That would have been a good candidate for checking whether a life form had entirely changed its genetic code. Another example: according to UCD, none of the thousands of new and previously unknown insects that are constantly being discovered will have non-nucleic-acid genomes.

Certainly, if UCD is invalid, there must be life forms that acquire their characteristics from something other than their parents, and if this is so, their DNA will expose the anomaly. It is very clear when reviewing phylogenies that there is an unmistakable hierarchical structure indicating ancestral lineage. And all phylogenies are like this without exception. All I ask is that someone submit a single phylogeny showing that a life form has no parents, or that its offspring did not inherit its traits. If such were the case, then there should be evidence of it.

METHODOLOGY OF FALSIFICATION:

For the methodology used to determine nested hierarchies today, the math gets complicated in order to ensure that the results are accurate. In this next study, phylogenetics as a discipline is being transformed by a flood of molecular data. This data allows broad questions to be asked about the history of life, but it also presents difficult statistical and computational problems. Bayesian inference of phylogeny brings a new perspective to a number of outstanding issues in evolutionary biology, including the analysis of large phylogenetic trees and complex evolutionary models and the detection of the footprint of natural selection in DNA sequences.

As this discipline continues to be applied to molecular phylogenies, the prediction is continually confirmed, not falsified. All it would take is one occurrence of the mix-and-match problem, one sequence out of order that does not fit a nested hierarchy, and evolutionary theory would be falsified.
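
The Bayesian methods described above are far more sophisticated than anything I can show here, but even a simple distance-based method illustrates how molecular data naturally yield a nested, branching tree. The sketch below uses UPGMA (average-linkage clustering from SciPy), not Bayesian inference, on a made-up distance matrix; the taxa and distances are assumptions for illustration only.

```python
# UPGMA (average-linkage hierarchical clustering) on a hand-made matrix of
# pairwise "genetic distances". The nested merge order is exactly the kind of
# objective hierarchy the text describes. Taxa and distances are hypothetical.
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, dendrogram

taxa = ["human", "chimp", "mouse", "chicken"]

# Symmetric matrix of made-up distances (zeros on the diagonal).
distances = np.array([
    [0.00, 0.02, 0.30, 0.60],
    [0.02, 0.00, 0.31, 0.61],
    [0.30, 0.31, 0.00, 0.62],
    [0.60, 0.61, 0.62, 0.00],
])

# Average linkage on the condensed distance matrix = UPGMA clustering.
tree = linkage(squareform(distances), method="average")

# The dendrogram's leaf order reflects the nested groupings:
# human and chimp merge first, then mouse joins them, then chicken.
dendro = dendrogram(tree, labels=taxa, no_plot=True)
print(dendro["ivl"])   # leaf order, e.g. ['chicken', 'mouse', 'human', 'chimp']
```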

“ALL SCIENTIFIC THEORIES ARE SUPPOSED TO BE CHALLENGED”

Of course Charles Darwin’s hypothesis of UCD has been questioned. All scientific predictions are supposed to be challenged. There’s a name for it. It’s called an experiment. The object is to falsify the hypothesis by testing it. If the hypothesis holds up, then it is confirmed, but never proven. The best science gives you is falsification. UCD has not been falsified, but instead is extremely reliable.

When an hypothesis is confirmed after repeated experimentation, the science community might upgrade the hypothesis to the status of a scientific theory. A scientific theory is an hypothesis that has been continuously affirmed after substantial repeated experiments and has significant explanatory power for better understanding phenomena.

Here’s another paper in support of UCD: Schenk, MF; Szendro, IG; Krug, J; de Visser, JA (Jun 2012). “Quantifying the adaptive potential of an antibiotic resistance enzyme.” Many human diseases are not static phenomena; the agents behind them, such as viruses, bacteria, fungi, and cancers, are constantly evolving. These pathogens evolve to be resistant to host immune defences as well as pharmaceutical drugs. (A similar problem occurs in agriculture with pesticides.)

This Schenk 2012 paper analyzes whether pathogens are evolving faster than available antibiotics, and it attempts to make better predictions of the evolvability of human pathogens in order to devise strategies to slow or circumvent the destructive changes at the molecular level. Success in this field of study is expected to save lives.

Antibiotics are an example of the necessity to apply phylogenetics in order to implement medical treatments and manufacture pharmaceutical products. Another application is demonstrating irreducible complexity. That is established by studying homologies of different phylogenies to determine whether two systems share a common ancestor. If one has no evolutionary pathway to a common ancestor, then it might be a case for irreducible complexity.

Another application is forensic science. DNA is used to solve crimes. One case involved a murder suspect being found guilty because he parked his truck under a tree. A witness saw the truck at the time the crime took place. The suspect was linked to the crime scene because DNA from seeds that fell out of that tree into the bed of the truck distinguished that tree from every other tree in the world.

DNA allows us to positively determine ancestors, and the margin for error is infinitesimally small.

TWIN NESTED HIERARCHY:

The term “nested” refers to confirming that the specimen being examined is properly placed in the hierarchy on both sides of reproduction, that is, in relation to both its ancestors and its progeny. The term “twin” refers to the fact that nested hierarchy can be determined by both (1) genotype (molecular and genome sequencing analysis) and (2) phenotype (visible morphological variations).

We can ask these four questions:

1. Does the specimen fit in a phenotype hierarchy on the ancestral side? Yes or no?

2. Does the specimen fit in a phenotype hierarchy relative to its offspring? Yes or no?

If both answers to 1 and 2 are yes, then nested hierarchy re phenotype is established.

3. Does the specimen fit in a genotype hierarchy on the ancestral side? Yes or no?

4. Does the specimen fit in a genotype hierarchy relative to its offspring? Yes or no?

If both answers to 3 and 4 are yes, then nested hierarchy re genotype is established.

All four (4) answers should always be yes every time without exception. But, the key is genotype (molecular) because the DNA doesn’t lie. We cannot be certain from visual morphological phenotype traits. But, once we sequence the genome, there is no uncertainty remaining.
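
For what it is worth, the four questions reduce to a very small piece of logic. The sketch below simply encodes the checklist; the yes/no inputs would come from the investigator's phenotype and genotype analyses, which the code itself does not perform.

```python
# A minimal sketch that turns the four yes/no questions above into code. The
# inputs are booleans supplied by the investigator; nothing here is a real
# test, it only captures the logic of the "twin" (phenotype + genotype) check.
def twin_nested_hierarchy(phenotype_ancestral, phenotype_offspring,
                          genotype_ancestral, genotype_offspring):
    """Report whether each hierarchy, and the twin nested hierarchy, holds."""
    phenotype_ok = phenotype_ancestral and phenotype_offspring   # questions 1 and 2
    genotype_ok = genotype_ancestral and genotype_offspring      # questions 3 and 4
    return {
        "phenotype_nested": phenotype_ok,
        "genotype_nested": genotype_ok,
        "twin_nested": phenotype_ok and genotype_ok,
    }

# Example: a specimen that answers "yes" to all four questions.
print(twin_nested_hierarchy(True, True, True, True))
# {'phenotype_nested': True, 'genotype_nested': True, 'twin_nested': True}
```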


CLADES AND TAXA:

A clade is essentially the line that begins at the trunk of the analogous tree, which for common descent would be the Tree of Life, and works its way from branches to limbs, to stems, and then to a leaf or extremity (representing a species) of the branching system. A taxon is a category or group. The trunk would be a taxon. The lower branches are a taxon. The higher limbs are a different taxon. It’s a rough analogy, but that’s the gist of it.

THE METHODOLOGY USED TO FALSIFY COMMON DESCENT IS BASED UPON NESTED HIERARCHY:

Remember that nucleic acids (DNA) are the same for all life forms, so that alone is a case for the fact that common descent goes all the way back to a single cell.

Mere similarity between organisms is not enough to support UCD. A nested classification pattern produced by a branching evolutionary tree process is much more specific than simple similarity.  A friend of mine recently showed me her challenge against UCD using a picture of the phylogeny of sports equipment:

Cladogram of sports balls

I pointed out to her that her argument is a false analogy. Classifying physical items will not result in an objective nested hierarchy.

For example, it is impossible to objectively classify in nested hierarchies the elements in the Periodic Table, planets in our Solar System, books in a library, cars, boats, furniture, buildings, or any inanimate object. Non-life forms do not reproduce, and therefore do not pass forward inherited traits from ancestors.

The point of using balls from popular sports is to argue that it is trivial to classify anything subjectively in a hierarchical manner. The illustration of the sports balls showed that such classification is entirely subjective. But this is not true with biological heredity. We KNOW from DNA whether or not a life form is the parent of another life form!

Inanimate objects, like cars, could be classified hierarchically, but the classification would be subjective, not objective. Perhaps the cars would be organized by color and then by manufacturer. Or they could be classified by year of make or size, and then by color. So non-living items cannot be classified into an objective hierarchy because the system is entirely subjective. But life forms and languages are different.

In contrast to being subjective like cars, human languages do have common ancestors and are derived by descent with modification.  Nobody would reasonably argue that Spanish should be categorized with German instead of with Portuguese. Like life forms, languages fall into objective nested hierarchies.  Because of these facts, a cladistic analysis of sports equipment will not produce a unique, consistent, well-supported tree that displays nested hierarchies.

Carl Linnaeus, the famous Swedish botanist, physician, and zoologist, is known for being the man who laid the foundations for the modern biological naming scheme of binomial nomenclature. When Linnaeus invented the classification system for biology, he discovered the objective hierarchical classification of living organisms.   He is often called the father of taxonomy.  Linnaeus also tried to classify rocks and minerals hierarchically, but his efforts failed because the nested hierarchy of non-biological items was entirely subjective.

“DNA doesn’t lie.”

Hierarchical classifications for inanimate objects don’t work for the very reason that unlike organisms, rocks and minerals do not evolve by descent with modification from common ancestors. It is this inheritance of traits that provides an objective way to classify life forms, and it is nearly impossible for the results to be corrupted by humans because DNA doesn’t lie.

Caveat: Testing nested hierarchy for life forms works, and it confirms common descent. There is a ton of scientific literature on this topic, and it all supports common descent and Darwin’s predictions. Again, there is no such thing as a design-inspired prediction for why life forms all conform to nested hierarchy. There is only one reason why they do: Universal Common Ancestry.

The point with languages is that they can be classified objectively into nested hierarchies because they are inherited and passed on by descent with modification. No one is claiming that languages have a universal common ancestor, and even if they do, it’s beside the point.

In this paper, Kiyotaka Takishita et al (2011), “Lateral transfer of tetrahymanol-synthesizing genes has allowed multiple diverse eukaryote lineages to independently adapt to environments without oxygen,” published in Biology Direct, the phylogenies of unicellular eukaryotes are examined to ascertain how they acquire sterols from bacteria in low oxygen environments. In order to answer the question, the researchers had to construct a detailed cladogram for their analysis. My point here is that DNA doesn’t lie. All life forms fall within a nested hierarchy, and there is no paper that exists in scientific literature that found a life form that does not conform to a nested hierarchy.

Cladogram

The prediction in this instance is that if evolution (as first observed by Charles Darwin) occurs, then all life might have descended from a common ancestor. This is not only a hypothesis, but is the basis for the scientific theory of Universal Common Descent (UCD).

There is only one way I know of to falsify the theory of UCD, and that is to produce a life form that does not conform to nested hierarchy. All it takes is one.

DOES A COMB JELLY FALSIFY COMMON DESCENT?

One person I recently spoke to regarding this issue suggested that a comb jelly appears to defy common descent. He presented me with this paper published in Nature in support of his view. The paper is entitled “The ctenophore genome and the evolutionary origins of neural systems” (Leonid L. Moroz, et al., 2014). Comb jellies might appear to be misclassified and not conform to a hierarchy, but phylogenetically they fit just fine.

There does seem to be an illusion, going back to the early Cambrian period, that the phenotypes of life forms do not fall within a nested hierarchy. But their genotypes still do. The fact that extremely different body types emerge in the Cambrian might visually suggest they do not conform to a nested hierarchy, but the molecular analysis tells a much different story and confirms that they do.

To oppose my position, all that is necessary is for someone to produce one solitary paper published in a science journal that shows the claim for UCD to be false. Once a molecular analysis is done and the phylogenies are charted on a cladogram, all life forms, I repeat, all life forms, conform to nested hierarchies, and there is not one single exception. If there is, I am not aware of the paper.

Regarding the comb jelly discussed in Moroz (2014), if someone wants to argue that the comb jelly does not fit within a nested hierarchy, there is no content in this paper that supports that view.

For example, from Figure 3 in the article:

“Predicted scope of gene loss (blue numbers; for example, −4,952 in Placozoa) from the common metazoan ancestor. Red and green numbers indicate genes shared between bilaterians and ctenophores (7,771), as well as between ctenophores and other eukaryotic lineages sister to animals, respectively. Text on tree indicates emergence of complex animal traits and gene families.”

The authors conclude common ancestry and ascribe their surprise regarding the comb jelly to convergence, which has nothing to do with common ancestry.

The article refers to and assumes common metazoan ancestry. The common ancestry of the comb jelly is never once questioned in the paper; the article only ascribes the new so-called genetic blueprint to convergence. The paper refers to and assumes common ancestry several times, and it even draws up a cladogram for our convenience so we can more readily understand the comb jelly’s phylogeny, which is based upon common descent.

The paper repeatedly affirms the common ancestry of the comb jelly, and only promotes a case for convergent evolution. It is an excellent study of phylogeny of the comb jelly. There is nothing about the comb jelly that defies nested hierarchy. If there was, common descent would be falsified.

Universal Common Descent (UCD) is the scientific theory that all life forms descended from a single common ancestor. The theory would be falsified by demonstrating that the node (Figure 1) of some life form, upon examination of its phylogeny, does not fit within an objective nested hierarchy based upon inheritance of traits from one generation to the next via successive modifications. If someone desires to falsify UCD, all they need to do is present the paper that identifies such a life form. Of course, if such a paper existed, the author would be famous.

Any other evidence regardless of how much merit it might have to indicate serious issues with UCD does nothing to falsify UCD. If this claim is challenged, please (a) explain to me why, and (b) show me the scientific literature that confirms the assertion.

OTHER CHALLENGES TO THE ISSUES AND PROBLEMS WITH UCD DO NOT FALSIFY DARWIN’S PREDICTION AS A SCIENTIFIC THEORY:

One paper that is often cited is W. Ford Doolittle, “Phylogenetic Classification and the Universal Tree,” Science, 25 June 1999; this is Doolittle (1999). I already cited Baldauf, S. L., Roger, A. J., Wenk-Siefert, I., and Doolittle, W. F. (2000) above. Doolittle is very optimistic about common descent, and does nothing to discourage attempts at its falsification. In fact, the whole point of Doolittle’s work is to improve the methodology so that future experimentation increases the reliability of the results.

In figure 3 of the paper, Doolittle presents a drawing as to what the problems are during the early stages of the emergence of life:

Reticulated tree

In Doolittle 1999, the problems regarding lateral gene transfer (LGT), and how it distorts the earlier history of life, are fully discussed.  But once LGT is accounted for, the rest of the tree branches off as would be expected.

In the course of this work, taxonomists have identified about 25 variant genetic codes, each with its own operating system, so to speak, across the major phyla and higher taxa classifications of life. Many of them are mitochondrial codes, and they are non-standard relative to other clades in the phylogenetic tree of life.

The question is, do any of these 25 non-standard codes weaken the claim for a common ancestor for all life on earth? The answer would be no because the existence of non-standard codes offers no support for a ‘multiple origins’ view of life on earth.

Lineages that exhibit these 25 “variants,” as they are often called, are clearly and unambiguously related to organisms that use the original universal code, which traces back to the hypothetical LUCA. The 25 variant branches of life are distributed as small ‘twigs’ within the evolutionary tree of life. There is a diagram of this in my essay. I will provide it below for your convenience.

Anyone is welcome to disagree, but to do so requires the inference that, for example, certain groups of ciliates evolved entirely separately from the rest of life, including other types of ciliates. The hypothesis that the 25 variant codes are each originally unique and independent of a LUCA is simply hypothetical, and there is no paper I am aware of that supports the conjecture. There are common-descent-denying creationists who argue this is so, but the claim is untenable and absent from the scientific literature.
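
A short sketch helps show why a variant code looks like a minor tweak rather than a separate origin of life. The vertebrate mitochondrial code differs from the standard code in only a handful of codon assignments; the listing below is a hand-written subset for illustration, not a complete translation table.

```python
# The well-known differences between the standard genetic code and the
# vertebrate mitochondrial code (DNA codons). Only these few assignments
# differ; the remaining codons are identical in both codes, which is why
# variant-code lineages nest cleanly inside the standard-code tree.
standard_code_subset = {
    "TGA": "Stop", "AGA": "Arg", "AGG": "Arg", "ATA": "Ile",
}
vertebrate_mito_subset = {
    "TGA": "Trp",  "AGA": "Stop", "AGG": "Stop", "ATA": "Met",
}

for codon in standard_code_subset:
    std = standard_code_subset[codon]
    mito = vertebrate_mito_subset[codon]
    marker = "differs" if std != mito else "same"
    print(f"{codon}: standard={std:4s} mitochondrial={mito:4s} ({marker})")

# The roughly 60 other codons (not shown) are assigned identically in both codes.
```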

Although correct, the criticism that the data breaks down the tree does nothing to falsify universal common descent.  In order to falsify UCD one must show that a life form exists that does not conform to a nested hierarchy.  

The fact that there are gaps in the tree, or that the tree is incomplete, or that there is missing phylogenetic information, or that there are other methodological problems that must be solved does not change the fact that the theory remains falsifiable. And, I already submitted the simple criteria for falsification, and it has nothing to do with seeing how complete one can construct the Tree of Life.

The abstract provides an optimistic summary of the findings in Doolittle 1999:

“Molecular phylogeneticists will have failed to find the “true tree,” not because their methods are inadequate or because they have chosen the wrong genes, but because the history of life cannot properly be represented as a tree. However, taxonomies based on molecular sequences will remain indispensable, and understanding of the evolutionary process will ultimately be enriched, not impoverished.”

There are many challenges to universal common descent, but to date no life form has been found that defies conforming to a nested hierarchy. Some of the challenges to common descent relate to the earliest period when life emerged, such as this 2006 paper published in Genome Biology, authored by Tal Dagan and William Martin, entitled “The Tree of One Percent.”

Similar problems are addressed in Doolittle 2006. The paper reads,

“However, there is no independent evidence that the natural order is an inclusive hierarchy, and incorporation of prokaryotes into the TOL is especially problematic. The only data sets from which we might construct a universal hierarchy including prokaryotes, the sequences of genes, often disagree and can seldom be proven to agree. Hierarchical structure can always be imposed on or extracted from such data sets by algorithms designed to do so, but at its base the universal TOL rests on an unproven assumption about pattern that, given what we know about process, is unlikely to be broadly true”

That paper does discuss hierarchy at length, but there’s nothing in it that indicates its findings falsify common descent. The article essentially makes the same points I made above when I explained the difference between a subjective nested hierarchy and an objective nested hierarchy in reference to the hierarchy of sports equipment. This paper actually supports common descent.

CONCLUSION:

As a scientific theory, UCD is tested because that is what we’re supposed to do in science. We’re supposed to test theories. Of course UCD is going to be tested. Of course UCD is going to be challenged. Of course UCD is going to have some serious issues that are researched, analyzed, and discussed in the scientific literature. But, that doesn’t mean that UCD was falsified.

This information should not alarm anyone who favors the scientific theory of intelligent design (ID).  ID scientists like Michael Behe accept common descent. I have no problem with it, and it really doesn’t have much bearing on ID one way or the other. Since the paleontologists, taxonomists, and molecular biologists who specialize in studying phylogenies accept univ. common descent as being confirmed, and not falsified, I have very little difficulty concurring. That doesn’t mean I am not aware of some of the weaknesses with the conjecture of common descent.


ARTIFICIAL INTERVENTION

Intelligent Design is defined by the Discovery Institute as:

“THE THEORY OF INTELLIGENT DESIGN HOLDS THAT CERTAIN FEATURES OF THE UNIVERSE AND OF LIVING THINGS ARE BEST EXPLAINED BY AN INTELLIGENT CAUSE, NOT AN UNDIRECTED PROCESS SUCH AS NATURAL SELECTION” (http://www.intelligentdesign.org/).

The classic definition of ID Theory employs the term “intelligent cause.” Upon studying William Dembski’s work, which defines the ID Theory understanding of “intelligent cause” using Information Theory and mathematical theorems, I rephrased the term “intelligent cause” as “artificial intervention,” and I have written extensively about why it is a better term.

The two terms are synonymous; however, phrasing the term the way I do helps the reader more readily understand the theory of intelligent design in the context of scientific reasoning. In his book “The Design Inference” (1998), Dembski shows how design = specified complexity = complex specified information. In “No Free Lunch” (2002), he expands upon the role of “intelligence.”

The word “intelligence” here is little more than the default term for something that is not a product of known natural processes. Design theorists predict there are additional discoveries yet to be made of mechanisms for design that supplement evolution and work in conjunction with it. Another term meaning just the opposite of natural selection is artificial selection.

There are two kinds of selection, natural selection and artificial selection.


Charles Darwin, famous for his book “Origin of Species,” coined the term “natural selection,” and he wrote about the difference between natural selection and artificial selection in other literature he wrote on dog breeding. Darwin observed dog breeding and recognized that breeders carefully selected dogs with certain traits to mate with certain others in order to enhance favorable characteristics for purposes of winning dog shows. Thirteen years after Origin of Species, Darwin also wrote a book entitled “The Expression of the Emotions in Man and Animals.” The illustrations of dogs he used can be viewed here.

I wrote an essay about Darwin’s observations concerning dog breeding here.   Essentially, artificial selection = intelligence in that the terms are interchangeable in the context of ID Theory. I didn’t want to use either term in the definition of ID, so I chose a phrase that carries with it the identical meaning, “artificial intervention.”

Artificial intervention contrasts with natural selection. The inspiration that led Charles Darwin to coin the term “natural selection” was his observation of dog breeding. Darwin saw how dog breeders select specific dogs to mate in order to enhance the most favorable characteristics to win dog shows. This is the moment he realized that what happens in the wild is a selection process that is entirely natural, without any other kind of discretion factored in as a variable.

The moment any artificial action interrupts or interferes with natural processes, natural processes have been corrupted. ID Theory holds that an information leak, which we call CSI, entered the development of the original cell via some artificial source. It could be panspermia, quantum particles, quantum biology, natural genetic engineering (NGE), or other conjectures. This is ID Theory by definition. All processes remain natural as before, except that an artificial intervention took place, which could have been a one-time event (the front-loading conjecture) or ongoing (e.g., NGE).

Panspermia

Panspermia is an example of artificial intervention.

One example of artificial intervention would be panspermia. The reason is that the Earth’s biosphere is a closed system. The concept of abiogenesis is based upon life originating on Earth. The famous Stanley Miller and Harold Urey experiments attempted to replicate the conditions believed to exist on the primordial Earth. Abiogenesis is a conjecture to explain how life naturally arose from non-life on Earth, assuming such an event ever occurred on this planet.

Panspermia, on the other hand, is an artificial intervention that transports life from a different source to earth.  While panspermia does not necessarily reflect intelligence, it is still intelligent-like in that an intelligent agent might consider colonizing planet earth by transporting life to our planet from a different location in the universe.

I have been challenged much on this reasoning, with the objection being that Darwin understood artificial selection to be the work of human intelligence. I can provide many arguments that there are perfectly acceptable natural mechanisms, entirely non-Darwinian, which, because they are independent of natural selection, have to be “artificial selection” by default even if they are not the product of human intelligence. A good example would be an extraterrestrial intervention. So this objection doesn’t concern me.

The objection that does concern me is when someone confuses the ID understanding of “intelligence” to be non-natural. This is where I agree with Richard Dawkins when he writes the “intelligence” of ID Theory is likely entirely illusory (http://www.naturalhistorymag.com/htmlsite/1105/1105_feature1_lowres.html).

This is yet another reason I prefer the term artificial intervention: it leaves room for the conventional understanding of intelligence, remains open to other natural mechanisms that have yet to be discovered, and sets these in contrast to already known natural processes that are essentially Darwinian. The term “Darwinian” of course means development by means of gradual, step-by-step, successive modifications, one small change at a time.

“Artificial Intervention” is a term I came up with four years ago to essentially be meant as a synonym to the Intelligent Design phrase, “Intelligent Cause.” When challenging the theory under critical scrutiny, ID is often ridiculed because opponents demand evidence of actual intelligence. This request misses the point.

The idea of intelligent design is not restrained to requiring actual intelligence to be behind other processes that achieve biological specified complexity independent of natural selection. Just the fact that such processes exist confirms ID Theory by definition. ID proponents expect there to be a cognitive guidance that takes place, and that appears very well to be the case. But, the intelligence could be illusory.

Whether actual intelligence or simulated, the fact that there are other processes that defy development via gradual Darwinian step-by-step successive modifications confirms the very underlying prediction that defines the theory of Intelligent Design.

I wrote this essay to explain why intelligence does not have to be actual intelligence. Any selection that is not natural selection is artificial selection, which is based upon intelligence, and therefore Intelligent Design. However, the point is moot because William Dembski already showed using his No Free Lunch theorems that specified complexity requires intelligence. Nevertheless, this essay is an explanation to those critics who are not satisfied that ID proponents deliver when asked to provide evidence of an “intelligent cause.”

The term, “artificial intervention” is not necessary in order to define the scientific theory of intelligent design.  However, I believe it is quite useful for conveying “intelligent cause” in a deeper and more meaningful way without compromising scientific reasoning.

Posted in Uncategorized | 2 Comments

The Wedge Document

The Wedge is more than 15 years old, and was written by a man who has long since retired from the ID community. One man’s motives are irrelevant to the science of ID Theory. Phillip Johnson is the one who came up with the Wedge document, and it is nothing other than a summary of personal motives, which have nothing directly to do with science. Johnson is 71 years old.  Johnson’s views do not reflect the younger generation of Intelligent Design (ID) Theory advocates who are partial to approaching biology from a design perspective.

Phillip Johnson

Phillip Johnson is the original author of the Wedge Document

Some might raise the Wedge document as evidence that there has been an ulterior motive. The Discovery Institute has a response to this as well:

The motives of Phillip Johnson are not shared by me or other ID advocates, and do not reflect the views or position of the ID community or the Discovery Institute. This point would be similar to someone criticizing evolutionary theory because Richard Dawkins has a biased approach to science, given that he is an atheist and political activist.

I. THE WEDGE AND POLITICAL VIEWS OF THE DISCOVERY INSTITUTE ARE A SOCIAL AND IDEOLOGICAL ARGUMENT IRRELEVANT TO THE SCIENTIFIC METHOD.

Some critics would contend the following:

“With regards to how this is relevant, one part of the Discovery Institute’s strategy is the slogan ‘teach the controversy.’  This slogan deliberately tries to make opponents look like they are against teaching ‘all’ of science to students.”

How can such an appeal be objectionable? This is a meaningless point of contention. I don’t know whether the slogan, “teach the controversy” does indeed “deliberately” try “to make opponents look like they are against teaching ‘all’ of science to students.” That should not be the issue.

My position is this:

1. The slogan is harmless, and should be the motto of any individual or group interested in education and advancement of science. This should be a universally accepted ideal.

2. I fully believe and am entirely convinced that the mainstream scientific community does indeed adhere to censorship, and present a one-sided and therefore distorted portrayal of the facts and empirical data.

The fact remains that Intelligent Design is a scientifically fit theory that is about information, not designers.  ID is largely based upon the work of William Dembski, who introduced the concept of Complex Specified Information in 1998.  In 1996, biochemist Michael Behe championed the ID-inspired hypothesis of irreducible complexity.  It’s been 17 years since Behe made the predictions of irreducible complexity in his book, “Darwin’s Black Box,” and to this day the four systems proposed to be irreducibly complex have not been falsified after thorough examination by molecular biologists.  Those four biochemical systems are the blood-clotting cascade, the bacterial flagellum, the immune system, and the cilium.

The Wedge

II. EXCERPTED QUOTATIONS OF THE WEDGE ARE ALSO IRRELEVANT BECAUSE THE DISCOVERY INSTITUTE HAS ALREADY PROVIDED ITS UPDATED REVISION OF THE DOCUMENT.

Please keep in mind that my initial concerns about complaints regarding the Wedge document are primarily based upon relevance.  The Discovery Institute repealed and amended the Wedge.  Additionally, the Discovery Institute added extra commentary to clarify its present position.  It’s interesting that when I am presented links to the Wedge Document, it is often the updated, revised draft.  This being so, it is questionable why critics continue quoting from the outdated and obsolete version.  It is quite a comical, obsolete argument that goes against the complainant’s credibility.  In fact, it’s an exercise of the same intellectual dishonesty that ID antagonists accuse the Discovery Institute of.

If one desires to criticize the views of the Discovery Institute, then such a person must use the materials that they claim are the actual present position held by the Discovery Institute and ID proponents.  I would further add:

1. ID proponents repudiate the Wedge, and distance themselves from it.

2. Mr. Johnson, who authored the Wedge, is retired, and the document is obsolete.

Much about Intelligent Design theory has nothing to do with ideology or religion. When ID is demonstrated as an applied science, “Intelligent Design” is simply another word for bio-design.  Aside from biomimicry and biomimetics, other areas of science overlap with ID Theory, such as Natural Genetic Engineering, quantum biology, bioinformatics, bio-inspired nanotechnology, selective breeding, biotechnology, genetic engineering, synthetic biology, bionics, and prosthetic implants, to name a few.

The Wedge

III. THOSE WHO RELY UPON THE WEDGE AND MOVIE EXPELLED ARGUMENTS AGAINST THE MOTIVES OF THE DISCOVERY INSTITUTE FAIL TO MEET THE RELEVANCE REQUIREMENT.

ID antagonists claim:

“The very conception of ‘Intelligent Design’ entails just how ‘secular’ and ‘scientific’ the group tried to make their ‘theory’ sound.  It was created with Christian intentions in mind.”

This is circular reasoning, which is a logic fallacy.  The idea just restates the opening thesis argument as the conclusion, and does nothing to support the conclusion.  It also does not overcome the relevance issue as to the views maintained by the Discovery Institute and ID advocates today.

There is no evidence offered by those who raise the Wedge complaint to connect a religious or ideological motive to ID advocacy. ID Theory must be provided the same opportunity to make predictions, and to test a repeatable and falsifiable design-inspired hypothesis.  If anyone has a problem with this, then they own the burden of proof to show why ID scientists are disqualified from performing the scientific method.  In other words, to reject ID on the sole basis of the Wedge document is essentially unjustifiable discrimination based upon a difference of ideological views.   At the end of the day, the only way to falsify a falsifiable scientific hypothesis is to run the experiments and use the empirical data to evaluate the claim.

Intelligent Design can be expressed as a scientific theory.  Valid scientific predictions can be premised upon an ID-inspired conjecture.  The issue is whether or not ID actually conforms to the scientific method. If it does, then the objection by ID opponents is without merit and irrelevant. If ID fails in scientific reasoning, then critics simply need to demonstrate that, and they will be vindicated.  Otherwise, ID Theory remains a perfectly valid testable and falsifiable proposition regardless of its social issues.

So far, ID critics have not made any attempt to offer one solitary scientific argument or to employ scientific reasoning against the basis of ID Theory.

Posted in Uncategorized | Tagged , | 3 Comments

DOES EVOLUTION ALONE INCREASE INFORMATION IN A GENOME?

This is in response to the video entitled, “Evolution CAN Increase Information (Classroom Edition).”

I agree with the basic presentation of Shannon’s work in the video, along with its evaluation of Information Theory, the Information Theory definition of “information,” bits, noise, and redundancy.  I also accept the fact that new genes evolve, as described in the video. So far, so good. I do, however, have some objections to the video, including its underlying premise, which I consider to be a strawman.

To illustrate quantifying information into bits, Shannon referenced an attempt to receive a one-way radio/telephone transmission signal.

Before I outline my dissent, here’s what I think the problem is. This is likely the result of creationists hijacking work done by ID scientists, in this case William Dembski, and arguing against evolution using flawed reasoning that misrepresents ID scientists. I have no doubt that there are creationists who could benefit by watching this video and learn how they were mistaken in raising the argument the narrative in the video refutes. But, that flawed argument misinterprets Dembski’s writings.

ID Theory is grounded upon Dembski’s development in the field of informatics, based upon Shannon’s work. Dembski took Shannon Information further, and applied mathematical theorems to develop a special and unique concept of information called COMPLEX SPECIFIED INFORMATION (CSI), aka “Specified Information.” I have written about CSI in several blog articles, but this one is my most thorough discussion on CSI.

I often am guilty myself of describing the weakness of evolutionary theory to be based upon the inability to increase information. In fact, my exact line that I have probably said a hundred times over the last few years goes like this:

“Unlike evolution, which explains diversity and inheritance, ID Theory best explains complexity, and how information increases in the genome of a population leading to greater specified complexity.”

I agree with the author of this video script that my general statement is so overly broad that it is vague, and easily refuted because of specific instances when new genes evolve. Of course, of those examples, Nylonase is certainly an impressive adaptation to say the least.

But, I don’t stop at my general comment to rest my case. I am ready to continue by clarifying what I mean when I talk about “information” in the context of ID Theory. The kind of “information” we are interested in is CSI, which is both complex and specified. Now, there are many instances where biological complexity is specified, but Dembski was not ready to label these “design” until the improbability reaches the Universal Probability Bound of 1 x 10^–150. Such an event is unlikely to occur by chance. This is all in Dembski’s book, “The Design Inference” (1998).

According to ID scientists, CSI occurs early, in that it is in the very molecular machinery required to comprise the first reproducing cell, already in existence when life originated. The first cell already has its own genome, its own genes, and enough bits of information up front as a given for frameshift, deletion, insertion, and duplication types of mutations to occur. The information, noise, and redundancy required to make mutations possible are part of the initial setup.

Dembski has long argued, and this is essentially the crux of the No Free Lunch theorems, that neither evolution nor genetic algorithms produce CSI.  Evolution only smuggles CSI forward. Evolution is the mechanism that includes the very mutations and processes that increase the information as demonstrated in the video. But, according to ID scientists, the DNA, genes, start-up information, reproduction system, RNA replication, transcription, and protein-folding equipment were there from the very start, and the bits and materials required in order for the mutations to occur were front-loaded in advance. Evolution only carries it forward into fruition in the phenotype.  I discuss Dembski’s No Free Lunch more fully here.

DNA binary

Dembski wrote:

“Consider a spy who needs to determine the intentions of an enemy—whether that enemy intends to go to war or preserve the peace. The spy agrees with headquarters about what signal will indicate war and what signal will indicate peace. Let’s imagine that the spy will send headquarters a radio transmission and that each transmission takes the form of a bit string (i.e., a sequence of 0s and 1s). The spy and headquarters might therefore agree that 0 means war and 1 means peace. But because noise along the communication channel might flip a 0 to a 1 and vice versa, it might be good to have some redundancy in the transmission. Thus the spy and headquarters might agree that 000 represents war and 111 peace and that anything else will be regarded as a garbled transmission. Or perhaps they will agree to let 0 represent a dot and 1 a dash and let the spy communicate via Morse code in plain English whether the enemy plans to go to war or maintain peace.

“This example illustrates how information, in the sense of meaning, can remain constant whereas the vehicle for representing and transmitting this information can vary. In ordinary life we are concerned with meaning. If we are at headquarters, we want to know whether we’re going to war or staying at peace. Yet from the vantage of mathematical information theory, the only thing that’s important here is the mathematical properties of the linguistic expressions we use to represent the meaning. If we represent war with 000 as opposed to 0, we require three times as many bits to represent war, and so from the vantage of mathematical information theory we are utilizing three times as much information. The information content of 000 is three bits whereas that of 0 is just one bit.” [Source: Information-Theoretic Design Argument]
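
To make the bit counting in this example concrete, here is a minimal Python sketch (my own illustration, not from Dembski or the video) that computes the information content, in bits, of the one-symbol and three-symbol encodings:

    import math

    def bits(alphabet_size: int, length: int) -> float:
        # Information content, in bits, of a message of `length` symbols
        # drawn from an alphabet of equally likely symbols.
        return length * math.log2(alphabet_size)

    print(bits(2, 1))  # 1.0 bit: the bare 0-or-1 signal (0 = war, 1 = peace)
    print(bits(2, 3))  # 3.0 bits: the redundant 000-or-111 encoding of the same message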

My main objection to the script is toward the end where the narrator, Shane Killian, states that if anyone has a different understanding of the definition of information, and prefers to challenge the strict definition that “information” is a reduction in uncertainty, that their rebuttal should be outright dismissed. I personally agree with Shannon, so I don’t have a problem with it, but there are other applications in computer science, bioinformatics, electrical engineering, and a host of other academic disciplines that have their own definitions of information that emphasize different dynamics than Shannon did.

Shannon made huge contributions to these fields, but his one-way radio/telephone transmission analogy is not the only way to understand the concept of information.  Shannon discusses these concepts in his 1948 paper on Information Theory.  Moreover, even though Shannon’s work was the basis of Dembski’s work, ID Theory relates to the complexity and specificity of information, not just the quantification of “information” alone.

Claude Shannon is credited as the father and discoverer of Information Theory.

Posted in COMPLEX SPECIFIED INFORMATION (CSI), INFORMATION THEORY | Tagged , , , | 3 Comments

MICHAEL BEHE ON THE WITNESS STAND

As most people are aware, Michael Behe championed the design-inspired ID Theory hypothesis of Irreducible Complexity.  Michael Behe testified as an expert witness in Kitzmiller v. Dover (2005). 

Transcripts of all the testimony and proceedings of the Dover trial are available here. While under oath, he testified that his argument was:

“[T]hat the [scientific] literature has no detailed rigorous explanations for how complex biochemical systems could arise by a random mutation or natural selection.”

Behe was specifically referencing origin of life, molecular and cellular machinery. The cases in point were specifically the bacterial flagellum, cilia, blood-clotting cascade, and the immune system because that’s what Behe wrote about in his book, “Darwin’s Black Box” (1996).

The attorneys piled up a stack of publications regarding the evolution of the immune system just in front of Behe on the witness stand while he was under oath. Behe is criticized by anti-ID antagonists for dismissing the books.

Michael Behe testifies as an expert witness in Kitzmiller v. Dover. Illustration is by Steve Brodner, published in The New Yorker on Dec. 5, 2005.

The books were essentially about how the immune system developed in vertebrates.  But, that isn’t what Intelligent Design theory is based upon. ID Theory is based upon the complexity appearing at the outset when life first arose, and the complexity that appears during the Cambrian Explosion.

The biochemical structures Behe predicted to be irreducibly complex (bacterial flagellum, cilium, blood-clotting cascade, and immune system) arose during the development of the first cell.  These biochemical systems occur at the molecular level in unicellular eukaryotic organisms, as evidenced by the fact that retroviruses are in the DNA of these most primitive life forms.  They are complex, highly conserved, and irreducibly complex.  You can stack a mountain of books and scientific literature on top of this regarding how these biochemical systems changed from that juncture forward in time, but that has nothing to do with the irreducible complexity of the original molecular machinery.

The issue regarding irreducible complexity is the source of the original information that produced the irreducibly complex system in the first place.  The scientific literature on the immune system only addresses changes in the immune system after the system already existed and was in place.  For example, the Type III Secretion System (T3SS) injector is often used to refute the irreducible complexity of the bacterial flagellum.  But, the T3SS is not an evolutionary precursor of the bacterial flagellum; it was derived subsequently and is evidence of a decrease in information.

The examining attorney, Eric Rothschild, stacked up those books one on top the other for courtroom theatrics.

Behe testified:

“These articles are excellent articles I assume. However, they do not address the question that I am posing. So it’s not that they aren’t good enough. It’s simply that they are addressed to a different subject.”

Those who reject ID Theory and dislike Michael Behe emphasize that since Behe is the one making the claim that the immune system is irreducibly complex, Behe owns the burden of maintaining a level of knowledge of what other scientists write on the subject.  It should be noted that there indeed has been a wealth of research on the immune system, and the collective whole of the published papers gives us a picture of how the immune system evolved. But, the point Behe made was that there is very little knowledge available, if any, as to how the immune system first arose.

The burden was on the ACLU attorneys representing Kitzmiller to cure the defects of foundation and relevance, but they never did. Somehow anti-ID antagonists spin this around to make it look like Behe was in the wrong here, which is entirely unfounded.  Michael Behe responded to the Dover opinion written by John E. Jones III here. One comment in particular that Behe made is this:

“I said in my testimony that the studies may have been fine as far as they went, but that they certainly did not present detailed, rigorous explanations for the evolution of the immune system by random mutation and natural selection — if they had, that knowledge would be reflected in more recent studies that I had had a chance to read.

In a live PowerPoint presentation, Behe had additional comments to make about how the opinion of judge John E. Jones III was not authored by the judge at all, but by an ACLU attorney.  You can see that lecture here.

Immunology

Piling up a stack of books in front of a witness without notice or providing a chance to review the literature before they can provide an educated comment has no value other than courtroom theatrics.

The subject was clear that the issue was biological complexity appearing suddenly at the dawn of life. Behe had no burden to go on a fishing expedition through that material. It was up to the examining attorney to direct Behe’s attention to the specific topic and ask direct questions. But, the attorney never did that.

One of the members of the opposition for Kitzmiller is Nicholas Matzke, who is employed by the NCSE. The NCSE was called upon early by the Kitzmiller plaintiffs, and the ACLU was later retained to represent Kitzmiller.  Nick Matzke had been handling the evolution curriculum conflict at Dover as early as the summer of 2004.  Matzke tells the story of how he worked with Barbara Forrest on the history of ID, and with Kenneth Miller, their anti-Behe expert.  Matzke writes,

“Eric Rothschild and I knew that defense expert Michael Behe was the scientific centerpoint of the whole case — if Behe was found to be credible, then the defense had at least a chance of prevailing. But if we could debunk Behe and the “irreducible complexity” argument — the best argument that ID had — then the defense’s positive case would be sunk.”

Matzke offered guidance on the deposition questions for Michael Behe and Scott Minnich, and was present when Behe and Minnich were deposed.  When Eric Rothschild, the attorney who cross-examined Behe in the trial, flew out to Berkeley for Kevin Padian’s deposition, the NCSE discussed with Rothschild how to deal with Behe.  Matzke describes their strategy:

“One key result was convincing Rothschild that Behe’s biggest weakness was the evolution of the immune system. This developed into the “immune system episode” of the Behe cross-examination at trial, where we stacked up books and articles on the evolution of the immune system on Behe’s witness stand, and he dismissed them all with a wave of his hand.”

It should be noted that as detailed and involved as the topic on the evolution of the vertebrate immune system is, the fact remains that to this day Michael Behe’s 1996 prediction that the immune system is irreducibly complex has not yet been falsified even though it is very much falsifiable.  

Again, to repeat the point I made above regarding the courtroom theatrics with the stacking of the pile of books in front of Behe, the burden was not on Behe to sift through the material to find evidence that would support Kitzmiller. It was up to the ACLU attorneys to direct Behe’s attention to where in those books and publications complex biochemical life and the immune system first arose, and then ask questions specific to that topic. But, since Behe was correct that the material was not responsive to the issue in the examination, there was nothing left for the attorneys to do except engage in theatrics.

Posted in IRREDUCIBLE COMPLEXITY, KITZMILLER V. DOVER AND LEGAL ISSUES | Tagged , , | 4 Comments

Response to Claim That ID Theory Is An Argument from Incredulity

The Contention That Intelligent Design Theory Succumbs To A Logic Fallacy:

It is argued by those who object to the validity of ID Theory that the proposition of design in nature is an argument from ignorance.   There is no validity to this unfounded claim because design in nature is well established by the work of William Dembski.  For example, here is a database of Dembski’s writings. Not only are the writings of Dembski peer-reviewed and published, but so are rebuttals written in response to his work.  Dembski is the person who coined the phrase Complex Specified Information and showed how it provides convincing evidence for design in nature.

Informal Fallacy

The Alleged Gap Argument Problem With Irreducible Complexity:

The argument from ignorance allegation against ID Theory is based upon the design-inspired hypothesis championed by Michael Behe, which is known as Irreducible Complexity. It is erroneous to claim ID is based upon an argument from incredulity* because ID Theory makes no appeals to the unobservable, supernatural, paranormal, or anything that is metaphysical or outside the scope of science.  However, the assertion that the Irreducible Complexity hypothesis is a gap argument is a valid objection that does need a closer view to determine if the criticism of irreducible complexity is valid.

An irreducibly complex system is one in which (a) the removal of a protein renders the molecular machine inoperable, and (b) the biochemical structure has no stepwise evolutionary pathway.

Here’s how one would set up examination by using gene knockout, reverse engineering, study of homology, and genome sequencing:

I. To CONFIRM Irreducible Complexity:

Show:

1. The molecular machine fails to operate upon the removal of a protein.

AND,

2. The biochemical structure has no evolutionary precursor.

II. To FALSIFY Irreducible Complexity:

Show:

1. The molecular machine still functions upon loss of a protein.

OR,

2. The biochemical structure DOES have an evolutionary pathway.

The two qualifiers make falsification easier, and confirmation more difficult, as the short sketch below illustrates.
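
As a rough illustration of how the two qualifiers combine (my own sketch, not anything from Behe), confirmation is a conjunction of the qualifiers while falsification is a disjunction:

    def confirms_ic(fails_without_protein: bool, has_precursor: bool) -> bool:
        # Both qualifiers must hold to confirm irreducible complexity.
        return fails_without_protein and not has_precursor

    def falsifies_ic(fails_without_protein: bool, has_precursor: bool) -> bool:
        # Either failed qualifier is enough to falsify irreducible complexity.
        return (not fails_without_protein) or has_precursor

    # Example: the machine keeps working after a knockout, so the hypothesis
    # is falsified even though no evolutionary precursor has been found.
    print(falsifies_ic(fails_without_protein=False, has_precursor=False))  # True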

Those who object to irreducible complexity often raise the argument that the irreducible complexity hypothesis is based upon there being gaps or negative evidence.   Such critics claim that irreducible complexity is not based upon affirmative evidence, but on a lack of evidence, and as such, irreducible complexity is a gap argument, also known as an argument from ignorance.  However, this assertion that irreducible complexity is nothing other than a gap argument is false.

According to the definition of irreducible complexity, the hypothesis can be falsified EITHER way, by (a) demonstrating the biochemical system still performs its original function upon the removal of any gene that makes up its parts, or (b) showing that the biochemical structure does have a stepwise evolutionary pathway or precursor.  Irreducible complexity can still be falsified even if no evolutionary precursor is found because of the functionality qualifier.   In other words, the mere fact that there is no stepwise evolutionary pathway does not automatically mean that the system is irreducibly complex.  To confirm irreducible complexity, BOTH qualifiers must be satisfied.  But, it only takes one of the qualifiers to falsify irreducible complexity.  As such, the claim that irreducible complexity is fatally tied to a gap argument is without merit.

It is true that there very much exists a legitimate logic fallacy concerning proving a negative, and the question is whether there is such a thing as proving nonexistence.  While it is impossible to prove a negative in the unrestricted sense, or to provide negative proof, it is very much logically valid to limit a search for a target to a reasonable search space and obtain a quantity of zero as a scientifically valid answer.

Solving a logic problem might be a challenge, but there is a methodical procedure that will lead to success. The cure to a logic fallacy is simply to correct the error and solve the problem.

The reason the irreducible complexity hypothesis is logically valid is that the prediction that certain biochemical molecular machines are irreducibly complex is not based upon an absence of evidence. If it were, then the critics would be correct.  But, this is not the case.  Instead, the irreducible complexity hypothesis requires research, using such molecular biology procedures as (a) gene knockout, (b) reverse engineering, (c) examining homologous systems, and (d) sequencing the genome of the biochemical structure.  The gene knockout procedure was used by Scott Minnich in 2004-2005 to show that the removal of any of the proteins of a bacterial flagellum renders that bacterium incapable of motility (it can’t swim anymore).  Michael Behe also mentions (e) yet another way that testing irreducible complexity using the gene knockout procedure might falsify the hypothesis here.

When the hypothesis of irreducible complexity is tested in the lab using any of the procedures directly noted above, an obvious thorough investigation is conducted that demonstrates evidence of absence. There is a huge difference between absence of evidence and evidence of absence.  One is a logic fallacy while the other is an empirically generated result, a scientifically valid quantity that is concluded upon thorough examination.  So, depending upon the analysis, you can prove a negative.

Evidence of Absence

Here’s an excellent example as to why irreducible complexity is logically valid, and not an argument from ignorance.  If I were to ask you if you had change for a dollar, you could say, “Sorry, I don’t have any change.” If you make a diligent search in your pockets to discover there are indeed no coins anywhere to be found on your person, then you have affirmatively proven a negative that your pockets were empty of any loose change. Confirming that you had no change in your pockets was not an argument from ignorance because you conducted a thorough examination and found it to be an affirmatively true statement.

The term, irreducible complexity, was coined by Michael Behe in his book, “Darwin’s Black Box” (1996).  In that book, Behe predicted that certain biochemical systems would be found to be irreducibly complex.  Those specific systems were (a) the bacterial flagellum, (b) cilium, (c) blood-clotting cascade, and (d) immune system.   It’s now 2013 at the time of writing this essay.  For 17 years, the research has been conducted, and the flagellum has been shown to be irreducibly complex. It’s been thoroughly researched, reverse engineered, and its genome sequenced. It is a scientific fact that the flagellum has no precursor. That’s not a guess. It is not stated as ignorance from taking some wild uneducated guess. It’s not a tossing one’s hands up in the air saying, “I give up.” It is a scientific conclusion based upon thorough examination.

Logic Fallacies

Logic fallacies, such as circular reasoning, argument from ignorance, red herring, strawman argument, special pleading, and others are based upon philosophy and rhetoric. While they might lend to the merit of a scientific conclusion, it is up to the peer-review process to determine the validity of a scientific hypothesis.

Again, suppose you were asked how much change you have in your pockets. You can put your hand in your pocket and look to see how many coins are there. If there is no loose change, it is NOT an argument from ignorance to state, “Sorry, I don’t have any spare change.” You didn’t guess. You stuck your hand in your pocket, looked, and scientifically deduced the quantity to be zero. The same is true with irreducible complexity. After the search has taken place, the prediction that the biochemical system is irreducibly complex is upheld and verified. Hence, there is no argument from ignorance.

The accusation that irreducible complexity is an argument from ignorance essentially suggests a surrender and abandonment of ever attempting to empirically determine whether the prediction is scientifically correct.  It’s absurd for anyone to suggest that ID scientists are not interested in finding Darwinian mechanisms responsible for the evolution of an irreducibly complex biochemical structure. If you lost money from your wallet, it would be ridiculous for someone to accuse you of having no interest in recovering your money. That’s essentially what is being claimed when someone draws the argument from ignorance accusation. The fact is you know you did look (you might have turned your house upside down looking), and you know for a fact that the money is missing. That doesn’t mean you might not still find it someday (the premise is still falsifiable). But, a thorough examination took place, and you determined the money is gone.

Consider Mysterious Roving Rocks:

On a sun-scorched plateau known as Racetrack Playa in Death Valley, California, rocks of all sizes glide across the desert floor.  Some of the rocks accompany each other in pairs, which creates parallel trails even when turning corners so that the tracks left behind resemble those of an automobile.  Other rocks travel solo the distance of hundreds of meters back and forth along the same track.  Sometimes these paths lead to its stone vehicle, while other trails lead to nowhere, as the marking instrument has vanished.

Roving Rocks

Some of these rocks weigh several hundred pounds. That makes the question: “How do they move?” a very challenging one.  The truth is no one knows just exactly how these rocks move.   No one has ever seen them in motion.  So, how is this phenomenon explained?

A few people have reported seeing Racetrack Playa covered by a thin layer of ice. One idea is that water freezes around the rocks and then wind, blowing across the top of the ice, drags the ice sheet with its embedded rocks across the surface of the playa.  Some researchers have found highly congruent trails on multiple rocks that strongly support this movement theory.  Others suggest wind to be the energy source behind the movement of the roving rocks.

The point is that anyone’s guess, prediction, or speculation is as good as that of anyone else.  All these predictions are testable and falsifiable by simply setting up instrumentation to monitor the movements of the rocks.  Are any of these predictions an argument from ignorance?  No.  As long as the inquisitive examiner makes an effort to determine the answer, this is a perfectly valid scientific endeavor.

The argument from ignorance would only apply when someone gives up, and just draws a conclusion without any further attempt to gain empirical data.  It is not a logic fallacy in and of itself on the sole basis that there is a gap of knowledge as to how the rocks moved from Point A to Point B.  The only logic fallacy would be to draw a conclusion while resisting further examination.  Such is not the case with irreducible complexity.  The hypothesis has endured 17 years of laboratory research by molecular biologists, and the research continues to this very day.

The Logic Fallacy Has No Bearing On Falsifiability:

Here’s yet another example of why irreducible complexity is scientifically falsifiable, and therefore not an argument from ignorance logic fallacy.  If someone were correct in asserting the argument from incredulity fallacy, then they would have eliminated all science. By that standard, Newton’s law of gravity was an argument from ignorance because he didn’t know anything more than what he had discovered, and it was later falsified by Einstein. So, according to this flawed logic, Einstein’s theory of relativity is an argument from ignorance because someone in the future might falsify it with a Theory of Everything.

Whether or not a hypothesis passes the argument from ignorance criterion is an entirely philosophical question, much like the way a mathematical argument might be asserted.  If the argument from ignorance objection were applied in peer review to all science papers submitted for publication, the science journals would be nearly empty of any documents to reference.  Science is not based upon philosophical objections and arguments.  Science is based upon the definition of science, which is observation, a falsifiable hypothesis, experimentation, results, and conclusion. It is the fact that these methodical elements are in place that makes science what it is supposed to be, and that is empiricism.

Scientific Method

Whether a scientific hypothesis is falsifiable is not affected by philosophical arguments based upon logic fallacies.   Irreducible complexity is very much falsifiable based upon its definition.  The argument from ignorance only attacks the significance of the results and conclusions of research on irreducible complexity; it doesn’t prevent irreducible complexity from being falsifiable.  In fact, the argument from ignorance objection actually emphasizes just the opposite, that irreducible complexity might be falsified tomorrow, because it inherently argues the optimistic view that it is just a matter of time before an evolutionary pathway is discovered in future research.  This is not a bad thing; the fact that irreducible complexity is falsifiable is a good thing.  That testability and attainable goalpost is what you want in a scientific hypothesis.

ID Theory Is Much More Than Just The One Hypothesis of Irreducible Complexity:

ID Theory is also an applied science; click here for examples in biomimicry.  Intelligent Design is an applied science in areas of bioengineering, nanotechnology, selective breeding, and bioinformatics, to name a few applications.  ID Theory is a study of information and design in nature.  And, there are design-inspired conjectures as to where the source of information originates, such as the rapidly growing field of quantum biology, Natural Genetic Engineering, and front-loading via panspermia.

In conclusion, the prediction that certain biochemical systems exist which are irreducibly complex is not a gaps argument.  The definition of irreducible complexity is stated above, and it is very much a testable, repeatable, and falsifiable hypothesis.  It is a prediction that certain molecular machinery will not operate upon the removal of a part, and has no stepwise evolutionary precursor.  This was predicted by Behe 17 years ago, and it still remains true, as evidenced by the bacterial flagellum, as an example.

*  Even though these two are technically distinguishable logic fallacies, the argument from incredulity is so similar to the argument from ignorance that for purposes of discussion I treat the terms as synonymous.

Posted in LOGIC FALLACIES | Tagged , , , , | 1 Comment

RESPONSE TO THE MARK PERAKH CRITIQUE, “THERE IS A FREE LUNCH AFTER ALL: WILLIAM DEMBSKI’S WRONG ANSWERS TO IRRELEVANT QUESTIONS”

I. INTRODUCTION

This essay is a reply to chapter 11 of the book authored by Mark Perakh entitled, Why Intelligent Design Fails: A Scientific Critique of the New Creationism (2004).  The chapter can be reviewed here.  Chapter 11, “There is a Free Lunch After All: William Dembski’s Wrong Answers to Irrelevant Questions,” is a rebuttal to the book written by William Dembski entitled, No Free Lunch (2002).  Mark Perakh also authored another anti-ID book, “Unintelligent Design.”  The Discovery Institute replied to Perakh’s work here.

The book by William Dembski, No Free Lunch (2002), is a sequel to his classic, The Design Inference (1998). The Design Inference used mathematical theorems to define design in terms of chance and statistical improbability.  In The Design Inference, Dembski explains complexity, and demonstrates that when complex information is specified, it indicates design.  Simply put, Complex Specified Information (CSI) = design.  CSI is the technical term that mathematicians, information theorists, and ID scientists can work with to determine whether some phenomenon or complex pattern is designed.

One of the most important contributors to ID Theory is American mathematician Claude Shannon, who is considered to be the father of Information Theory. Essentially, ID Theory is a sub-theory of Information Theory in the field of Bioinformatics. This is one of Dembski’s areas of expertise.

Claude Shannon is seen here with Theseus, his magnetic mouse. The mouse was designed to search through the corridors until it found the target.

Claude Shannon pioneered the foundations of modern Information Theory.  The units of information he identified, which can be quantified and applied in fields such as computer science, are still called Shannon information to this day.

Shannon invented a mouse that was programmed to navigate through a maze to search for a target, concepts that are integral to Dembski’s mathematical theorems of which are based upon Information Theory.  Once the mouse solved the maze it could be placed anywhere it had been before and use its prior experience to go directly to the target. If placed in unfamiliar territory, the mouse would continue the search until it reached a known location and then proceed to the target.  The ability of the device to add new knowledge to its memory is believed to be the first occurrence of artificial learning.

In 1950 Shannon published a paper on computer chess entitled Programming a Computer for Playing Chess. It describes how a machine or computer could be made to play a reasonable game of chess. His process for having the computer decide which move to make is a minimax procedure, based on an evaluation function of a given chess position. Shannon gave a rough example of an evaluation function in which the value of the black position was subtracted from that of the white position. Material was counted according to the usual relative piece values. (http://en.wikipedia.org/wiki/Claude_Shannon).
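
As a rough sketch of the material-counting idea Shannon described (this is my own simplification using the conventional piece values; Shannon’s actual function also scored positional factors), the evaluation simply subtracts Black’s material from White’s:

    PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}  # conventional relative values

    def material_score(white_pieces, black_pieces):
        # Shannon-style material evaluation: White's total minus Black's total.
        # Positive favors White, negative favors Black.
        white = sum(PIECE_VALUES.get(p, 0) for p in white_pieces)
        black = sum(PIECE_VALUES.get(p, 0) for p in black_pieces)
        return white - black

    # Example: White is up a rook, Black is up a knight, so the score is 5 - 3 = +2.
    print(material_score(["Q", "R", "R", "P", "P"], ["Q", "R", "N", "P", "P"]))  # 2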

Shannon’s work involved having the computer program scan the possibilities for any given configuration on the chess board to determine the optimum move to make.  As you will see, this kind of search within a phase space for a target, guided by a fitness function (the evaluation function in computer chess), is exactly what the debate over Dembski’s No Free Lunch (NFL) theorems is about.

When Robert Deyes wrote a review on Stephen Meyer’s “Signature In The Cell,” he noted “When talking about ‘information’ and its relevance to biological design, Intelligent Design theorists have a particular definition in mind.”  Stephen Meyer explained in “Signature In The Cell” that information is: “the attribute inherent in and communicated by alternative sequences or arrangements of something that produce specific effects” (p.86).

When Shannon unveiled his theory for quantifying information, it included several axioms, one of which is that information is inversely proportional to uncertainty. Similarly, design can be contrasted with chance.

II. COMPLEX SPECIFIED INFORMATION (CSI):

CSI is based upon the theorem:

sp(E) and SP(E) → D(E)

When a small probability (SP) event (E) is complex, and

SP(E) = [P(E|I) < the Universal Probability Bound]. Or, in English, we know an event E is a small probability event when the probability of event E given I is less than the Universal Probability Bound. I = all relevant side information and all stochastic hypotheses. This is all in Dembski’s book, The Design Inference.

An event E is specified by a pattern independent of E, or expressed mathematically: sp(E). Upper case letters SP(E) denote the small probability event we are attempting to determine is CSI, or designed. Lower case letters sp(E) denote a prediction that we will discover the SP(E). Therefore, if sp(E) and SP(E), then D(E). D(E) means the event E is not only small probability, but we can conclude it is designed.

Dembski’s Universal Probability Bound = 0.5 x 10^-150, or 0.5 times 10 to the negative 150th power. This is the magic number at which one can scientifically be justified in invoking design. It’s been said that the probability Dembski states must be matched in order to ascribe design is like announcing in advance, before dealing, that you are going to be dealt 24 Royal Flushes in a row, and then having the event play out exactly as forecast. In other words, just as intelligence might be entirely illusory, so likewise CSI is nothing other than a mathematical ratio that might not have anything in the world to do with actual design.

The probability of dealing a Royal Flush, given all combinations of 5 cards randomly drawn from a full deck of 52 without replacement, is 649,739 to 1. According to Dembski, if someone were dealt a Royal Flush 24 times in a row upon an advance announcement predicting such a happening, his contention would be that it was so improbable that someone cheated, or “design” would have had to be involved.

The probability of being dealt a Royal flush given all combinations of 5 cards randomly drawn from a full deck of 52 without replacement is 649,739 to 1.
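
To make the arithmetic concrete, here is a minimal Python sketch (my own illustration) of the single-deal probability and of a comparison against the probability bound as it is stated in this essay:

    from math import comb

    # Four suits give four royal flushes out of C(52, 5) possible five-card hands.
    p_royal_flush = 4 / comb(52, 5)
    print(round(1 / p_royal_flush))  # 649740, i.e., odds of roughly 649,739 to 1

    UNIVERSAL_PROBABILITY_BOUND = 0.5e-150  # the bound as quoted in this essay

    def below_upb(probability: float) -> bool:
        # Design is only ascribed when an event's probability falls below the bound.
        return probability < UNIVERSAL_PROBABILITY_BOUND

    print(below_upb(p_royal_flush))  # False: a single lucky deal is nowhere near the bound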

I’m oversimplifying CSI just for the sake of making this point that we have already imposed upon the term “design” a technical definition that requires no intelligence or design as we understand the everyday normal use of the words.  What’s important is that just as improbable as it is to be dealt a Royal Flush, so likewise the level of difficulty natural selection is up against to produce what appears to be designed in nature. And, when CSI is observed in nature, which occurs occasionally, then that not only confirms ID predictions, and defies Darwinian gradualism, but also tips a scientist a clue that such might be evidence of additional ID-related mechanisms at work.

It is true that William Dembski’s theorems are based upon an assumption that we can quantify everything in the universe; no argument there. But, he only used that logic to derive his Universal Probability Bound, which is a nearly infinitesimally small number: 0.5 x 10^-150, or 0.5 times 10 to the negative 150th power. Do you not think that when a probability is this low it is a safe bet to invoke corruption of natural processes by an intelligent agency? The number is a useful number.

I wrote two essays on CSI to provide a better understanding of the specified complexity introduced in Dembski’s book, The Design Inference.  In this book, Dembski introduces and expands on the meaning of CSI, and then proceeds to present reasoning as to why CSI infers design.  The first essay I wrote on CSI here is an elementary introduction to the overall concept.  I wrote a second essay here that provides a more advanced discussion of CSI.

CSI does show up in nature. That’s the whole point of the No Free Lunch principle: there is no way by which evolution can take credit for the occasions when CSI shows up in nature.

III. NO FREE LUNCH

Basically, the book, “No Free Lunch,” is a sequel to the earlier work, The Design Inference. While we get more calculations that confirm and verify Dembski’s earlier work, we also get new assertions made by Dembski. It is very important to note that ID Theory is based upon the CSI established in The Design Inference. The main benefit of the second book, “No Free Lunch,” is that it further validates and verifies CSI.  The importance of this fact cannot be overemphasized. Additionally, “No Free Lunch” further confirms the validity of the assertion that design is inseparable from intelligence.

Before “No Free Lunch,” there was little effort demonstrating that CSI is connected to intelligence. That’s a problem because CSI = design. So, if CSI = design, it should be demonstrable that CSI correlates and is directly proportional to intelligence. This is the thesis of what the book, “No Free Lunch” sets out to do. If “No Free Lunch” fails to successfully support the thesis that CSI correlates to intelligence, that would not necessarily impair ID Theory, but if Dembski succeeds, then it would all the more lend credibility to ID Theory and certainly all of Dembski’s work as well.

IV. PERAKH’S ARGUMENT

The outline of Perakh’s critique of Dembski’s No Free Lunch theorems is as follows:

1.    Methinks It Is like a Weasel—Again
2.    Is Specified Complexity Smuggled into Evolutionary Algorithms?
3.    Targetless Evolutionary Algorithms
4.    The No Free Lunch Theorems
5.    The NFL Theorems—Still with No Mathematics
6.    The No Free Lunch Theorems—A Little Mathematics
7.    The Displacement Problem
8.    The Irrelevance of the NFL Theorems
9.    The Displacement “Problem”

1.  METHINKS IT IS LIKE A WEASEL – AGAIN

One common demonstration to help people understand how CSI works is to take a letter sequence. This can be done with anything, but the common example is this pattern:

METHINKS•IT•IS•LIKE•A•WEASEL

This letter arrangement is used most often to describe CSI because the math has already been worked out. The bullets represent spaces. There are 27 possibilities at each location in a symbol string 28 characters in length. If the search were entirely random it would take on the order of 1 x 10^40 tries (that’s 10 to the 40th power, or 1 followed by 40 zeroes) to hit the target. It’s a small probability. However, natural selection (NS) is smarter than that, and Richard Dawkins has shown how cumulative selection can reach the target in an impressive 43 generations, as Dembski notes here.

In this example, the improbability was only 1 in 10^40. CSI requires an even higher threshold than that. If you take a pattern or model, such as METHINKS•IT•IS•LIKE•A•WEASEL, and you keep adding information, you soon reach improbabilities that are within the domain of CSI.

Dembski’s explanation to the target sequence of METHINKS•IT•IS•LIKE•A•WEASEL is as follows:

“Thus, in place of 10^40 tries on average for pure chance to produce the target sequence, by employing the Darwinian mechanism it now takes on average less than 100 tries to produce it. In short, a search effectively impossible for pure chance becomes eminently feasible for the Darwinian mechanism.

“So does Dawkins’s evolutionary algorithm demonstrate the power of the Darwinian mechanism to create biological information? No. Clearly, the algorithm was stacked to produce the outcome Dawkins was after. Indeed, because the algorithm was constantly gauging the degree of difference between the current sequence from the target sequence, the very thing that the algorithm was supposed to create (i.e., the target sequence METHINKS•IT•IS•LIKE•A•WEASEL) was in fact smuggled into the algorithm from the start. The Darwinian mechanism, if it is to possess the power to create biological information, cannot merely veil and then unveil existing information. Rather, it must create novel information from scratch. Clearly, Dawkins’s algorithm does nothing of the sort.

“Ironically, though Dawkins uses a targeted search to illustrate the power of the Darwinian mechanism, he denies that this mechanism, as it operates in biological evolution (and thus outside a computer simulation), constitutes a targeted search. Thus, after giving his METHINKS•IT•IS•LIKE•A•WEASEL illustration, he immediately adds: “Life isn’t like that.  Evolution has no long-term goal. There is no long-distant target, no final perfection to serve as a criterion for selection.” [Footnote] Dawkins here fails to distinguish two equally valid and relevant ways of understanding targets: (i) targets as humanly constructed patterns that we arbitrarily impose on things in light of our needs and interests and (ii) targets as patterns that exist independently of us and therefore regardless of our needs and interests. In other words, targets can be extrinsic (i.e., imposed on things from outside) or intrinsic (i.e., inherent in things as such).

“In the field of evolutionary computing (to which Dawkins’s METHINKS•IT•IS•LIKE•A•WEASEL example belongs), targets are given extrinsically by programmers who attempt to solve problems of their choice and preference. Yet in biology, living forms have come about without our choice or preference. No human has imposed biological targets on nature. But the fact that things can be alive and functional in only certain ways and not in others indicates that nature sets her own targets. The targets of biology, we might say, are “natural kinds” (to borrow a term from philosophy). There are only so many ways that matter can be configured to be alive and, once alive, only so many ways it can be configured to serve different biological functions. Most of the ways open to evolution (chemical as well as biological evolution) are dead ends. Evolution may therefore be characterized as the search for alternative “live ends.” In other words, viability and functionality, by facilitating survival and reproduction, set the targets of evolutionary biology. Evolution, despite Dawkins’s denials, is therefore a targeted search after all.” (http://evoinfo.org/papers/ConsInfo_NoN.pdf).

Weasel Graph

This graph was presented by a blogger who ran a single run of the weasel algorithm, plotting the fitness of the “best match” for n = 100 and u = 0.2.
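
For readers who want to see what such a run involves, here is a minimal weasel-style sketch in Python (my own reconstruction, not Dawkins’s original program or the blogger’s code; the population size echoes the run above, while the mutation rate is lowered so the sketch converges quickly):

    import random

    TARGET = "METHINKS IT IS LIKE A WEASEL"   # the bullets in the text stand for spaces
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "  # 27 symbols; the blind search space is 27**28, about 1.2 x 10**40
    POPULATION = 100                          # offspring per generation (n = 100 above)
    MUTATION_RATE = 0.05                      # per-character mutation probability (illustrative)

    def fitness(candidate: str) -> int:
        # Number of characters that already match the target.
        return sum(c == t for c, t in zip(candidate, TARGET))

    def mutate(parent: str) -> str:
        # Copy the parent, flipping each character with probability MUTATION_RATE.
        return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                       for c in parent)

    def weasel() -> int:
        # Cumulative selection toward a fixed target; returns the generations used.
        parent = "".join(random.choice(ALPHABET) for _ in TARGET)
        generation = 0
        while parent != TARGET:
            generation += 1
            offspring = [mutate(parent) for _ in range(POPULATION)]
            parent = max(offspring + [parent], key=fitness)  # keep the best match seen so far
        return generation

    print("Reached the target in", weasel(), "generations")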

Perakh doesn’t make any argument here, but introduces the METHINKS IT IS LIKE A WEASEL configuration as the initial focus of what is to follow.  The only derogatory comment he makes about Dembski is to charge that Dembski is “inconsistent.”  But, there’s no excuse to accuse Dembski of any contradiction. Perakh states himself, “Evolutionary algorithms may be both targeted and targetless” (page 2).  He also admits that Dembski was correct in that “Searching for a target IS teleological” (page 2).  Yet, Perakh faults Dembski for simply noting the teleological inference, and falsely accuses Dembski of contradicting himself on this issue when there is no contradiction.  There’s no excuse for Perakh to accuse Dembski of being inconsistent here when all he did was acknowledge that teleology should be noted and taken into account when discussing the subject.

Perakh also states on page 3 that Dembski lamented over the observation made by Dawkins.  This is unfounded rhetoric and ad hominem that does nothing to support Perakh’s claims.  There is no basis to assert or benefit to gain by suggesting that Dembski was emotionally dismayed because of the observations made by Dawkins.  The issue is a talking point for discussion.

Perakh correctly represents the fact, “While the meaningful sequence METHINKSITISLIKEAWEASEL is both complex and specified, a sequence NDEIRUABFDMOJHRINKE of the same length, which is gibberish, is complex but not specified” (page 4).  And, then he correctly reasons the following,

“If, though, the target sequence is meaningless, then, according to the above quotation from Behe, it possesses no SC. If the target phrase possesses no SC, then obviously no SC had to be “smuggled” into the algorithm.” Hence, if we follow Dembski’s ideas consistently, we have to conclude that the same algorithm “smuggles” SC if the target is meaningful but does not smuggle it if the target is gibberish.” (Emphasis in original, page 4)

Perakh then arrives at the illogical conclusion that such reasoning is “preposterous because algorithms are indifferent to the distinction between meaningful and gibberish targets.”  Perakh is correct that algorithms are indifferent to teleology and making distinctions.  But, he has no basis to criticize Dembski on this point.

Completed Jigsaw Puzzle

This 40-piece jigsaw puzzle is more complex than the Weasel problem, which consists of only the letters M, E, T, H, I, N, K, S, L, A, W, plus a space.

In the Weasel problem submitted by Richard Dawkins, the solution (target) was provided to the computer up front.  The solution to the puzzle was embedded in the letters provided to the computer to arrange into an intelligible sentence.  The same analogy applies to a jigsaw puzzle.  There is only one end-result picture the puzzle pieces can be assembled to achieve.  The information of the picture is embedded in the pieces and is not lost merely by cutting the image into pieces.  One can still solve the puzzle while blinded up front from seeing what the target looks like.   There is only one solution to the Weasel problem, so it is a matter of deduction, and not a blind search as Perakh maintains.   The task the Weasel algorithm had to perform was to unscramble the letters and rearrange them in the correct sequence.

The METHINKS•IT•IS•LIKE•A•WEASEL phrase was given up front as the target of the fitness function, and it was intentionally designed CSI to begin with.  It’s a matter of the definition of specified complexity (SC): if information is both complex and specified, then it is CSI by definition, and CSI = SC; they are two ways of expressing the same concept.  Perakh is correct that the algorithm has nothing, in and of itself, to do with the specified complexity of the target phrase.  The reason the target phrase is specified complexity is that the complex pattern was specified up front to be the target in the first place, independently of the algorithm.  So far, then, Perakh has not made an argument.

Dembski makes subsequent comments about the weasel math here and here.

2.  IS SPECIFIED COMPLEXITY SMUGGLED INTO EVOLUTIONARY ALGORITHMS?

Perakh asserts on page 4 that “Dembski’s modified algorithm is as teleological as Dawkins’s original algorithm.”  So what?  This is a pointless red herring that Perakh continues to work, supporting no argument against Dembski.  It’s essentially a non-argument.  All sides, Dembski, Dawkins, and Perakh himself, have conceded up front that discussion of this topic is difficult without stumbling over anthropomorphism.  Dembski noted it up front, which is commendable; but somehow Perakh wrongly tags this as some fallacy that Dembski is committing.

Personifying the algorithms as having teleological behavior was a fallacy noted up front, so there is no basis for Perakh to allege that Dembski is somehow misapplying any logic in his discussion.  The point was acknowledged by all participants from the very beginning.  Perakh is not adding anything new here; he is merely belaboring a point that was already noted.  He has also yet to raise an actual argument.

Dembski wrote in No Free Lunch (194–196) that evolutionary algorithms do not generate CSI, but can only “smuggle” it from a “higher order phase space.”  CSI is also called specified complexity (SC).  Perakh makes the ridiculous claim on page 4 that this point is irrelevant to biological evolution, but offers no reasoning as to why.  To support his challenge against Dembski, Perakh states, “since biological evolution has no long-term target, it requires no injection of SC.”

The question is whether it is possible that a biological algorithm caused the existence of the CSI.  Dembski says yes, and the theorems he established in The Design Inference are enough to satisfy the claim.  But Perakh is arguing here that the genetic algorithm is capable of generating the CSI.  Perakh states that natural selection is unaware of its result (page 4), which is true.  Then he says Dembski must “offer evidence that extraneous information must be injected into the natural selection algorithm apart from that supplied by the fitness functions that arise naturally in the biosphere.”  Dembski shows this in “Life’s Conservation Law – Why Darwinian Evolution Cannot Create Biological Information.”

3.  TARGETLESS EVOLUTIONARY ALGORITHMS

Biomorphs

Biomorphs

Next, Perakh raises the example made by Richard Dawkins in “The Blind Watchmaker,” in which Dawkins uses what he calls “biomorphs” as an argument against artificial selection.  While Dawkins exhibits an imaginative jab to ridicule ID Theory, raising the subject again does little for Perakh.  Dawkins used the illustration of biomorphs to contrast natural selection with the artificial selection upon which ID Theory is based.  It’s an excellent example.  I commend Dawkins on coming up with these biomorph algorithms.  They are unique and original.

The biomorphs created by Dawkins are actually different intersecting lines of various degrees of complexity, and resemble the Rorschach figures often used by psychologists and psychiatrists.  Biomorphs depict both inanimate objects, like a cradle and a lamp, and biological forms such as a scorpion, spider, and bat.  It is an entire departure from evolution, as it is impossible to make any logical connection as to how a fox would evolve into a lunar lander, or how a tree frog would morph into a precision balance scale.  Since the idea is a departure from evolutionary logic of any kind, because no rationale is provided to connect any of the forms, it would seem impossible to devise an algorithm that fits biomorphs.

Essentially, Dawkins used these biomorphs to propose a metaphysical conjecture.  His intent is to suggest that ID Theory is a metaphysical contemplation while natural selection is entirely logical reality.  Dawkins explains that the point of raising the idea of biomorphs is:

“… when we are prevented from making a journey in reality, the imagination is not a bad substitute. For those, like me, who are not mathematicians, the computer can be a powerful friend to the imagination. Like mathematics, it doesn’t only stretch the imagination. It also disciplines and controls it.”

Biomorphs submitted by Richard Dawkins from The Blind Watchmaker, figure 5 p. 61

This is an excellent point and well taken. The idea Dawkins had to reference biomorphs in the discussion was brilliant.  Biomorphs are an excellent means of helping someone distinguish natural selection from artificial selection.  This is exactly the same point design theorists make when protesting the personification of natural selection to achieve reality-defying accomplishments.  What we can conclude is that scientists, regardless of whether they accept or reject ID Theory, dislike the invention of fiction to fill in unknown gaps of phenomena.

In the case of ID Theory, yes, the theory of intelligent design is based upon artificial selection, just as Dawkins notes with his biomorphs.  But unlike biomorphs, and contrary to Dawkins’s claim, ID Theory is still based upon fully natural scientific conjectures.

4.  THE NO FREE LUNCH THEOREMS

In this section of the argument, Perakh doesn’t provide an argument.  He’s more interested in talking about his hobby, which is mountain climbing.

The premise offered by Dembski that Perakh seeks to refute is the statement in No Free Lunch which reads, “The No Free Lunch theorems show that for evolutionary algorithms to output CSI they had first to receive a prior input of CSI.” (No Free Lunch, page 223).  Somehow, Perakh believes he can prove Dembski’s theorems false.  In order to accomplish that task, one would have to analyze Dembski’s theorems.  First of all, Dembski’s theorems take into account all the possible factors and variables that might apply, as opposed to the algorithms only.  Perakh makes nothing close to such an evaluation.  Instead, he does nothing but use the mountain climbing analogy to demonstrate that we cannot know exactly which algorithms natural selection will promote and which it will overlook.  This fact is a given up front and not in dispute.  As such, Perakh presents a non-argument here that does nothing to challenge Dembski’s theorems in the slightest.  Perakh doesn’t even discuss the theorems, let alone refute them.

The whole idea of the No Free Lunch theorems is to demonstrate how CSI is smuggled across many generations and then shows up visibly in a phenotype of a life form countless generations later.  Many factors must be contemplated in this process, including evolutionary algorithms.  Dembski’s book, No Free Lunch, is about demonstrating how CSI is smuggled through, which is the whole point from which the book’s name is derived.  If CSI is not manufactured by evolutionary processes, including genetic algorithms, then it has been displaced from the time it was initially front-loaded.  Hence, there’s no free lunch.

Front-Loading could be achieved several ways, one of which is via panspermia.

But, Perakh makes no attempt to discuss the theorems in this section, much less refute Dembski’s work.  I’ll discuss front-loading in the Conclusion.

5.  THE NO FREE LUNCH THEOREMS—STILL WITH NO MATHEMATICS

Perakh finally makes a valid point here.  He highlights a weakness in Dembski’s book: the calculations provided do little to account for the average performance of multiple algorithms operating at the same time.

Referencing his mountain climbing analogy from the previous section, Perakh explains the fitness function is the height of peaks in a specific mountainous region.  In his example he designates the target of the search to be a specific peak P of height 6,000 meters above sea level.

“In this case the number n of iterations required to reach the predefined height of 6,000 meters may be chosen as the performance measure.  Then algorithm a1 performs better than algorithm a2 if a1 converges on the target in fewer steps than a2. If two algorithms generated the same sample after m iterations, then they would have found the target—peak P—after the same number n of iterations. The first NFL theorem tells us that the average probabilities of reaching peak P in m steps are the same for any two algorithms” (Emphasis in the original, page 10).

Since any two algorithms have an equal average performance when all possible fitness landscapes are included, the average number n of iterations required to locate the target is the same for any two algorithms, provided the averaging is done over all possible mountainous landscapes.

Therefore, Perakh concludes the no free lunch theorems of Dembski do not say anything  about the relative performance of algorithms a2 and a1 on a specific landscape. On a specific landscape, either a2 or a1 may happen to be much better than its competitor.  Perakh goes on to apply the same logic in a targetless context as well.

These points Perakh raises are well taken.  Subsequent to the writing of Perakh’s book in 2004, Dembski ultimately provided the supplemental math to cure these issues in his paper entitled, “Searching Large Spaces: Displacement and the No Free Lunch Regress” (March 2005), which is available for review here.  It should also be noted that Perakh concludes this section of chapter 11 by admitting that the No Free Lunch theorems “are certainly valid for evolutionary algorithms.”  If that is so, then there is no dispute.

6.  THE NO FREE LUNCH THEOREMS—A LITTLE MATHEMATICS

It is noted that Dembski’s first no free lunch theorem is correct. It considers any given algorithm performed m times. The result is a time-ordered sample d comprising m measured values of f within the range Y. Let P be the conditional probability of having obtained a given sample after m iterations, for given f, Y, and m.

Then, the first equation is

Σ_f P(d_m^y | f, m, a1) = Σ_f P(d_m^y | f, m, a2)

when a1 and a2 are two different algorithms, and the sum runs over all possible fitness functions f.

Perakh emphasizes this summation is performed over “all possible fitness functions.”   In other words, Dembski’s first theorem proves that when algorithms are averaged over all possible fitness landscapes the results of a given search are the same for any pair of algorithms.  This is the most basic of Dembski’s theorems, but the most limited for application purposes.
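
That claim is easy to check on a toy case. The following sketch is my own illustration rather than anything from Dembski or Perakh: it enumerates every possible fitness function on a three-point search space with two fitness values, runs two different deterministic non-repeating search algorithms for m = 2 iterations each, and tallies how often each time-ordered sample of fitness values occurs. The two tallies come out identical, which is what the first NFL theorem predicts.

    from itertools import product
    from collections import Counter

    X = [0, 1, 2]      # a tiny search space
    Y = [0, 1]         # the possible fitness values
    m = 2              # number of iterations per run

    def run(algorithm, f, m):
        # Run a non-repeating search on fitness function f and return the
        # time-ordered sample of observed fitness values d_m^y.
        visited, values = [], []
        for _ in range(m):
            x = algorithm(visited, values)
            visited.append(x)
            values.append(f[x])
        return tuple(values)

    def a1(visited, values):
        # Always scan the space left to right.
        return next(x for x in X if x not in visited)

    def a2(visited, values):
        # Scan right to left, unless the last value seen was a 1, in which
        # case jump back to the lowest unvisited point.
        order = X if values and values[-1] == 1 else list(reversed(X))
        return next(x for x in order if x not in visited)

    def sample_counts(algorithm):
        # Tally the samples produced over ALL fitness functions f: X -> Y.
        counts = Counter()
        for ys in product(Y, repeat=len(X)):
            f = dict(zip(X, ys))
            counts[run(algorithm, f, m)] += 1
        return counts

    print(sample_counts(a1))   # each possible sample occurs equally often
    print(sample_counts(a2))   # identical tally, despite the different strategy

On any single fitness function one of the two algorithms may reach a high value sooner than the other; it is only the sum over all fitness functions that is forced to come out equal, which is the limitation Perakh presses in the sections that follow.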

The second equation applies the first one for time-dependent landscapes.  Perakh notes several difficulties in the no free lunch theorems including the fact that evolution is a “coevolutionary” process.  In other words, Dembski’s theorems apply to ecosystems that involve a set of genomes all searching for the same fixed fitness function.  But, Perakh argues that in the real biological world, the search space changes after each new generation.  The genome of any given population slightly evolves from one generation to the next.  Hence, the search space that the genomes are searching is modified with each new generation.

Chess

The game of chess is played one procedural (evolutionary) step at a time. With each successive move (mutation) on the chessboard, the chess-playing algorithm must search a new and different board configuration to determine which next move the computer program (natural selection) should select.

The no free lunch models discussed here are comparable to the computer chess game mentioned above.  With each slight modification (Darwinian gradualism) in the step-by-step process of the chess game, the pieces end up in different locations on the chessboard, so the search process starts all over again with a new and different target than the one in the preceding search.

There is one optimum move that is better than others, which might be a preferred target.  Any other reasonable move on the chessboard is a fitness function.  But, the problem in evolution is not as clear. Natural selection is not only blind, and therefore conducts a blind search, but does not know what the target should be either.

Where Perakh is leading with this foundation is the suggestion, in the next section, that given a target up front, as the chess-solving algorithm has, there might be enough information in the description of the target itself to assist the algorithm in at least locating a fitness function.  Whether Perakh is correct can be tested by applying the math.

As aforementioned, subsequent to the publication of Perakh’s book, Dembski ultimately provided the supplemental math to cure these issues in his paper entitled, “Searching Large Spaces: Displacement and the No Free Lunch Regress” (March 2005), which is available for review here.  It should also be noted that Perakh concludes this section of the chapter by admitting that the No Free Lunch theorems “are certainly valid for evolutionary algorithms.”

7.  THE DISPLACEMENT PROBLEM

As already mentioned, the no free lunch theorems show that for evolutionary algorithms to output CSI they must first have received a prior input of CSI.  There’s a term to describe this: it’s called displacement.  Dembski wrote in a paper entitled “Evolution’s Logic of Credulity:
An Unfettered Response to Allen Orr” (2002) that the key point of writing No Free Lunch concerns displacement.  The “NFL theorems merely exemplify one instance not the general case.”

Dembski continues to explain displacement,

“The basic idea behind displacement is this: Suppose you need to search a space of possibilities. The space is so large and the possibilities individually so improbable that an exhaustive search is not feasible and a random search is highly unlikely to conclude the search successfully. As a consequence, you need some constraints on the search – some information to help guide the search to a solution (think of an Easter egg hunt where you either have to go it cold or where someone guides you by saying ‘warm’ and ‘warmer’). All such information that assists your search, however, resides in a search space of its own – an informational space. So the search of the original space gets displaced to a search of an informational space in which the crucial information that constrains the search of the original space resides” (Emphasis in the original, http://www.arn.org/docs/dembski/wd_logic_credulity.htm).

8.  THE IRRELEVANCE OF THE NFL THEOREMS

In the conclusion of his paper, Searching Large Spaces: Displacement and the No Free Lunch Regress (2005), Dembski writes:

“To appreciate the significance of the No Free Lunch Regress in this latter sense, consider the case of evolutionary biology. Evolutionary biology holds that various (stochastic) evolutionary mechanisms operating in nature facilitate the formation of biological structures and functions. These include preeminently the Darwinian mechanism of natural selection and random variation, but also others (e.g., genetic drift, lateral gene transfer, and symbiogenesis). There is a growing debate whether the mechanisms currently proposed by evolutionary biology are adequate to account for biological structures and functions (see, for example, Depew and Weber 1995, Behe 1996, and Dembski and Ruse 2004). Suppose they are. Suppose the evolutionary searches taking place in the biological world are highly effective assisted searches qua stochastic mechanisms that successfully locate biological structures and functions. Regardless, that success says nothing about whether stochastic mechanisms are in turn responsible for bringing about those assisted searches.” (https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.330.8289&rep=rep1&type=pdf).

Up until this juncture, Perakh admits, “Within the scope of their legitimate interpretation—when the conditions assumed for their derivation hold—the NFL theorems certainly apply” to evolutionary algorithms.  The only issue he has raised so far in his critique is his argument that the NFL theorems do not hold in the case of coevolution.  However, subsequent to this critique, Dembski resolved those issues.

Here, Perakh reasons that even if the NFL theorems were valid for coevolution, he would still reject Dembski’s work because the theorems are irrelevant.  According to Perakh, if evolutionary algorithms can outperform random sampling, a “blind search,” then the NFL theorems are meaningless.  Perakh bases this assertion on the statement by Dembski on page 212 of No Free Lunch, which provides, “The No Free Lunch theorems show that evolutionary algorithms, apart from careful fine-tuning by a programmer, are no better than blind search and thus no better than pure chance.”

Therefore, for Perakh, if evolutionary algorithms refute this comment by Dembski by outperforming a blind search, then this is evidence the algorithms are capable of generating CSI.  If evolutionary algorithms generate CSI, then Dembski’s NFL theorems have been soundly falsified, along with ID Theory as well.  If such were the case, then Perakh would be correct, the NFL theorems would indeed be irrelevant.

Perakh rejects the intelligent design “careful fine-tuning by a programmer” terminology in favor of what he regards as an equally reasonable premise:

“If, though, a programmer can design an evolutionary algorithm which is fine-tuned to ascend certain fitness landscapes, what can prohibit a naturally arising evolutionary algorithm to fit in with the kinds of landscape it faces?” (Page 19)

Perakh explains how his thesis can be illustrated:

“Naturally arising fitness landscapes will frequently have a central peak topping relatively smooth slopes. If a certain property of an organism, such as its size, affects the organism’s survivability, then there must be a single value of the size most favorable to the organism’s fitness. If the organism is either too small or too large, its survival is at risk. If there is an optimal size that ensures the highest fitness, then the relevant fitness landscape must contain a single peak of the highest fitness surrounded by relatively smooth slopes” (Page 20).

The graphs in Fig. 11.1 schematically illustrate Perakh’s thesis:

Fitness Function

This is Figure 11.1 in Perakh’s book – Fitness as a function of some characteristic, in this case the size of an animal. Solid curve – the schematic presentation of a naturally arising fitness function, wherein the maximum fitness is achieved for a certain single-valued optimal animal’s size. Dashed curve – an imaginary rugged fitness function, which hardly can be encountered in the existing biosphere.

Subsequent to Perakh’s book, published in 2004, Dembski did indeed resolve the issue raised here in his paper, “Conservation of Information in Search: Measuring the Cost of Success” (Sept. 2009), http://evoinfo.org/papers/2009_ConservationOfInformationInSearch.pdf. Dembski’s “Conservation of Information” paper starts with the foundation that laws of information have already been discovered, and that ideas such as Perakh’s thesis were falsified back in 1956 by Leon Brillouin, a pioneer in information theory.  Brillouin wrote, “The [computing] machine does not create any new information, but it performs a very valuable transformation of known information” (L. Brillouin, Science and Information Theory. New York: Academic, 1956).

In their paper, “Conservation of Information,” Dembski and his coauthor, Robert Marks, go on to demonstrate how laws of conservation of information render evolutionary algorithms incapable of generating CSI as Perakh had hoped.  Throughout this chapter, Perakh continually cited the works of the information theorists Wolpert and Macready.  On page 1051 of “Conservation of Information” (2009), Dembski and Marks also quote Wolpert and Macready:

“The no free lunch theorem (NFLT) likewise establishes the need for specific information about the search target to improve the chances of a successful search.  ‘[U]nless you can make prior assumptions about the . . . [problems] you are working on, then no search strategy, no matter how sophisticated, can be expected to perform better than any other.’  Search can be improved only by “incorporating problem-specific knowledge into the behavior of the [optimization or search] algorithm” (D. Wolpert and W. G. Macready, ‘No free lunch theorems for optimization,’ IEEE Trans. Evol. Comput., vol. 1, no. 1, pp. 67–82, Apr. 1997).”

In “Conservation of Information” (2009), Dembski and Marks resoundingly demonstrate how conservation of information theorems indicate that even a moderately sized search requires problem-specific information to be successful.  The paper proves that any search algorithm performs, on average, as well as random search without replacement unless it takes advantage of problem-specific information about the search target or the search-space structure.

Throughout “Conservation of information” (2009), the paper discusses evolutionary algorithms at length:

“Christensen and Oppacher note the ‘sometimes outrageous claims that had been made of specific optimization algorithms.’ Their concern is well founded. In computer simulations of evolutionary search, researchers often construct a complicated computational software environment and then evolve a group of agents in that environment. When subjected to rounds of selection and variation, the agents can demonstrate remarkable success at resolving the problem in question.  Often, the claim is made, or implied, that the search algorithm deserves full credit for this remarkable success. Such claims, however, are often made as follows: 1) without numerically or analytically assessing the endogenous information that gauges the difficulty of the problem to be solved and 2) without acknowledging, much less estimating, the active information that is folded into the simulation for the search to reach a solution.” (Conservation of information, page 1058).

Dembski and Marks remind us that the concept Perakh is suggesting for evolutionary algorithms to outperform a blind search is the same scenario in the analogy of the proverbial monkeys typing on keyboards.

Monkeys at typewriters are a classic analogy for describing the chances of evolution successfully achieving specified complexity.

A monkey at a typewriter is a good illustration of the viability of random evolutionary search.  Dembski and Marks run the calculations for good measure, using a factor of 27 (a 26-letter alphabet plus a space) and a 28-character message.  The answer is 1.59 × 10^42, which is more than the mass of 800 million suns in grams.

In their Conclusion, Dembski and Marks state:

 “Endogenous information represents the inherent difficulty of a search problem in relation to a random-search baseline. If any search algorithm is to perform better than random search, active information must be resident. If the active information is inaccurate (negative), the search can perform worse than random. Computers, despite their speed in performing queries, are thus, in the absence of active information, inadequate for resolving even moderately sized search problems. Accordingly, attempts to characterize evolutionary algorithms as creators of novel information are inappropriate.” (Conservation of information, page 1059).
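
The bookkeeping behind these terms is straightforward. The short sketch below uses the definitions from the paper, endogenous information I_Ω = −log2(p) and active information I_+ = log2(q/p), where p is the probability that a single blind (uniform random) query hits the target and q is the success probability of the assisted search; the numbers plugged in at the end are hypothetical, chosen only to show the arithmetic.

    from math import log2

    def endogenous_information(p):
        # I_Omega = -log2(p): the difficulty of the problem relative to blind
        # search, where p is the chance that a single uniform random query
        # lands on the target.
        return -log2(p)

    def active_information(p, q):
        # I_plus = log2(q / p): how much the assisted search improves on blind
        # search, where q is the assisted search's success probability.
        # A negative value means the "assistance" performs worse than random.
        return log2(q / p)

    # Hypothetical numbers purely for illustration: a target occupying one of
    # 2**20 equally likely states, and an assisted search that finds it half
    # the time.
    p = 1 / 2**20
    q = 0.5
    print(endogenous_information(p))   # 20.0 bits of problem difficulty
    print(active_information(p, q))    # 19.0 bits supplied by the assistance

A negative I_+ corresponds to the “inaccurate (negative)” active information mentioned in the quotation, where the assisted search does worse than random.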

9.  THE DISPLACEMENT “PROBLEM”

This argument is based upon the claim by Dembski on page 202 of his book, No Free Lunch, in which he states, “The significance of the NFL theorems is that an information-resource space J does not, and indeed cannot, privilege a target T.”  However, Perakh highlights a problem with Dembski’s statement because the NFL theorems contain nothing about any “information-resource space.”  If Dembski wanted to introduce this concept within the framework of the NFL theorems, then he should have at least shown what the role of an “information-resource space” is in view of the “black-box” nature of the algorithms in question.

On page 203 of No Free Lunch, Dembski introduces the displacement problem:

“… the problem of finding a given target has been displaced to the new problem of finding the information j capable of locating that target. Our original problem was finding a certain target within phase space. Our new problem is finding a certain j within the information-resource space J.”

Perakh adds that the NFL theorems are indifferent to the presence or absence of a target in a search, which leaves the “displacement problem,” with its constant references to targets, hanging in the air.

Dembski’s response is as follows:

“What is the significance of the Displacement Theorem? It is this. Blind search for small targets in large spaces is highly unlikely to succeed. For a search to succeed, it therefore needs to be an assisted search. Such a search, however, resides in a target of its own. And a blind search for this new target is even less likely to succeed than a blind search for the original target (the Displacement Theorem puts precise numbers to this). Of course, this new target can be successfully searched by replacing blind search with a new assisted search. But this new assisted search for this new target resides in a still higher-order search space, which is then subject to another blind search, more difficult than all those that preceded it, and in need of being replaced by still another assisted search.  And so on. This regress, which I call the No Free Lunch Regress, is the upshot of this paper. It shows that stochastic mechanisms cannot explain the success of assisted searches.

“This last statement contains an intentional ambiguity. In one sense, stochastic mechanisms fully explain the success of assisted searches because these searches themselves constitute stochastic mechanisms that, with high probability, locate small targets in large search spaces. Yet, in another sense, for stochastic mechanisms to explain the success of assisted searches means that such mechanisms have to explain how those assisted searches, which are so effective at locating small targets in large spaces, themselves arose with high probability.  It’s in this latter sense that the No Free Lunch Regress asserts that stochastic mechanisms cannot explain the success of assisted searches.” [Searching Large Spaces: Displacement and the No Free Lunch Regress (2005)].

Perakh makes some valid claims.  Several years after the publication of Perakh’s book, Dembski provided updated calculations for the NFL theorems and his application of math to the displacement problem.  This is available for review in his paper, “The Search for a Search: Measuring the Information Cost of Higher Level Search” (2010).

Perakh discusses the comments made by Dembski to support the assertion that CSI must necessarily be “smuggled” or “front-loaded” into evolutionary algorithms.  Perakh outright rejects Dembski’s claims and proceeds to dismiss Dembski’s work on very weak grounds, in what appears to be a hand-wave that begs the question of how the CSI was generated in the first place and relies on circular reasoning.

Remember that the basis of the NFL theorems is to show that when CSI shows up in nature, it is only because it originated earlier in the evolutionary history of that population and got smuggled into the genome of the population by regular evolution.  The CSI might have been front-loaded millions of years earlier in the biological ancestry, possibly in higher taxa.  Regardless of where the CSI originated, Dembski’s claim is that the CSI appears now because it was inserted earlier; evolutionary processes do not generate CSI.

The smuggling forward of CSI in the genome is called displacement.  The reason the alleged law of nature called displacement occurs is that, when Information Theory is applied to identify CSI, the target of the search theorems is the CSI itself.  Dembski explains,

“So the search of the original space gets displaced to a search of an informational space in which the crucial information that constrains the search of the original space resides. I then argue that this higher-order informational space (‘higher’ with respect to the original search space) is always at least as big and hard to search as the original space.” (Evolution’s Logic of Credulity: An Unfettered Response to Allen Orr, 2002.)

It is important to understand what Dembski means by displacement here because Perakh distorts displacement to mean something different in this section.  Perakh asserts:

“An algorithm needs no information about the fitness function. That is how the ‘black-box’ algorithms start a search. To continue the search, an algorithm needs information from the fitness function. However, no search of the space of all possible fitness function is needed. In the course of a search, the algorithm extracts the necessary information from the landscape it is exploring. The fitness landscape is always given, and automatically supplies sufficient information to continue and to complete the search.” (Page 24)

To support these contentions, Perakh references Dawkins’s weasel algorithm for comparison.  The weasel algorithm, says Perakh, “explores the available phrases and selects from them using the comparison of the intermediate phrases with the target.” Perakh then argues that the fitness function in the weasel example has the built-in information necessary to perform the comparison, and concludes,

“This fitness function is given to the search algorithm; to provide this information to the algorithm, no search of a space of all possible fitness functions is needed and therefore is not performed.” (Emphasis in original, Page 24)

If Perakh is right, then the same is true for natural evolutionary algorithms. Having bought his own circular reasoning, he then declares that his argument renders Dembski’s “displacement problem” “a phantom.” (Page 24)

One of the problems with this argument is that Perakh admits that there is CSI, yet offers no explanation as to how it originates and increases in the genome of a population to produce greater complexity.  Perakh is begging the question.  He offers no math, no algorithm, no calculations, no example.  He merely imposes his own properties of displacement upon the application, which is a strawman argument, and then shoots down displacement.  There’s no attempt to derive how the algorithm ever finds the target in the first place, which is disappointing given that Dembski provides the math to support his own claims.

Perakh appears to be convinced that evolutionary algorithmic searches taking place in the biological world are highly effective assisted searches that successfully locate target biological structures and functions.  And, as such, he is satisfied that these evolutionary algorithms can generate CSI. What Perakh needs to remember is that a genuine evolutionary algorithm is still a stochastic mechanism. The hypothetical success of the evolutionary algorithm says nothing about whether stochastic mechanisms are in turn responsible for bringing about those assisted searches.  Dembski explains,

“Evolving biological systems invariably reside in larger environments that subsume the search space in which those systems evolve. Moreover, these larger environments are capable of dramatically changing the probabilities associated with evolution as occurring in those search spaces. Take an evolving protein or an evolving strand of DNA. The search spaces for these are quite simple, comprising sequences that at each position select respectively from either twenty amino acids or four nucleotide bases. But these search spaces embed in incredibly complex cellular contexts. And the cells that supply these contexts themselves reside in still higher-level environments.” [Searching Large Spaces: Displacement and the No Free Lunch Regress (2005), pp. 31-32]

Dembski argues that the uniform probability on the search space almost never characterizes the system’s evolution; instead, it is a nonuniform probability that brings the search to a successful conclusion, and it is the larger environment that imposes that nonuniform probability on the scenario.  Dembski notes that Richard Dawkins made the same point as Perakh in Climbing Mount Improbable (1996).  In that book, Dawkins argued that biological structures that at first appearance seem impossible with respect to uniform probability, blind search, pure randomness, etc., become probable when the probabilities are reset by evolutionary mechanisms.

Propagation

This diagram shows propagation of active information
through two levels of the probability hierarchy.

The kind of search Perakh presents is also addressed in “The Search for a Search: Measuring the Information Cost of Higher Level Search” (2010).  The blind search Perakh complains of is that of uniform probability.  In this kind of problem, given any probability measure on Ω, Dembski’s calculations indicate the active entropy for any partition with respect to a uniform probability baseline will be nonpositive (The Search for a Search, page 477).  We have no information available about the search in Perakh’s example.  All Perakh gives us is that the fitness function provides the evolutionary algorithm clues so that the search is narrowed.  But we don’t know what that information is.  Perakh is just speculating that, given enough attempts, the evolutionary algorithm will get lucky and outperform the blind search.  Again, this describes uniform probability.

According to Dembski’s more intensive mathematical analysis, if no information about a search exists so that the underlying measure is uniform, which matches Perakh’s example, “then, on average, any other assumed measure will result in negative active information, thereby rendering the search performance worse than random search.” (The Search for a Search, page 477).

Dembski expands on the scenario:

“Presumably this nonuniform probability, which is defined over the search space in question, splinters off from richer probabilistic structures defined over the larger environment. We can, for instance, imagine the search space being embedded in the larger environment, and such richer probabilistic structures inducing a nonuniform probability (qua assisted search) on this search space, perhaps by conditioning on a subspace or by factorizing a product space. But, if the larger environment is capable of inducing such probabilities, what exactly are the structures of the larger environment that endow it with this capacity? Are any canonical probabilities defined over this larger environment (e.g., a uniform probability)? Do any of these higher level probabilities induce the nonuniform probability that characterizes effective search of the original search space? What stochastic mechanisms might induce such higher-level probabilities?  For any interesting instances of biological evolution, we don’t know the answer to these questions. But suppose we could answer these questions. As soon as we could, the No Free Lunch Regress would kick in, applying to the larger environment once its probabilistic structure becomes evident.” [Searching Large Spaces: Displacement and the No Free Lunch Regress (2005), pp. 32]

The probabilistic structure would itself require explanation in terms of stochastic mechanisms.  And, the No Free Lunch Regress blocks any ability to account for assisted searches in terms of stochastic mechanisms.  See “Searching Large Spaces: Displacement and the No Free Lunch Regress” (2005).

Dembski has since updated his theorems by supplying additional math and considerations.  The NFL theorems are now analyzed with both vertical and horizontal considerations, illustrated in three-dimensional space.

3-D Geometry

3-D Geometric Application of NFL Theorems

This diagram shows a three-dimensional simplex in {ω1, ω2, ω3}.  The numerical values of a1, a2 and a3 are one.  The 3-D box in the figure presents two congruent triangles in a geometric approach to presenting a proof of the Strict Vertical No Free Lunch Theorem.  In “The Search for a Search: Measuring the Information Cost of Higher Level Search” (2010), the NFL theorems are analyzed both horizontally and vertically.  The Horizontal NFL Theorem shows that the average relative performance of searches never exceeds that of unassisted, or blind, searches.  The Vertical NFL Theorem shows that the difficulty of searching for a successful search increases exponentially with respect to the minimum allowable active information being sought.

This leads to the displacement principle, which holds that “the search for a good search is at least as difficult as a given search.”  Perakh might have raised a good point, but Dembski has done the math and confirmed his theorems are correct.  Dembski’s math does work out; he has provided the proofs and shown the work.  Perakh, on the other hand, merely offered an argument that was nothing but unverified speculation, with no calculations to validate his point.

V.  CONCLUSION

In the final section of this chapter, Perakh reiterates the main points throughout his article for review. He begins by saying,

“Dembski’s critique of Dawkins’s ‘targeted’ evolutionary algorithm fails to repudiate the illustrative value of Dawkins’s example, which demonstrates how supplementing random changes with a suitable law increases the rate of evolution by many orders of magnitude.” (Page 25)

No, this is a strawman.  There was nothing Perakh submitted to establish such a conclusion.  Neither Dembski nor the Discovery Institute has any dispute with Darwinian mechanisms of evolution.  The issue is whether ONLY such mechanisms are responsible for specified complexity (CSI).  Intelligent Design proponents do not challenge that “supplementing random changes with a suitable law increases the rate of evolution by many orders of magnitude.”

Next, Perakh claims, “Dembski ignores Dawkins’s ‘targetless’ evolutionary algorithm, which successfully illustrates spontaneous increase of complexity in an evolutionary process.” (Page 25).

No, this isn’t true.  First, Dembski did not ignore Dawkins’s weasel algorithm.  Second, the weasel algorithm isn’t targetless; we’re given the target up front and know exactly what it is.  Third, the weasel algorithm did not show any increase in specified complexity: all the letters in the sequence already existed.  When evolution operates in the real biological world, the genome of the population is reshuffled from one generation to the next.  No new information is added that leads to greater complexity; the morphology results from the same information being rearranged.

In the case of the Weasel example, the target was already embedded in the original problem, just like one and only one full picture is possible to assemble from pieces of a jigsaw puzzle.  When the puzzle is completed, not one piece should be missing, unless one was lost, and there should not be one extra piece too many.  The CSI was the original picture that was cut up into pieces to be reassembled.  The Weasel example is actually a better illustration for front-loading.  All the algorithm had to do was figure out how to arrange the letters back into the proper intelligible sequence.

The CSI was specified in the target or fitness function up front to begin with.  The point of the NFL theorems indicates that if the weasel algorithm was a real life evolutionary example, then that complex specified information (CSI) would have been inputted into the genome of that population in advance.  But, the analogy quickly breaks down for many reasons.

Perakh then falsely asserts, “Contrary to Dembski’s assertions, evolutionary algorithms routinely outperform a random search.”  (Page 25). This is false.  Perakh speculated that this was a possibility, and Dembski not only refuted it, but demonstrated that, averaged over all fitness landscapes, evolutionary algorithms do not outperform a random search unless problem-specific information has been supplied.

Perakh next maintains:

“Contrary to Dembski assertion, the NFL theorems do not make Darwinian evolution impossible. Dembski’s attempt to invoke the NFL theorems to prove otherwise ignores the fact that these theorems assert the equal performance of all algorithms only if averaged over all fitness functions.” (Page 25).

No, there’s no such assertion by Dembski.  This is nonsense.  Intelligent Design proponents do not assert any false dichotomy.  ID Theory supplements evolution, providing the conjecture necessary to really explain the specified complexity.  Darwinian evolution still occurs, but it only explains inheritance and diversity.  It is ID Theory that explains complexity.  As far as the NFL theorems asserting the equal performance of all or any algorithms to solve blind searches, this is ridiculous and never was established by Perakh.

Perakh also claims:

“Dembski’s constant references to targets when he discusses optimization searches are based on his misinterpretation of the NFL theorems, which entail no concept of a target. Moreover, his discourse is irrelevant to Darwinian evolution, which is a targetless process.” (Page 25).

No, Dembski did not misinterpret the NFL theorems upon which his own theorems are built.  The person who misunderstands and misrepresents them is Perakh.  Statements like this cause one to question whether Perakh understands what CSI is, either.  Notice the trend in his writing: when Perakh looked for support for an argument, he referenced those who have authored rebuttals in opposition to Dembski’s work, but when he looked for an authority to explain the meaning of Dembski’s work, he nearly always cited Dembski himself.  Perakh never performs any math to support his own challenges.  Finally, Perakh never established anywhere that Dembski misunderstood or misapplied any of the principles of Information Theory.

Finally, Perakh ends the chapter with this gem:

“The arguments showing that the anthropic coincidences do not require the hypothesis of a supernatural intelligence also answer the questions about the compatibility of fitness functions and evolutionary algorithms.” (Page 25).

This is a strawman.  ID Theory has nothing to do with the supernatural.  If it did, then it would not be a scientific theory by the definition of science, which is based upon empiricism.  As is obvious in this debate, Intelligent Design theory is more aligned with Information Theory than most sciences.  ID Theory is not about teleology; it is more about front-loading.

William Dembski’s work is based upon pitting “design” against chance. In his book The Design Inference, he used mathematical theorems and formulas to devise a definition of design based upon mathematical probability. It’s an empirical way to work with improbable, complex information patterns and sequences. It’s called specified complexity, also known as complex specified information (CSI). There’s no contemplation as to the source of the information other than it being front-loaded.  ID Theory only involves a study of the information (CSI) itself. Design = CSI. We can study CSI because it is observable.

There is absolutely no speculation of any kind to suggest that the source of the information is extraterrestrial beings or any other kind of designer, natural or non-natural. The study is only of the information (CSI) itself, nothing else. There are several non-Darwinian conjectures as to how the information can develop without the need for designers, including panspermia, natural genetic engineering, and what’s called “front-loading.”

In ID, “design” does not require designers. It can be treated as derived from “intelligence,” as per William Dembski’s book No Free Lunch, but he uses mathematics to support his work, not metaphysics. The intelligence could be illusory. All the theorems detect is extreme improbability, because that’s all the math can do. And it’s called Complex Specified Information. It’s the information that ID Theory is about. There’s no speculation into the nature of the intelligent source, assuming that Dembski was right in determining the source is intelligent in the first place. All that is really required is a transporter of the information, which could be an asteroid colliding with Earth while carrying complex DNA in the genome of some unicellular organism. You don’t need a designer to validate ID Theory; ID has nothing to do with designers except for engineers and intelligent agents that are actually observable.


COMPLEX SPECIFIED INFORMATION (CSI) – An Explanation of Specified Complexity

This entry is a sequel to my original blog essay on CSI, which was a more elementary discussion that can be reviewed here.   Complex Specified Information (CSI) is also called specified complexity.  CSI is a concept that is not original to Dr. William Dembski.  Specified Complexity was first noted in 1973 by Origin of Life researcher, Leslie Orgel:

Living organisms are distinguished by their specified complexity. Crystals fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity. [ L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189. Emphases added.]

Before beginning this discussion on CSI, it should be understood first as to why it is important.  The scientific theory of Intelligent Design (ID) is based upon important concepts, such as design, information, and complexity.  Design in the context of ID Theory is discussed in terms of CSI.  The following is the definition of ID Theory:

Intelligent Design Theory in Biology is the scientific theory that artificial intervention is a universally necessary condition of the first initiation of life, development of the first cell, and increasing information in the genome of a population leading to greater complexity evidenced by the generation of original biochemical structures.

Authorities:

* Official Discovery Institute definition: http://www.intelligentdesign.org/whatisid.php
* Stephen Meyer’s definition: http://www.discovery.org/v/1971
* Casey Luskin’s Discussion: http://www.evolutionnews.org/2009/11/misrepresenting_the_definition028051.html
* William Dembski’s definition: http://www.uncommondescent.com/id-defined

Please observe that this expression of ID Theory does not appeal to any intelligence or designer. Richard Dawkins was correct when he said that what is thought to be design is illusory. Design is defined by William Dembski as Complex Specified Information (CSI).

“Intelligent Design” is an extremely anthropomorphic concept in itself.  The Discovery Institute does not work much with the term “intelligence.” The key to ID Theory is not in the term “intelligence,” but in William Dembski’s work in defining design. And, that is “Complex Specified Information” (CSI). It’s CSI that is the technical term that ID scientists work with. Dembski produced the equations, ran the calculations, and provided a scientifically workable method to determine whether some phenomenon is “designed.” According to Dembski’s book, “The Design Inference” (1998), CSI is based upon statistical probability.

CSI is based upon the theorem:

sp(E) and SP(E) —-> D(E)

When a small probability (SP) event (E) is complex, and

SP(E) = [P(E|I) < the Universal Probability Bound]. Or, in English, we know an event E is a small probability event when the probability of event E given I is less than the Universal Probability Bound. I = all relevant side information and all stochastic hypotheses. This is all in Dembski’s book, “The Design Inference.”

An event E is specified by a pattern independent of E, expressed mathematically as sp(E). Upper-case SP(E) denotes the small probability event we are attempting to determine is CSI, or designed. Lower-case sp(E) is a prediction that we will discover the SP(E). Therefore, if sp(E) and SP(E), then D(E). D(E) means the event E is not only small probability, but we can conclude it is designed.
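
Expressed as code, the theorem above is just a conjunction test. The sketch below captures only that logic; it is not Dembski’s actual procedure for computing P(E|I) or for establishing that a pattern is independently specified, both of which are assumed to have been worked out separately.

    UNIVERSAL_PROBABILITY_BOUND = 0.5e-150   # Dembski's threshold, discussed below

    def design_inference(p_event_given_info, independently_specified):
        # sp(E) and SP(E) -> D(E)
        # p_event_given_info:      P(E|I), probability of event E given all
        #                          relevant side information and stochastic hypotheses
        # independently_specified: sp(E), whether the pattern was specified
        #                          independently of (and in advance of) the event
        small_probability = p_event_given_info < UNIVERSAL_PROBABILITY_BOUND   # SP(E)
        return small_probability and independently_specified                   # D(E)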

Royal Flush

Dembski’s Universal Probability Bound = 0.5 × 10^-150, or 0.5 times 10 to the negative 150th power. This is the threshold at which one is scientifically justified in invoking design. It has been said that, using Dembski’s formula, the probability that must be matched in order to ascribe design is comparable to announcing in advance that you are going to be dealt 24 Royal Flushes in a row, and then having the event play out exactly as forecast. In other words, just as intelligence might be entirely illusory, so likewise CSI is nothing other than a mathematical ratio that might not have anything in the world to do with actual design.

The odds against dealing a Royal Flush, given all combinations of 5 cards randomly drawn from a full deck of 52 without replacement, are 649,739 to 1.  According to Dembski, if someone were to be dealt a Royal Flush 24 times in a row upon an advance announcement predicting such a happening, his contention would be that the event was so improbable that someone cheated, or “design” would have had to be involved.
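
The odds quoted above are easy to verify with the standard combinatorial count of five-card hands; nothing beyond the numbers already stated in the text is assumed.

    from math import comb

    ROYAL_FLUSHES = 4                     # one per suit
    FIVE_CARD_HANDS = comb(52, 5)         # 2,598,960 distinct five-card hands

    probability = ROYAL_FLUSHES / FIVE_CARD_HANDS
    odds_against = FIVE_CARD_HANDS // ROYAL_FLUSHES - 1

    print(f"P(royal flush) = {probability:.2e}")       # about 1.54e-06
    print(f"Odds against   = {odds_against:,} to 1")   # 649,739 to 1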

(I should say parenthetically here that I am oversimplifying CSI just for the sake of making this point that we have already imposed upon the term “design” a technical definition that requires no intelligence or design as we understand the everyday normal use of the words. Therefore, for one to take issue with the ingenious marketing term “Intelligent Design” is meaningless because what label the theory is called is irrelevant.  Such a dispute on that issue is nothing other than haggling about nomenclature. The Discovery Institute could have labeled their product by any name. I would have preferred the title, “Bio-information Theory,” but the name is inconsequential.)

A helpful way to understand CSI is this: just as it is improbable to be dealt a Royal Flush, so is the level of difficulty natural selection is up against in producing what appears to be designed in nature.  When CSI is observed in nature, which occurs occasionally, it not only confirms ID predictions and defies Darwinian gradualism, but also gives a scientist a clue that it might be evidence of additional ID-related mechanisms at work.

It is true that William Dembski’s theorems are based upon an assumption that we can quantify everything in the universe; no argument there. But he only used that logic to derive his Universal Probability Bound, which is an almost infinitesimally small number: 0.5 × 10^-150, or 0.5 times 10 to the negative 150th power.  Do you not think that when a probability is this low, it is a safe bet to invoke corruption of natural processes by an intelligent agency? The number is a useful number. If anyone believes the ratio Dembski submitted is flawed, I would request that person offer a different number that they believe would more accurately eliminate chance in favor of design.

Design theorists are interested in studying complexity.  The more complex the information is, the better.  CSI is best understood as some kind of pattern that is so improbable that the chance of such a configuration occurring by sheer happenstance is extremely small. Dembski’s formulas and theorems work best when there is an extremely low probability.

It is a given that we do not know everything in the universe, including intangible variables such as dark matter, where neutrinos go when they zip in and out of our universe, and cosmic background radiation. Dembski was aware of these unknown variables; it isn’t as if he ignored them when deriving his theorems. The Universal Probability Bound is not a perfectly absolute number, but it is very much a scientifically workable number, no less credible than the variables used to work the equations in support of the Big Bang Theory. So, if one seeks to disqualify CSI on the sole basis that we do not know everything in the universe, then they have just eliminated the Big Bang Theory as scientifically viable.

A religious person is welcome to invoke a teleological inference of a deity, but the moment one does that, they have departed from science and are entertaining religion.  CSI might or might not infer design. That’s the whole point of Dembski’s book, “The Design Inference.” In the book he expands on the meaning of CSI, and then proceeds to present his reasoning as to why CSI infers design. While those who reject ID Theory are seeing invisible designers hiding behind every tree, the point Dembski makes is we must first establish design to begin with.

The Delicate Arch in Utah.  Is this bridge a product of design?

The Delicate Arch in Utah is a natural bridge.  It is difficult to debate whether this is an example of specified complexity because some might argue the natural arch is “specified” in the sense that it is a natural bridge.

The arguments in favor of natural arches being specified would emphasize the meaningfulness and functionality of the monument as a bridge.  Also, the mere fact that attention is drawn to this particular natural formation is in and of itself a form of specification.

Arguments against such a natural arch being specified would emphasize the fact that human experience has already observed geological processes capable of producing such a natural formation.  Also, a natural arch is a one-of-a-kind structure; no two arches resemble each other in such detail that the identity of one could be mistaken for the other.  Finally, the concept emphasized by William Dembski is that specification relates to a prediction.  In other words, had someone drawn this arch in advance on a piece of paper without ever having seen the actual monument, and later the land formation were discovered in Utah as an exact replica of the drawing, then design theorists would declare the information specified.

The meaning of the term specified is very important to understanding CSI.  The term “specified” in a certain sense either directly or indirectly refers to a prediction. If someone deals you a Royal Flush, the pattern would be complex. If you’re dealt a Royal Flush again several consecutive times, someone at the poker table is going to accuse you of cheating. The sequence now is increasing in improbability and complexity.  A Royal Flush is specified because it is a pattern that many people are aware of and have identified in advance.

Now, if you or the dealer ANNOUNCES IN ADVANCE that you are going to be dealt a Royal Flush, and sure enough it happens, then there is no longer any question that the target sequence was “specified.”

Dembski explains how the notion of being specified might best be understood in discussing what he calls conditionally independent patterns.  In applying his Explanatory Filter, Dembski states:

The patterns that in the presence of complexity or improbability implicate a designing intelligence must be independent of the event whose design is in question. Crucial here is that patterns not be artificially imposed on events after the fact. For instance, if an archer shoots arrows at a wall and we then paint targets around the arrows so that they stick squarely in the bull’s-eyes, we impose a pattern after the fact. Any such pattern is not independent of the arrow’s trajectory. On the other hand, if the targets are set up in advance (“specified”) and then the archer hits them accurately, we know it was not by chance but rather by design (provided, of course, that hitting the targets is sufficiently improbable). The way to characterize this independence of patterns is via the probabilistic notion of conditional independence. A pattern is conditionally independent of an event if adding our knowledge of the pattern to a chance hypothesis does not alter the event’s probability under that hypothesis. The specified in specified complexity refers to such conditionally independent patterns. These are the specifications.  [From: William A. Dembski, The Design Revolution: Answering the Toughest Questions About Intelligent Design (Downers Grove, IL: InterVarsity Press, 2004), 81.]

Mount Rushmore is an example of CSI because it relays information that is both complex and specified. More than just the complexity of a hillside, it features the specified information of identifiable U.S. presidents.

Is Mount Rushmore a product of natural processes or an intelligent cause?  Most people would likely agree that this rock formation in the Black Hills of South Dakota is the result of intelligent design.  I believe an intelligent agent is responsible for this rock formation, and that belief is based upon reasoning.  Notice that when you determined for yourself that this monument is a deliberately sculpted work of an intelligent cause (I assume you did), you did not draw upon a religious view to arrive at that conclusion.

 

The crevices and natural coloration of the rock at Eagle Rock, California create a remarkable illusion of an eagle in flight.

Snowflakes also come up when contemplating CSI.  Snowflakes are very complex and appear to be specified as well.  However, despite the great detail, recognizable pattern, and beauty of a snowflake, no two snowflakes are alike.  A snowflake would be specified if a second one were found that identically matched the first.


The shapes of snow crystals are due to the laws of physics, which determine their regular six-pointed geometric pattern.  As such, a snowflake has no CSI whatsoever, because snowflakes are produced by natural processes: the snowflake is complex, but it is not complex specified information.  Meteorological conditions are also a factor in the shape a snow crystal takes, so snow is a product of both physical law and chance.  One more thing to note about snow crystals: because they form from atmospheric conditions governed by the laws of physics, they are complex, yet they retain a degree of simplicity despite the seemingly endless variety of shapes they can take.

William Dembski has been challenged about snowflakes in the past by critics who see snowflakes as every bit as complex as simple objects that are known to be designed.  It is true that the complexity of snow crystals makes them good candidates for evidence of design, and this is why the concept of being specified is so important.  However intricate the details of snow may be, it is the lack of specificity that keeps snow crystals from being CSI.  A shortcut test of whether snowflakes are designed would be to find two snowflakes that are identical.  The probability that one particular snowflake exists is 1 in 1; it is the small probability of an identical replica occurring a second time that would be evidence of design. This is what is meant by being specified.  Specification in the context of ID Theory is not mere intricacy of detailed patterns alone.

While some ID critics believe snowflakes refute Dembski’s Explanatory Filter, because they take the extreme improbability of any particular snowflake to imply that snowflakes are designed, I see it as just the opposite.  It is the fact that we know, as a given, that snowflakes are not designed that should lend us confidence in the cases where the Explanatory Filter determines some feature is designed.

This brings up an important point about CSI.  There are many instances where information is highly complex and appears to be specified as well, such as snowflakes.  Information can be arranged in varying degrees of complexity and specificity, yet it is only CSI when the improbability reaches the Universal Probability Bound.  So what do we call something that looks like CSI but is not CSI, because the pattern is determined not to be designed upon application of Dembski’s Explanatory Filter?  When a pattern looks like it might be CSI but actually isn’t, as with snowflakes, Dembski calls this specificational complexity.

Dembski explains:

Because they are patterns, specifications exhibit varying degrees of complexity. A specification’s degree of complexity determines how many specificational resources must be factored in when gauging the level of improbability needed to preclude chance (see the previous point). The more complex the pattern, the more specificational resources must be factored in. The details are technical and involve a generalization of what mathematicians call Kolmogorov complexity. Nevertheless, the basic intuition is straightforward. Low specificational complexity is important in detecting design because it ensures that an event whose design is in question was not simply described after the fact and then dressed up as though it could have been described before the fact.

To see what’s at stake, consider the following two sequences of ten coin tosses: HHHHHHHHHH and HHTHTTTHTH. Which of these would you be more inclined to attribute to chance? Both sequences have the same probability, approximately 1 in 1,000. Nevertheless, the pattern that specifies the first sequence is much simpler than the second. For the first sequence the pattern can be specified with the simple statement “ten heads in a row.” For the second sequence, on the other hand, specifying the pattern requires a considerably longer statement, for instance, “two heads, then a tail, then a head, then three tails, then heads followed by tails and heads.” Think of specificational complexity (not to be confused with specified complexity) as minimum description length. (For more on this, see <http://www.mdl-research.net>.)

For something to exhibit specified complexity it must have low specificational complexity (as with the sequence HHHHHHHHHH, consisting of ten heads in a row) but high probabilistic complexity (i.e., its probability must be small). It’s this combination of low specificational complexity (a pattern easy to describe in relatively short order) and high probabilistic complexity (something highly unlikely) that makes specified complexity such an effective triangulator of intelligence. But specified complexity’s significance doesn’t end there.

Besides its crucial place in the design inference, specified complexity has also been implicit in much of the self-organizational literature, a field that studies how complex systems emerge from the structure and dynamics of their parts. Because specified complexity balances low specificational complexity with high probabilistic complexity, specified complexity sits at that boundary between order and chaos commonly referred to as the “edge of chaos.” The problem with pure order (low specificational complexity) is that it is predictable and thus largely uninteresting. An example here is a crystal that keeps repeating the same simple pattern over and over. The problem with pure chaos (high probabilistic complexity) is that it is so disordered that it is also uninteresting. (No meaningful patterns emerge from pure chaos. An example here is the debris strewn by a tornado or avalanche.) Rather, it’s at the edge of chaos, neatly ensconced between order and chaos, that interesting things happen. That’s where specified complexity sits. [From: William A. Dembski, The Design Revolution: Answering the Toughest Questions About Intelligent Design (Downers Grove, IL: InterVarsity Press, 2004), 81.]
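
The coin-toss contrast in the passage above is easy to make concrete. The sketch below is my own illustration: the run-length description is only a crude stand-in for the Kolmogorov/minimum-description-length idea Dembski references, not his actual measure, but it shows that both sequences are equally improbable while only the first has a short description:

```python
from itertools import groupby

sequences = ["HHHHHHHHHH", "HHTHTTTHTH"]

for seq in sequences:
    # Probabilistic complexity: every specific outcome of ten fair-coin
    # tosses is equally improbable, 1 in 2**10 = 1,024.
    improbability = 2 ** len(seq)

    # Crude proxy for specificational complexity: a run-length description
    # of the pattern ("10H" is far shorter than a description that must
    # spell out many alternating runs).
    runs = [f"{len(list(g))}{symbol}" for symbol, g in groupby(seq)]
    description = " ".join(runs)

    print(f"{seq}: 1 in {improbability}, description: {description} "
          f"({len(description)} characters)")
```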

THAT’S WHY I NEVER WALK IN THE FRONT!

This Far Side cartoon is an illustration Michael Behe uses in his lectures to demonstrate that we can often deduce design.  Even though there is no trapper in sight, it is obvious from the scene that the snare was intentionally designed, as such machinery does not arise by sheer happenstance.  Behe also makes the point that no religious contemplation was required to conclude that someone deliberately set the trap, even though the agent who assembled the machine is absent.

This next image is extremely graphic, but it illustrates the same point.  Here, a Blue Duiker has been trapped in a snare.

Sometimes people just can’t decide whether a formation of information is a result of happenstance or intelligence.  A perfect example of what looks like design but might not be is the “monuments” on Mars.  Are the formations on Mars caused by chance or design?  Are they the work of natural processes or something artificial?  Here are some more interesting images that help someone better understand CSI in a simple way.

It is also interesting to note that the antagonists who so quickly scoff at ID because of the supposedly unfair inference to designers are automatically conceding design as a given. The teleological inference works both ways: if design points to a designer, then a designer requires design. Without design, a designer does not exist.

As such, if one desires to oppose ID Theory, a preferable argument would be to insist that design does not appear in nature at all, and to abandon the teleological inference.

Here’s more on Complex Specified Information (CSI):

* From the IDEA Center, http://www.ideacenter.org/contentmgr/showdetails.php/id/832

* By Dembski himself, http://www.arn.org/docs/dembski/wd_nfl_intro.htm

William Dembski’s book, “The Design Inference” (https://www.barnesandnoble.com/w/design-inference-william-a-dembski/1100942106).   The Discovery Institute has written about CSI here and here.

Darwinian mechanisms (which are based upon chance) will most likely not be the cause of CSI, because CSI is by definition a small-probability event.  CSI is not zero probability; it is small probability.  There is still a possibility that Darwinian mechanisms could produce CSI, but CSI is more likely to be caused by something that replaces the element of chance.  Darwinian mechanisms are based upon chance, while CSI is a low-probability ratio that exposes the absence of chance.  Whatever that absence of chance is (call it intelligence, design, artificial interruption, quantum theory, an asteroid, abiogenesis, RNA self-replication, some unknown pre-life molecular configuration, epigenetics, or whatever) is the most likely cause of CSI.  As such, ID scientists assume that CSI = design.

In another book written by Dembski, “No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence,” he explains why he thinks CSI is also linked with intelligence.   He further discusses his views here.

CSI is a mathematical probability ratio that exposes a small-probability event and removes chance from the equation.  And regardless of what you substitute to fill that vacuum, ID scientists substitute design.  So, in ID Theory, whenever the word “design” appears, it means CSI.  It is therefore ridiculous and false to impose designers onto ID Theory, because the ID definition of design is none other than CSI.

The point is that ID scientists define design as CSI.  Therefore, skeptics of ID Theory should cease invoking designers, because all “design” means in terms of ID Theory is the mathematical absence of chance, which is expressed as a low probability ratio.

CSI is an assumption, not an argument.  CSI is an axiom postulated up front based upon mathematical theorems; it is all couched in math.  Unless the small probability ratio reaches zero, no one working out the calculations is going to say “cannot.”  CSI is assumed to be design, and natural causes are assumed NOT to generate CSI, because CSI is by definition a small-probability event that favors the absence of chance.

We cannot be certain the source is an intelligent agency; CSI is based upon probabilities.  There are many who credit Darwinian evolution as the source of complexity.  This is illogical when running the design theorem calculations, but it is not impossible.  As Richard Dawkins has noted before, design can be illusory.  The hypothesis that Darwinian evolution is the cause of some small-probability event SP(E) could be correct, but according to the math it is highly improbable.

One common demonstration used to help people understand how CSI works is a letter sequence. It can be done with any string, but the most common example is this pattern:

METHINKS•IT•IS•LIKE•A•WEASEL

This letter arrangement is used most often to describe CSI because the math has already been worked out. The bullets represent spaces. There are 27 possibilities (26 letters plus a space) at each location in a symbol string 28 characters in length. If the search were entirely random, the odds of hitting the target would be about 1 in 10^40 (that is, 27^28, roughly a 1 followed by 40 zeroes), a very small probability. However, natural selection (NS) is smarter than that: Richard Dawkins has shown how cumulative selection can home in on the target phrase in a few dozen generations, 43 being the figure commonly cited from his runs (http://evoinfo.org/papers/ConsInfo_NoN.pdf).

In this example, the improbability was only about 1 in 10^40. CSI requires an even greater improbability than that. If you take a pattern or model, such as METHINKS•IT•IS•LIKE•A•WEASEL, and keep adding information, you soon reach improbabilities that fall within the domain of CSI.
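
For readers who want to see the contrast between blind search and cumulative selection, here is a minimal sketch in the spirit of Dawkins’s “weasel” demonstration. It is a generic reimplementation under assumed parameters (100 offspring per generation, a 5% per-character mutation chance), not Dawkins’s original program or the analysis by Marks and Dembski, so the generation count will vary from run to run rather than landing on exactly 43:

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "   # 27 symbols, as described in the text
MUTATION_RATE = 0.05                      # assumed per-character mutation chance
OFFSPRING = 100                           # assumed offspring per generation

def score(candidate: str) -> int:
    """Count positions that already match the target phrase."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent: str) -> str:
    """Copy the parent, randomizing each character with a small probability."""
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                   for c in parent)

# Cumulative selection: each generation keeps the best of many mutated copies.
parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    generation += 1
    parent = max([parent] + [mutate(parent) for _ in range(OFFSPRING)], key=score)

print(f"Target reached in {generation} generations; blind sampling would face "
      f"odds of roughly 1 in 27**28 (about 1.2e40).")
```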

For more of Dembski’s applications using his theorems, you might like to reference these papers:

http://evoinfo.org/papers/2010_TheSearchForASearch.pdf

http://marksmannet.com/RobertMarks/REPRINTS/2010-EfficientPerQueryInformationExtraction.pdf

This is a continuation of Claude Shannon’s work. One of the most important contributors to ID Theory is American mathematician Claude Shannon (http://en.wikipedia.org/wiki/Claude_Shannon), who is considered to be the father of Information Theory (http://en.wikipedia.org/wiki/Information_Theory). Essentially, ID Theory is a sub-theory of Information Theory in the field of Bioinformatics. This is one of Dembski’s areas of expertise, http://evoinfo.org/publications/.

When Robert Deyes wrote a review on Stephen Meyer’s “Signature In The Cell,” he noted “When talking about ‘information’ and its relevance to biological design, Intelligent Design theorists have a particular definition in mind.”   Stephen Meyer explained in “Signature In The Cell” that information is: “the attribute inherent in and communicated by alternative sequences or arrangements of something that produce specific effects” (p.86).

Shannon was instrumental in the development of computer science. He built an early maze-solving robotic mouse and wrote one of the first papers on programming a computer to play chess. When Shannon unveiled his theory for quantifying information, it included several axioms, one of which relates information to uncertainty: the improbability of an outcome can be converted into a measurable quantity of information. Similarly, design can be contrasted with chance.
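
In Shannon’s framework, the self-information (surprisal) of an outcome with probability p is –log2(p) bits, so less probable outcomes carry more bits. Here is a minimal sketch of that relationship; the example events are my own illustrations:

```python
from math import log2

def surprisal_bits(p: float) -> float:
    """Shannon self-information of an outcome with probability p, in bits."""
    return -log2(p)

# Rarer outcomes carry more information.
for label, p in [("fair coin flip", 0.5),
                 ("one specific card from a deck", 1 / 52),
                 ("a 5-card royal flush", 4 / 2598960)]:
    print(f"{label:30s} p = {p:.3g} -> {surprisal_bits(p):6.2f} bits")
```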

It was Peter Medawar who referred to these theorems as “The Law of Conservation of Information.” Dembski’s critics have accused his applications of being too heavily modified to be associated with the Law of Conservation of Information. There is no dispute that Dembski made modifications; he did so to apply the theorems to biology.

FROM UNCOMMON DESCENT GLOSSARY:

The Uncommon Descent blog further notes the following regarding CSI:

The concept of complex specified information helps us understand the difference between (a) the highly informational, highly contingent aperiodic functional macromolecules of life and (b) regular crystals formed through forces of mechanical necessity, or (c) random polymer strings. In so doing, they identified a very familiar concept — at least to those of us with hardware or software engineering design and development or troubleshooting experience and knowledge. Furthermore, on massive experience, such CSI reliably points to intelligent design when we see it in cases where we independently know the origin story.

What Dembski did with the CSI concept starting in the 1990s was to:

(i) recognize CSI’s significance as a reliable, empirically observable sign of intelligence,

(ii) point out the general applicability of the concept, and

(iii) provide a probability and information theory based explicitly formal model for quantifying CSI.

(iv) In the current formulation, as at 2005, his metric for CSI, χ (chi), is:

χ = –log2[10^120 · ϕS(T) · P(T|H)]

P(T|H) is the probability of being in a given target zone in a search space, on a relevant chance hypothesis (e.g., the probability of a hand of 13 spades from a shuffled standard deck of cards)

ϕS(T) is a multiplier based on the number of similarly simply and independently specifiable targets (e.g., having hands that are all Hearts, all Diamonds, all Clubs, or all Spades)

10^120 is the Seth Lloyd estimate for the maximum number of elementary bit-based operations possible in our observed universe, serving as a reasonable upper limit on the number of search operations.

log2[ . . . ] converts the modified probability into a measure of information in binary digits, i.e. specified bits. When this value is at least +1, we may reasonably infer the presence of design from the evidence of CSI alone. (For the example being discussed, χ = –361; i.e., odds of 1 in 635 billion are insufficient to confidently infer design on the gamut of the universe as a whole. But on the gamut of a card game here on Earth, that would be a very different story.) http://www.uncommondescent.com/glossary/
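
Taking the glossary’s formula at face value, the following minimal sketch reproduces the 13-spades example quoted above, including the value of about –361. The 10^120 factor is Seth Lloyd’s bound on bit operations in the observable universe, and ϕS(T) = 4 counts the four one-suit hands; this is my own worked illustration of the published formula, not code from Uncommon Descent:

```python
from math import comb, log2

# Chance hypothesis: probability of dealing 13 spades from a shuffled deck.
p_t_given_h = 1 / comb(52, 13)           # roughly 1 in 635 billion

phi_s_t = 4                              # all-Spades, -Hearts, -Diamonds, -Clubs
universe_ops = 10 ** 120                 # Seth Lloyd's operations bound

# chi = -log2[ 10^120 * phi_S(T) * P(T|H) ]
chi = -log2(universe_ops * phi_s_t * p_t_given_h)

print(f"P(T|H) = 1 in {comb(52, 13):,}")
print(f"chi    = {chi:.0f} bits  (design inferred only when chi is at least +1)")
```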

FSCI — “functionally specified complex information” (or, “function-specifying complex information” or — rarely — “functionally complex, specified information” [FCSI])) is a commonplace in engineered systems: complex functional entities that are based on specific target-zone configurations and operations of multiple parts with large configuration spaces equivalent to at least 500 – 1,000 bits; i.e. well beyond the Dembski-type universal probability bound.

In the UD context, it is often seen as a descriptive term for a useful subset of CSI first identified by origin of life researchers in the 1970s-80s. As Thaxton et al. summed up in their 1984 technical work that launched the design theory movement, The Mystery of Life’s Origin:

. . . “order” is a statistical concept referring to regularity such as might characterize a series of digits in a number, or the ions of an inorganic crystal. On the other hand, “organization” refers to physical systems and the specific set of spatio-temporal and functional relationships among their parts. Yockey and Wickens note that informational macromolecules have a low degree of order but a high degree of specified complexity.” [TMLO (FTE, 1984), Ch 8, p. 130.]

So, since in the cases of known origin such are invariably the result of design, it is confidently but provisionally inferred that FSCI is a reliable sign of intelligent design. http://www.uncommondescent.com/glossary/
