Thursday, April 26, 2012

A note to reviewers of papers by Dembski and Marks

William A. Dembski and Robert J. Marks II lace their engineering papers with subtle insinuations that will strike reviewers as somewhat strange, but that probably will not raise red flags. The only publication in which they give a crystal-clear explanation of their measure of active information, and state outright what they're trying to do with it, is the somewhat philosophical Life's Conservation Law: Why Darwinian Evolution Cannot Create Biological Information. Note that they previously referred to "English's Law of Conservation of Information" (a term they made up). English is telling you now that he did not understand their engineering papers until he read the one addressing biological evolution.

ABSTRACT: Laws of nature are universal in scope, hold with unfailing regularity, and receive support from a wide array of facts and observations. The Law of Conservation of Information (LCI) is such a law. LCI characterizes the information costs that searches incur in outperforming blind search. Searches that operate by Darwinian selection, for instance, often significantly outperform blind search. But when they do, it is because they exploit information supplied by a fitness function — information that is unavailable to blind search. Searches that have a greater probability of success than blind search do not just magically materialize. They form by some process. According to LCI, any such search-forming process must build into the search at least as much information as the search displays in raising the probability of success. More formally, LCI states that raising the probability of success of a search by a factor of q/p (> 1) incurs an information cost of at least log(q/p). LCI shows that information is a commodity that, like money, obeys strict accounting principles. This paper proves three conservation of information theorems: a function-theoretic, a measure-theoretic, and a fitness-theoretic version. These are representative of conservation of information theorems in general. Such theorems provide the theoretical underpinnings for the Law of Conservation of Information. Though not denying Darwinian evolution or even limiting its role in the history of life, the Law of Conservation of Information shows that Darwinian evolution is inherently teleological. Moreover, it shows that this teleology can be measured in precise information-theoretic terms. [emphasis added]
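To make the abstract's accounting concrete: Dembski and Marks measure "active information" as the log-ratio of an assisted search's success probability q to that of blind search p. A minimal sketch of that arithmetic (the particular values of p and q below are hypothetical, chosen only for illustration):

```python
from math import log2

def active_information(p, q):
    """I+ = log2(q/p): the abstract's lower bound on the information
    cost of raising a search's success probability from p to q."""
    return log2(q / p)

# Hypothetical example: blind search hits the target with probability
# p = 2**-20, while an assisted search hits it with q = 2**-10.
p, q = 2.0**-20, 2.0**-10
print(active_information(p, q))   # 10.0 (bits, with base-2 logarithms)
```

On this accounting, the assisted search is credited with 10 bits of active information, which LCI says must have been built into the search by whatever process formed it.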

You do not have to read far into the paper to find the claim that intelligence creates information to guide biological evolution. The passage I've highlighted contradicts the Conservation Lemma (I wish I hadn't called it that) that I proved in my first paper (1996) on "no free lunch" in so-called search. The fundamental reason there is no free lunch is that the "search" (which is nothing more than sampling, with performance measured on the sample) cannot gain exploitable information by evaluating the fitness function. This is really just a formalization of the famous problem of induction: observations say nothing about what has yet to be observed. Using observations to decide what to observe next is a source of sampling bias, not of information. Therefore, when the performance measured on a sample obtained by biased sampling is better or worse than the expected performance of uniform sampling ("blind search"), the difference can be explained only in terms of bias. I'll say much more in a forthcoming post.
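The "no free lunch" point can be checked by brute force on a toy problem: for any two deterministic, non-repeating black-box samplers, performance summed over all fitness functions is identical, no matter how cleverly one of them uses observed values to choose its next observation. A minimal sketch (the domain size, codomain, performance measure, and the adaptive rule are all arbitrary choices for illustration):

```python
from itertools import product

def run(policy, f, k):
    """Sample k distinct points according to policy; return the best value seen."""
    seen = []                              # (point, value) history
    for _ in range(k):
        x = policy(seen)
        seen.append((x, f[x]))
    return max(v for _, v in seen)

def fixed_order(seen):
    return len(seen)                       # sample points 0, 1, 2, ... in order

def adaptive(seen):
    """A value-dependent but non-repeating rule: branch on the last observation."""
    if not seen:
        return 2
    candidates = sorted(set(range(4)) - {x for x, _ in seen})
    return candidates[-1] if seen[-1][1] >= 1 else candidates[0]

k = 3
totals = {"fixed": 0, "adaptive": 0}
for f in product(range(3), repeat=4):      # all 3**4 = 81 fitness functions
    totals["fixed"] += run(fixed_order, f, k)
    totals["adaptive"] += run(adaptive, f, k)

print(totals)                              # the two sums are equal
```

Averaged over all fitness functions, the adaptive sampler's use of observed fitness values buys it nothing: any apparent advantage on particular functions is sampling bias, paid for exactly on the functions where the bias points the wrong way.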

You will not read all of the paper, so I want to call your attention to the 1⅓-page conclusion, "A Plan for Experimental Validation." Some highlights:

The Law of Conservation of Information, however, is not merely an accounting tool. Under its aegis, intelligent design merges theories of evolution and information, thereby wedding the natural, engineering, and mathematical sciences. On this view (and there are other views of intelligent design), its main focus becomes how evolving systems incorporate, transform, and export information. Moreover, a principal theme of its research becomes teasing apart the respective roles of internally produced and externally applied information in the performance of evolving systems.


In such information-tracking experiments, the opponent of intelligent design hopes to discover a free lunch. The proponent of intelligent design, by contrast, attempts to track down hidden information costs and thereby confirm that the Law of Conservation of Information was preserved. There is no great mystery in any of this. Nor do such experiments to confirm intelligent design merely apply to the origin of life. Insofar as evolution (whether chemical or biological) is an exact experimental science, it will exhibit certain informational properties. Are those properties more akin to alchemy, where more information comes out than was put in? Or are they more akin to accounting, where no more information comes out than was put in? A systematic attempt to resolve such questions constitutes a plan for experimentally verifying intelligent design.

All of the published "information-tracking experiments" have been analyses of evolutionary computations. (My next post shows that the "information" is nothing but logarithmically transformed performance, and that the misinterpretation is rooted in Marks' misunderstanding of the "no free lunch" theorems.) The highlighted passage indicates how Dembski and Marks will argue, perhaps as expert witnesses in the next judicial test of public-school instruction in "intelligent design" creationism (Dembski was to serve as a witness in the last, but withdrew), that their engineering/computing publications support the claim that biological evolution requires intelligent guidance.

This is in no way a suggestion that you respond to anything but the technical (de)merits of their work. Dembski himself referred a New York Times science reporter to me as a fair-minded critic of ID creationism. I have also protested what I considered to be an infringement of Marks' academic freedom at Baylor University. My intent here is to impress on you how important it is to do a thorough review, and to insist that the authors make clear to you everything that they are doing. In particular, require that they provide a rigorous definition of "search," rather than give examples or suggest that everyone knows what the term means. If the definition does not make "search" out to be sampling, with performance measured on the sample (as in Wolpert and Macready [1997], "No Free Lunch Theorems for Optimization"), then you should ask why it does not.
