Specification: The Pattern That Signifies Intelligence
Specification: The Pattern That Signifies Intelligence. By William A. Dembski. August 15, 2005, version 1.22 [PDF].
In: The Mathematical Foundations of Intelligent Design.
From the Abstract:
"Specification denotes the type of pattern that highly improbable events must exhibit before one is entitled to attribute them to intelligence. This paper analyzes the concept of specification and shows how it applies to design detection... the fundamental question of Intelligent Design (ID) [is]: Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?"
From the Full Text:
1. Specification as a Form of Warrant
"...For [Alvin] Plantinga, warrant is what turns true belief into knowledge."
"It is not enough merely to believe that something is true; there also has to be something backing up that belief."
"...specification functions like Plantinga’s notion of warrant..."
"...specification is what must be added to highly improbable events before one is entitled to attribute them to design."
"... specification constitutes a probabilistic form of warrant, transforming the suspicion of design into a warranted belief in design... a systematic procedure for sorting through circumstantial evidence for design."
"... You are walking outside and find an unusual chunk of rock. You suspect it might be an arrowhead. Is it truly an arrowhead, and thus the result of design, or is it just a random chunk of rock... we have no direct knowledge of any putative designer and no direct knowledge of how such a designer, if actual, fashioned the item in question. All we see is the pattern exhibited by the item..."
2. Fisherian Significance Testing
"... In Fisher’s approach to testing the statistical significance of hypotheses, one is justified in rejecting (or eliminating) a chance hypothesis provided that a sample falls within a prespecified rejection region (also known as a critical region)... In Fisher’s approach, if the coin lands ten heads in a row, then one is justified rejecting the chance hypothesis."
"... In the applied statistics literature, it is common to see significance levels of .05 and .01. The problem to date has been that any such proposed significance levels have seemed arbitrary, lacking “a rational foundation.”"
"... significance levels cannot be set in isolation but must always be set in relation to the probabilistic resources relevant to an event’s occurrence."
"... essentially, the idea is to make a target so small that an archer is highly unlikely to hit it by chance..."
"Rejection regions eliminate chance hypotheses when events that supposedly happened in accord with those hypotheses fall within the rejection regions."
"... the probability of getting 100 heads in a row is roughly 1 in 10^30, which is drastically smaller than 1 in 9 million. Within Fisher’s theory of statistical significance testing, a prespecified event of such small probability is enough to disconfirm the chance hypothesis."
3. Specifications via Probability Densities
"... within Fisher’s approach to hypothesis testing the probability density function f is used to identify rejection regions that in turn are used to eliminate chance... [function f is nonnegative]... Since f cannot fall below zero, we can think of the landscape as never dipping below sea-level."
"In this last example we considered extremal sets... at which the probability density function concentrates minimal probability."
"... Although the combinatorics involved with the multinomial distribution are complicated (hence the common practice of approximating it with continuous probability distributions like the chi-square distribution), the reference class of possibilities Omega, though large, is finite... [and its cardinality, i.e., the number of its elements, is well-defined (its order of magnitude is around 10^33)]"
4. Specifications via Compressibility
"... The problem algorithmic information theory [the Chaitin-Kolmogorov-Solomonoff theory] seeks to resolve is this: Given probability theory and its usual way of calculating probabilities for coin tosses, how is it possible to distinguish these sequences in terms of their degree of randomness?"
"Probability theory alone is not enough."
"... Chaitin, Kolmogorov, and Solomonoff supplemented conventional probability theory with some ideas from recursion theory, a subfield of mathematical logic that provides the theoretical underpinnings for computer science... a string of 0s [zeroes] and 1s [ones] becomes increasingly random as the shortest computer program that generates the string increases in length."
"For the moment, we can think of a computer program as a short-hand description of a sequence of coin tosses."
"Thus, the sequence (N):
11111111111111111111111111111111111111111111111111
11111111111111111111111111111111111111111111111111
is not very random because it has a very short description, namely,
repeat ‘1’ a hundred times."
"... we are interested in the shortest descriptions since any sequence can always be described in terms of itself."
"The sequence (H):
11111111111111111111111111111111111111111111111111
00000000000000000000000000000000000000000000000000
is slightly more random than (N) since it requires a longer description, for example,
repeat ‘1’ fifty times, then repeat ‘0’ fifty times."
"... the sequence (A):
10101010101010101010101010101010101010101010101010
10101010101010101010101010101010101010101010101010
has a short description,
repeat ‘10’ fifty times."
"The sequence (R) [see below], on the other hand, has no short and neat description (at least none that has yet been discovered). For this reason, algorithmic information theory assigns it a higher degree of randomness than the sequences (N), (H), and (A)."
"Since one can always describe a sequence in terms of itself, (R) has the description"
copy ‘11000011010110001101111111010001100011011001110111
00011001000010111101110110011111010010100101011110’.
"Because (R) was constructed by flipping a coin, it is very likely that this is the shortest description of (R)."
"It is a combinatorial fact that the vast majority of sequences of 0s and 1s have as their shortest description just the sequence itself."
"... most sequences are random in the sense of being algorithmically incompressible."
"... the collection of nonrandom sequences has small probability among the totality of sequences so that observing a nonrandom sequence is reason to look for explanations other than chance."
"... Kolmogorov even invoked the language of statistical mechanics to describe this result, calling the random sequences high entropy sequences, and the nonrandom sequence low entropy sequences."
"... the collection of algorithmically compressible (and therefore nonrandom) sequences has small probability among the totality of sequences, so that observing such a sequence is reason to look for explanations other than chance."
5. Prespecifications vs. Specifications
"... specifications are patterns delineating events of small probability whose occurrence cannot reasonably be attributed to chance."
"... we did see some clear instances of patterns being identified after the occurrence of events and yet being convincingly used to preclude chance in the explanation of those events (cf. the rejection regions induced by probability density functions as well as classes of highly compressible bit strings...) ... for such after-the-event patterns, some additional restrictions needed to be placed on the patterns to ensure that they would convincingly eliminate chance..."
"... before-the-event patterns, which we called prespecifications, require no such restrictions."
"... prespecified events of small probability are very difficult to recreate by chance."
"It’s one thing for highly improbable chance events to happen once. But for them to happen twice is just too unlikely. The intuition here is the widely accepted folk-wisdom that says “lightning doesn’t strike twice in the same place.” "
"... this intuition is taken quite seriously in the sciences. It is, for instance, the reason origin-of-life researchers tend to see the origin of the genetic code as a one-time event. Although there are some variations, the genetic code is essentially universal. Thus, for the same genetic code to emerge twice by undirected material mechanisms would simply be too improbable."
"With the sequence (R, see above) treated as a prespecification, its chance occurrence is not in question but rather its chance reoccurrence..."
"... specifications... patterns that nail down design and therefore that inherently lie beyond the reach of chance."
"Are there patterns that, if exhibited in events, would rule out their original occurrence by chance?"
"... the answer is yes, consider the following sequence (again, treating “1” as heads and “0” as tails; note that the designation pseudo-R here is meant to suggest pseudo-randomness):
(pseudo-R)
01000110110000010100111001011101110000000100100011
01000101011001111000100110101011110011011110111100."
"... how will you determine whether (pseudo-R) happened by chance?"
"One approach is to employ statistical tests for randomness."
"... to distinguish the truly random from the pseudo-random sequences. In a hundred coin flips, one is quite likely to see six or seven ... repetitions [see Note 21]"
[Note 21: “The proof is straightforward: In 100 coin tosses, on average half will repeat the previous toss, implying about 50 two-repetitions. Of these 50 two-repetitions, on average half will repeat the previous toss, implying about 25 three-repetitions. Continuing in this vein, we find on average 12 four-repetitions, 6 five-repetitions, 3 six-repetitions, and 1 seven-repetition. See Ivars Peterson, The Jungles of Randomness: A Mathematical Safari (New York: Wiley, 1998), 5.]
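Note 21's halving argument can be checked by simulation. The sketch below is a minimal Monte Carlo estimate of the average number of k-repetitions (a toss that equals its k-1 predecessors) in 100 fair coin tosses; the trial count, seed, and function name are arbitrary choices of ours.

    # Sketch: Monte Carlo check of Note 21's counting argument.
    import random

    def repetition_counts(tosses, max_len=7):
        """counts[k] = number of positions ending a run of at least k equal tosses."""
        counts = {k: 0 for k in range(2, max_len + 1)}
        run = 1
        for i in range(1, len(tosses)):
            run = run + 1 if tosses[i] == tosses[i - 1] else 1
            for k in range(2, max_len + 1):
                if run >= k:
                    counts[k] += 1
        return counts

    random.seed(0)
    trials = 10_000
    totals = {k: 0 for k in range(2, 8)}
    for _ in range(trials):
        tosses = [random.randint(0, 1) for _ in range(100)]
        for k, c in repetition_counts(tosses).items():
            totals[k] += c
    for k in sorted(totals):
        print(k, "-repetitions:", round(totals[k] / trials, 1))
    # Averages come out near 50, 25, 12, 6, 3, and 1.5, matching the halving
    # pattern cited in Note 21.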
"On the other hand, people concocting pseudo-random sequences with their minds tend to alternate between heads and tails too frequently."
"Whereas with a truly random sequence of coin tosses there is a 50 percent chance that one toss will differ from the next, as a matter of human psychology people expect that one toss will differ from the next around 70 percent of the time."
"... after three or four repetitions, humans trying to mimic coin tossing with their minds tend to think its time for a change whereas coins being tossed at random suffer no such misconception."
"... (R) resulted from chance because it represents an actual sequence of coin tosses. What about (pseudo-R)?"
"... (pseudo-R) is anything but random. To see this, rewrite this sequence by inserting vertical strokes as follows:
(pseudo-R)
0|1|00|01|10|11|000|001|010|011|100|101|110|111|0000|0001|0010|
0011|0100|0101|0110|0111|1000|1001|1010|1011|1100|1101|1110|1111|00
"By dividing (pseudo-R) this way it becomes evident that this sequence was constructed simply by writing binary numbers in ascending lexicographic order, starting with the one-digit binary numbers (i.e., 0 and 1), proceeding to the two-digit binary numbers (i.e., 00, 01, 10, and 11), and continuing until 100 digits were recorded."
"... (pseudo-R), when continued indefinitely, is known as the Champernowne sequence and has the property that any N-digit combination of bits appears in this sequence with limiting frequency 2^-N."
“D. G. Champernowne identified this sequence back in 1933."
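The construction just described is easy to mechanize. The sketch below rebuilds (pseudo-R) by concatenating binary numbers of increasing width and checks the first 100 digits against the sequence quoted above; the function name is ours, not the paper's.

    # Sketch: reconstructing (pseudo-R) by listing binary numbers in ascending order.
    from itertools import count

    def champernowne_bits(n_bits):
        out = []
        for width in count(1):                     # 1-digit, 2-digit, 3-digit, ...
            for value in range(2 ** width):        # 0, 1, then 00, 01, 10, 11, ...
                out.append(format(value, "0{}b".format(width)))
                if sum(len(s) for s in out) >= n_bits:
                    return "".join(out)[:n_bits]

    pseudo_R = ("01000110110000010100111001011101110000000100100011"
                "01000101011001111000100110101011110011011110111100")
    print(champernowne_bits(100) == pseudo_R)      # True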
"The key to defining specifications and distinguishing them from prespecifications lies in understanding the difference between sequences such as (R) and (pseudo-R)."
"The coin tossing events signified by (R) and (pseudo-R) are each highly improbable [to be exactly reproduced]."
"... (R), for all we know [did] arise by chance whereas (pseudo-R) cannot plausibly be attributed to chance."
6. Specificity
"The crucial difference between (R) and (pseudo-R) is that (pseudo-R) exhibits a simple, easily described pattern whereas (R) does not."
"To describe (pseudo-R), it is enough to note that this sequence lists binary numbers in increasing order."
"By contrast, (R)cannot, so far as we can tell, be described any more simply than by repeating the sequence."
"Thus, what makes the pattern exhibited by (pseudo-R) a specification is that the pattern is easily described but the event it denotes is highly improbable and therefore very difficult to reproduce by chance."
"It’s this combination of pattern simplicity (i.e., easy description of pattern) and event-complexity (i.e., difficulty of reproducing the corresponding event by chance) that makes the pattern exhibited by (pseudo-R) — but not (R) — a specification [see Note 23]"
[Note 23: It follows that specification is intimately connected with discussions in the self-organizational literature about the “edge of chaos,” in which interesting self-organizational events happen not where things are completely chaotic (i.e., entirely chance-driven and thus not easily describable)... See Roger Lewin, Complexity: Life at the Edge of Chaos, 2nd ed. (Chicago: University of Chicago Press, 2000)]
"... “displaying an even face” describes a pattern that maps onto a (composite) event in which a die lands either two, four, or six."
"... If “bidirectional,” “rotary,” “motor-driven,” and “propeller” are basic concepts, then the molecular machine known as the bacterial flagellum can be characterized as a 4-level concept of the form “bidirectional rotary motor-driven propeller.”
"Now, there are approximately N = 10^20 concepts of level 4 or less, which therefore constitute the specificational resources relevant to characterizing the bacterial flagellum."
"... Hitting large targets by chance is not a problem. Hitting small targets by chance can be."
"... putting the logarithm to the base 2... has the effect of changing scale and directionality, turning probabilities into number of bits and thereby making the specificity a measure of information... This logarithmic transformation therefore ensures that the simpler the patterns and the smaller the probability of the targets they constrain, the larger specificity."
"To see that the specificity so defined corresponds to our intuitions about specificity in general, think of the game of poker and consider the following three descriptions of poker hands: ... the probability of one pair far exceeds the probability of a full house which, in turn, far exceeds the probability of a royal flush. Indeed, there are only 4 distinct royal-flush hands but 3744 distinct full-house hands and 1,098,240 distinct single-pair hands... when we take the negative logarithm to the base 2, the specificity associated with the full house pattern will be about 10 less than the specificity associated with the royal flush pattern. Likewise, the specificity of the single pair pattern will be about 10 less than that of the full house pattern."
"... consider the following description of a poker hand: “four aces and the king of diamonds.” ...this description is about as simple as it can be made. Since there is precisely one poker hand that conforms to this description, its probability will be one-fourth the probability of getting a royal flush."
"... specificity... includes not just absolute specificity but also the cost of describing the pattern in question. Once this cost is included, the specificity of “royal flush” exceeds than the specificity of “four aces and the king of diamonds.”
7. Specified Complexity
"... the following example from Dan Brown’s... The Da Vinci Code. The heroes, Robert Langdon and Sophie Neveu, find themselves in an ultra-secure, completely automated portion of a Swiss bank (“the Depository Bank of Zurich”). Sophie’s grandfather, before dying, had revealed the following ten digits separated by hyphens: 13-3-2-21-1-1-8-5
Ref. 30 "For the mathematics of the Fibonacci sequence, see G. H. Hardy and E. M. Wright, An Introduction to the Theory of Numbers, 5th ed. (Oxford: Clarendon Press, 1979), 148–153. For an application of this sequence to biology, see Ian Stewart, Life’s Other Secret: The New Mathematics of the Living World (New York: Wiley, 1998), 122–132."]
"Robert and Sophie punch in the Fibonacci sequence 1123581321 and retrieve the crucial information they are seeking."
"This sequence, if produced at random (i.e., with respect to the uniform probability distribution denoted by H), would have probability 10^-10, or 1 in 10 billion."
"... for typical ATM cards, there are usually sixteen digits, and so the probability is typically on the order of 10^-15 (not 10^-16 because the first digit is usually fixed; for instance, Visa cards all begin with the digit “4”)."
"This bank wants only customers with specific information about their accounts to be able to access those accounts. It does not want accounts to be accessed by chance."
"... If, for instance, account numbers were limited to three digits, there would be at most 1,000 different account numbers, and so, with millions of users, it would be routine that accounts would be accessed accidentally"
"Theoretical computer scientist Seth Lloyd has shown that 10^120 constitutes the maximal number of bit operations that the known, observable universe could have performed... This number sets an upper limit on the number of agents that can be embodied in the universe and the number of events that, in principle, they can observe."
Accordingly, for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M·N will be bounded above by 10^120. We thus define the specified complexity of T given H (minus the tilde and context sensitivity) as
χ = –log2[10^120 · φ_S(T) · P(T|H)],
where φ_S(T) is the number of patterns whose descriptive complexity for S is no greater than that of T (the specificational resources) and P(T|H) is the probability of the event described by T under the chance hypothesis H. [From Note 46, “Lloyd’s approach [10^120] is more elegant and employs deeper insights into physics. In consequence, his approach yields a more precise estimate for the universal probability bound.”]
"... it is enough to note two things:
(1) there is never any need to consider replicational resources M·N that exceed 10^120 (say, by invoking inflationary cosmologies or quantum many-worlds) because to do so leads to a wholesale breakdown in statistical reasoning, and that’s something no one in his saner moments is prepared to do (for the details about the fallacy of inflating one’s replicational resources beyond the limits of the known, observable universe, see my article (in PDF) “The Chance of the Gaps”[Dembski, W.A. “The Chance of the Gaps,” in Neil Manson, ed., God and Design: The Teleological Argument and Modern Science (London: Routledge, 2002), 251–274.])
(2) ... the elimination of chance only requires a single semiotic agent who has discovered the pattern in an event that unmasks its non-chance nature. Recall the Champernowne sequence discussed in sections 5 and 6 (i.e., (pseudo-R)). It doesn’t matter if you are the only semiotic agent in the entire universe who has discovered its binary-numerical structure.
"That discovery is itself an objective fact about the world... that sequence would not rightly be attributed to chance precisely because you were the one person in the universe to appreciate its structure."
"... Since specifications are those patterns that are supposed to underwrite a design inference, they need, minimally, to entitle us to eliminate chance. Since to do so, it must be the case that
“… we therefore define specifications as any patterns T that satisfy this inequality... specifications are those patterns whose specified complexity is strictly greater than 1."
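A minimal sketch of the specified-complexity calculation as reconstructed above, computed in log space so that astronomically small probabilities do not underflow. The inputs in the example calls reuse figures quoted in this document (a chance probability of 10^-10 for the ten-digit Da Vinci code and specificational resources of roughly 10^20 for the flagellum); the value of 10^5 assumed for the Da Vinci pattern's specificational resources is purely illustrative.

    # Sketch: chi = -log2(10^120 * phi * P), evaluated from log10 inputs.
    from math import log2

    def specified_complexity(log10_phi, log10_p):
        """chi for specificational resources phi = 10**log10_phi and
        chance probability P = 10**log10_p (log inputs avoid underflow)."""
        return -(120 + log10_phi + log10_p) * log2(10)

    # Da Vinci code: P = 10^-10 (from the text); phi = 10^5 is an assumed,
    # purely illustrative figure for the pattern's specificational resources.
    print(specified_complexity(5, -10))    # hugely negative: no universal-scale specification

    # Bacterial flagellum: phi ~ 10^20 (from the text). For chi > 1 the chance
    # probability would have to fall below roughly 5e-141.
    print(specified_complexity(20, -141))  # above 1, so it would qualify as a specification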
"Note that this definition automatically implies a parallel definition for context-dependent specifications...”
“Such context-dependent specifications are widely employed in adjudicating between chance and design (cf. the Da Vinci Code example). Yet, to be secure in eliminating chance and inferring design on the scale of the universe, we need the context-independent form of specification.”
“As an example of specification and specified complexity in their context-independent form, let us return to the bacterial flagellum… “bidirectional rotary motor-driven propeller.” This description corresponds to a pattern T... given a natural language (English) lexicon with 100,000 (= 10^5) basic concepts... we estimated the complexity of this pattern at approximately… 10^20"
“These preliminary indicators point to T’s specified complexity being greater than 1 and to T in fact constituting a specification.”
8. Design Detection
“Having defined specification, I want next to show how this concept works in eliminating chance and inferring design."
"Inferring design by eliminating chance is an old problem."
"Almost 300-years ago, the mathematician Abraham de Moivre addressed it as follows:
"Resolving this competition between chance and design is the whole point of specification.”
“... specified complexity is an adequate tool for eliminating individual chance hypotheses.”
“Nor is it reason to be skeptical of a design inference based on specified complexity.”
“... in the Miller-Urey experiment, various compounds were placed in an apparatus, zapped with sparks to simulate lightning, and then the product was collected in a trap. Lo and behold, biologically significant chemical compounds were discovered, notably certain amino acids."
"In the 1950s, when this experiment was performed, it was touted as showing that a purely chemical solution to the origin of life was just around the corner. Since then, this enthusiasm has waned because such experiments merely yield certain rudimentary building blocks for life. No experiments since then have shown how these building blocks could, by purely chemical means (and thus apart from design), be built up into complex biomolecular systems needed for life (like proteins and multiprotein assemblages, to say nothing of fully functioning cells)...”
“... if large probabilities vindicate chance and defeat design, why shouldn’t small probabilities do the opposite — vindicate design and defeat chance? Indeed, in many special sciences, everything from forensics to archeology to SETI (the Search for Extraterrestrial Intelligence), small probabilities do just that."
"Objections only get raised against inferring design on the basis of such small probability, chance elimination arguments when the designers implicated by them are unacceptable to a materialistic worldview, as happens at the origin of life, whose designer could not be an intelligence that evolved through purely materialistic processes."
"Parity of reasoning demands that if large probabilities vindicate chance and defeat design, then small probabilities should vindicate design and defeat chance."
"The job of specified complexity is to marshal these small probabilities in a way that convincingly defeats chance and vindicates design."
“At this point, critics of specified complexity raise two objections. First, they contend that because we can never know all the chance hypotheses responsible for a given outcome, to infer design because specified complexity eliminates a limited set of chance hypotheses constitutes an argument from ignorance. But this criticism is misconceived. The argument from ignorance, also known as the appeal to ignorance or by the Latin argumentum ad ignorantiam, is
“... the fallacy of arguing that something must be true because nobody can prove it false or, alternatively, that something must be false because nobody can prove it true. Such arguments involve the illogical notion that one can view the lack of evidence about a proposition as being positive evidence for it or against it. But lack of evidence is lack of evidence, and supports no conclusion. An example of an appeal to ignorance: Styrofoam cups must be safe; after all, no studies have implicated them in cancer. The argument is fallacious because it is possible that no studies have been done on those cups or that what studies have been done have not focused on cancer (as opposed to other diseases).”
“In eliminating chance and inferring design, specified complexity is not party to an argument from ignorance. Rather, it is underwriting an eliminative induction.”
“Eliminative inductions argue for the truth of a proposition by actively refuting its competitors (and not, as in arguments from ignorance, by noting that the proposition has yet to be refuted). Provided that the proposition along with its competitors form a mutually exclusive and exhaustive class, eliminating all the competitors entails that the proposition is true. (Recall Sherlock Holmes’s famous dictum:
“When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”)
“This is the ideal case, in which eliminative inductions in fact become deductions."
"But eliminative inductions can be convincing without knocking down every conceivable alternative, a point John Earman has argued effectively. Earman has shown that eliminative inductions are not just widely employed in the sciences but also indispensable to science..."
"... the other objection, namely, that we must know something about a designer’s nature, purposes, propensities, causal powers, and methods of implementing design before we can legitimately determine whether an object is designed. I refer to the requirement that we must have this independent knowledge of designers as the independent knowledge requirement. This requirement, so we are told, can be met for materially embodied intelligences but can never be met for intelligences that cannot be reduced to matter, energy, and their interactions."
"By contrast, to employ specified complexity to infer design is to take the view that objects, even if nothing is known about how they arose, can exhibit features that reliably signal the action of an intelligent cause."
“To see that the independent knowledge requirement, as a principle for deciding whether something is designed, is fundamentally misguided, consider the following admission by Elliott Sober, who otherwise happens to embrace this requirement:
“To infer watchmaker from watch, you needn’t know exactly what the watchmaker had in mind; indeed, you don’t even have to know that the watch is a device for measuring time. Archaeologists sometimes unearth tools of unknown function, but still reasonably draw the inference that these things are, in fact, tools.”
"Sober’s remark suggests that design inferences may look strictly to features of designed objects and thus presuppose no knowledge about the characteristics of the designer.”
"...what if the designer actually responsible for the object brought it about by means unfathomable to us (e.g., by some undreamt of technologies)? This is the problem of multiple realizability, and it undercuts the independent knowledge requirement because it points up that what leads us to infer design is not knowledge of designers and their capabilities but knowledge of the patterns exhibited by designed objects (a point that specified complexity captures precisely).”
"This last point underscores another problem with the independent knowledge requirement, namely, what I call the problem of inductive regress."
"Suppose... one wants to argue that independent knowledge of designers is the key to inferring design… Consider now some archeologists in the field who stumble across an arrowhead. How do they know that it is indeed an arrowhead and thus the product of design? What sort of archeological background knowledge had to go into their design hypothesis? Certainly, the archeologists would need past experience with arrowheads. But how did they recognize that the arrowheads in their past experience were designed? Did they see humans actually manufacture those arrowheads? …"
“Our ability to recognize design must therefore arise independently of induction and therefore independently of any independent knowledge requirement about the capacities of designers. In fact, it arises directly from the patterns in the world that signal intelligence, to wit, from specifications.”
“Another problem with the independent knowledge requirement is that it hinders us from inferring design that outstrips our intellectual and technological sophistication. I call this the problem of dummied down design: the independent knowledge requirement limits our ability to detect design to the limits we impose on designers. But such limits are artificial.”
“Suppose, for instance, that the molecular biology of the cell is in fact intelligently designed. If so, it represents nanotechnology of a sophistication far beyond anything that human engineers are presently capable of or may ever be capable of.”
“By the independent knowledge requirement, we have no direct experience of designers capable of such design work. Thus, even if system after molecular biological system exhibited high specified complexity, the independent evidence requirement would prevent us from recognizing their design and keep us wedded to wholly inadequate materialistic explanations of these systems.”
“… Should we now think that life at key moments in its history was designed?”
“... it is a necessary condition, if a design inference is to hold, that all relevant chance hypotheses be eliminated.”
“... unknown chance hypotheses have no epistemic significance in defeating design.”
"... specified complexity has rendered all relevant chance alternatives inviable, chance as such is eliminated and design can no longer be denied.”
Addendum 1: Note to Readers of TDI & NFL
“By separating off prespecifications from specifications, the account of specifications becomes much more straightforward. With specifications, the key to overturning chance is to keep the descriptive complexity of patterns low [see Note 24]”
“…descriptive complexity immediately confers conditional independence… [see Note 24]”
[Note 24: There is a well-established theory of descriptive complexity, which takes as its point of departure Chaitin-Kolmogorov-Solomonoff theory of bit-string compressibility, namely, the theory of Minimal Description Length (MDL). The fundamental idea behind MDL is that order in data “can be used to compress the data, i.e., to describe it using fewer symbols than needed to describe the data literally.” See http:///(last accessed June 17, 2005)]
“…prespecifications need not be descriptively simple."
"Think of a coin that’s flipped 1,000 times. The pattern it exhibits will (in all likelihood) be unique in the history of coin tossing and will not be identifiable apart from the actual event of flipping that coin 1,000 times. Such a pattern, if a prespecification, will be tractable but it will not be descriptively simple.”
“On the other hand, a Champernowne sequence of length 1,000 can be readily constructed on the basis of a simple number-theoretic scheme. The underlying pattern here, in virtue of its descriptive simplicity, is therefore tractable with respect to information that is conditionally independent of any actual coin tossing event.”
“… in the treatment of specification given here, we have a universal probability bound of 10^-120 …probability bounds are better (i.e., more useful in scientific applications) the bigger they are — provided, of course, that they truly are universal [see next equation]”
P(T|H) < 10^-120 / φ_S(T)
“… instead of a static universal probability bound of 10^-150, we now have a dynamic one [the previous equation]… that varies with the specificational resources… and thus with the descriptive complexity of T [see Note 24, above]”
“For many design inferences that come up in practice, it seems safe to assume that [the denominator of the previous equation] will not exceed 10^30 (for instance, in section 7 a very generous estimate for the descriptive complexity of the bacterial flagellum came out to 10^20) [see Note 24, above]”
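Taking the dynamic bound as reconstructed above (10^-120 divided by the specificational resources), the quoted figures give the following sketch in log10 terms; both exponents come from the text.

    # Sketch: the dynamic probability bound 10^-120 / phi, in log10 terms.
    for log10_phi in (20, 30):                 # flagellum estimate; generous ceiling (both from the text)
        log10_bound = -120 - log10_phi
        print("phi = 10^%d -> bound = 10^%d" % (log10_phi, log10_bound))
    # phi = 10^20 gives 10^-140; phi = 10^30 gives 10^-150, recovering the older static bound.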
“… In my present treatment, specified complexity… is now not merely a property but an actual number calculated by a precise formula [the formula for χ given in section 7 above]”
“This number can be negative, zero, or positive. When the number is greater than 1, it indicates that we are dealing with a specification [the inequality χ > 1 given in section 7 above].”
Addendum 2: Bayesian Methods
“The approach to design detection that I propose eliminates chance hypotheses when the probability of matching a suitable pattern (i.e., specification) on the basis of these chance hypotheses is small and yet an event matching the pattern still happens (i.e., the arrow lands in a small target). This eliminative approach to statistical rationality, as epitomized in Fisher’s approach to significance testing, is the one most widely employed in scientific applications.”
“Nevertheless, there is an alternative approach to statistical rationality that is at odds with this eliminative approach. This is the Bayesian approach, which is essentially comparative rather than eliminative, comparing the probability of an event conditional on a chance hypothesis to its probability conditional on a design hypothesis, and preferring the hypothesis that confers the greater probability. I’ve argued at length elsewhere that Bayesian methods are inadequate for drawing design inferences.”
“Among the reasons I’ve given is the need to assess prior probabilities in employing these methods, the concomitant problem of rationally grounding these priors, and the lack of empirical grounding in estimating probabilities conditional on design hypotheses.”
“… the most damning problem facing the Bayesian approach to design detection, namely, that it tacitly presupposes the very account of specification that it was meant to preclude.”
“Bayesian theorists see specification as an incongruous and dispensable feature of design inferences. For instance, Timothy and Lydia McGrew, at a symposium on design reasoning, dismissed specification as having no “epistemic relevance.””
“… the Bayesian approach to statistical rationality is parasitic on the Fisherian approach and can properly adjudicate only among competing hypotheses that the Fisherian approach has thus far failed to eliminate.”
“In particular, the Bayesian approach offers no account of how it arrives at the composite events (qua targets qua patterns qua specifications) on which it performs a Bayesian analysis. The selection of such events is highly intentional and, in the case of Bayesian design inferences, presupposes an account of specification."
"Specification’s role in detecting design, far from being refuted by the Bayesian approach, is therefore implicit throughout Bayesian design inferences.”
In: the Mathematical Foundations of Intelligent Design.
From the Abastract:
"Specification denotes the type of pattern that highly improbable events must exhibit before one is entitled to attribute them to intelligence. This paper analyzes the concept of specification and shows how it applies to design detection... the fundamental question of Intelligent Design (ID) [is]: Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?"
From the Full Text:
1. Specification as a Form of Warrant
"...For [Alvin] Plantinga, warrant is what turns true belief into knowledge."
"It is not enough merely to believe that something is true; there also has to be something backing up that belief."
"...specification functions like Plantinga’s notion of warrant..."
"...specification is what must be added to highly improbable events before one is entitled to attribute them to design."
"... specification constitutes a probabilistic form of warrant, transforming the suspicion of design into a warranted belief in design... a systematic procedure for sorting through circumstantial evidence for design."
"... You are walking outside and find an unusual chunk of rock. You suspect it might be an arrowhead. Is it truly an arrowhead, and thus the result of design, or is it just a random chunk of rock... we have no direct knowledge of any putative designer and no direct knowledge of how such a designer, if actual, fashioned the item in question. All we see is the pattern exhibited by the item..."
2. Fisherian Significance Testing
"... In Fisher’s approach to testing the statistical significance of hypotheses, one is justified in rejecting (or eliminating) a chance hypothesis provided that a sample falls within a prespecified rejection region (also known as a critical region)... In Fisher’s approach, if the coin lands ten heads in a row, then one is justified rejecting the chance hypothesis."
"... In the applied statistics literature, it is common to see significance levels of .05 and .01. The problem to date has been that any such proposed significance levels have seemed arbitrary, lacking “a rational foundation.”"
"... significance levels cannot be set in isolation but must always be set in relation to the probabilistic resources relevant to an event’s occurrence."
"... essentially, the idea is to make a target so small that an archer is highly unlikely to hit it by chance..."
"Rejection regions eliminate chance hypotheses when events that supposedly happened in accord with those hypotheses fall within the rejection regions."
"... the probability of getting 100 heads in a row is roughly 1 in 10^30, which is drastically smaller than 1 in 9 million. Within Fisher’s theory of statistical significance testing, a prespecified event of such small probability is enough to disconfirm the chance hypothesis."
3. Specifications via Probability Densities
"... within Fisher’s approach to hypothesis testing the probability density function f is used to identify rejection regions that in turn are used to eliminate chance... [function f is nonnegative]... Since f cannot fall below zero, we can think of the landscape as never dipping below sea-level."
"In this last example we considered extremal sets... at which the probability density function concentrates minimal probability."
"... Although the combinatorics involved with the multinomial distribution are complicated (hence the common practice of approximating it with continuous probability distributions like the chi-square distribution), the reference class of possibilities Omega, though large, is finite... [and its cardinality, i.e., the number of its elements, is well-defined (its order of magnitude is around 10^33)]"
4. Specifications via Compressibility
"... The problem algorithmic information theory [the Chaitin-Kolmogorov-Solomonoff theory] seeks to resolve is this: Given probability theory and its usual way of calculating probabilities for coin tosses, how is it possible to distinguish these sequences in terms of their degree of randomness?"
"Probability theory alone is not enough."
"... Chaitin, Kolmogorov, and Solomonoff supplemented conventional probability theory with some ideas from recursion theory, a subfield of mathematical logic that provides the theoretical underpinnings for computer science... a string of 0s [zeroes] and 1s [ones] becomes increasingly random as the shortest computer program that generates the string increases in length."
"For the moment, we can think of a computer program as a short-hand description of a sequence of coin tosses."
Thus, the sequence (N):
11111111111111111111111111111111111111111111111111
11111111111111111111111111111111111111111111111111
is not very random because it has a very short description, namely,
repeat ‘1’ a hundred times."
"... we are interested in the shortest descriptions since any sequence can always be described in terms of itself."
"The sequence (H):
11111111111111111111111111111111111111111111111111
00000000000000000000000000000000000000000000000000
is slightly more random than (N) since it requires a longer description, for example,
repeat ‘1’ fifty times, then repeat ‘0’ fifty times."
"... the sequence (A):
10101010101010101010101010101010101010101010101010
10101010101010101010101010101010101010101010101010
has a short description,
repeat ‘10’ fifty times."
"The sequence (R) [see below], on the other hand, has no short and neat description (at least none that has yet been discovered). For this reason, algorithmic information theory assigns it a higher degree of randomness than the sequences (N), (H), and (A)."
"Since one can always describe a sequence in terms of itself, (R) has the description"
copy ‘11000011010110001101111111010001100011011001110111
00011001000010111101110110011111010010100101011110’.
"Because (R) was constructed by flipping a coin, it is very likely that this is the shortest description of (R)."
"It is a combinatorial fact that the vast majority of sequences of 0s and 1s have as their shortest description just the sequence itself."
"... most sequences are random in the sense of being algorithmically incompressible."
"... the collection of nonrandom sequences has small probability among the totality of sequences so that observing a nonrandom sequence is reason to look for explanations other than chance."
"... Kolmogorov even invoked the language of statistical mechanics to describe this result, calling the random sequences high entropy sequences, and the nonrandom sequence low entropy sequences."
"... the collection of algorithmically compressible (and therefore nonrandom) sequences has small probability among the totality of sequences, so that observing such a sequence is reason to look for explanations other than chance."
5. Prespecifications vs. Specifications
"... specifications are patterns delineating events of small probability whose occurrence cannot reasonably be attributed to chance."
"... we did see some clear instances of patterns being identified after the occurrence of events and yet being convincingly used to preclude chance in the explanation of those events (cf. the rejection regions induced by probability density functions as well as classes of highly compressible bit strings...) ... for such after-the-event patterns, some additional restrictions needed to be placed on the patterns to ensure that they would convincingly eliminate chance..."
"... before-the-event patterns, which we called prespecifications, require no such restrictions."
"... prespecified events of small probability are very difficult to recreate by chance."
"It’s one thing for highly improbable chance events to happen once. But for them to happen twice is just too unlikely. The intuition here is the widely accepted folk-wisdom that says “lightning doesn’t strike twice in the same place.” "
"... this intuition is taken quite seriously in the sciences. It is, for instance, the reason origin-of-life researchers tend to see the origin of the genetic code as a one-time event. Although there are some variations, the genetic code is essentially universal. Thus, for the same genetic code to emerge twice by undirected material mechanisms would simply be too improbable."
"With the sequence (R, see above) treated as a prespecification, its chance occurrence is not in question but rather its chance reoccurrence..."
"... specifications... patterns that nail down design and therefore that inherently lie beyond the reach of chance."
"Are there patterns that, if exhibited in events, would rule out their original occurrence by chance?"
"... the answer is yes, consider the following sequence (again, treating “1” as heads and “0” as tails; note that the designation pseudo-R here is meant to suggest pseudo-randomness):
(pseudo-R)
01000110110000010100111001011101110000000100100011
01000101011001111000100110101011110011011110111100."
"... how will you determine whether (pseudo-R) happened by chance?"
"One approach is to employ statistical tests for randomness."
"... to distinguish the truly random from the pseudo-random sequences. In a hundred coin flips, one is quite likely to see six or seven ... repetitions [see Note 21]"
[Note 21: “The proof is straightforward: In 100 coin tosses, on average half will repeat the previous toss, implying about 50 two-repetitions. Of these 50 two-repetitions, on average half will repeat the previous toss, implying about 25 three-repetitions. Continuing in this vein, we find on average 12 four-repetitions, 6 five-repetitions, 3 six-repetitions, and 1 seven-repetition. See Ivars Peterson, The Jungles of Randomness: A Mathematical Safari (New York: Wiley, 1998), 5.]
"On the other hand, people concocting pseudo-random sequences with their minds tend to alternate between heads and tails too frequently."
"Whereas with a truly random sequence of coin tosses there is a 50 percent chance that one toss will differ from the next, as a matter of human psychology people expect that one toss will differ from the next around 70 percent of the time."
"... after three or four repetitions, humans trying to mimic coin tossing with their minds tend to think its time for a change whereas coins being tossed at random suffer no such misconception."
"... (R) resulted from chance because it represents an actual sequence of coin tosses. What about (pseudo-R)?"
"... (pseudo-R) is anything but random. To see this, rewrite this sequence by inserting vertical strokes as follows:
(pseudo-R)
01000110110000010100111001011101110000000100100011
01000101011001111000100110101011110011011110111100
""By dividing (pseudo-R) this way it becomes evident that this sequence was constructed simply by writing binary numbers in ascending lexicographic order, starting with the one-digit binary numbers (i.e., 0 and 1), proceeding to the two-digit binary numbers (i.e., 00, 01, 10, and 11), and continuing until 100 digits were recorded."
"... (pseudo-R), when continued indefinitely, is known as the Champernowne sequence and has the property that any N-digit combination of bits appears in this sequence with limiting frequency 2^-N."
“D. G. Champernowne identified this sequence back in 1933."
"The key to defining specifications and distinguishing them from prespecifications lies in understanding the difference between sequences such as (R) and (pseudo-R)."
"The coin tossing events signified by (R) and (pseudo-R) are each highly improbable [to be exactly reproduced]."
"... (R), for all we know [did] arise by chance whereas (pseudo-R) cannot plausibly be attributed to chance."
6. Specificity
"The crucial difference between (R) and (pseudo-R) is that (pseudo-R) exhibits a simple, easily described pattern whereas (R) does not."
"To describe (pseudo-R), it is enough to note that this sequence lists binary numbers in increasing order."
"By contrast, (R)cannot, so far as we can tell, be described any more simply than by repeating the sequence."
"Thus, what makes the pattern exhibited by (pseudo-R) a specification is that the pattern is easily described but the event it denotes is highly improbable and therefore very difficult to reproduce by chance."
"It’s this combination of pattern simplicity (i.e., easy description of pattern) and event-complexity (i.e., difficulty of reproducing the corresponding event by chance) that makes the pattern exhibited by (pseudo-R) — but not (R) — a specification [see Note 23]"
[Note 23: It follows that specification is intimately connected with discussions in the self-organizational literature about the “edge of chaos,” in which interesting self-organizational events happen not where things are completely chaotic (i.e., entirely chance-driven and thus not easily describable)... See Roger Lewin, Complexity: Life at the Edge of Chaos, 2nd ed. (Chicago: University of Chicago Press, 2000)]
"... “displaying an even face” describes a pattern that maps onto a (composite) event in which a die lands either two, four, or six."
"... If “bidirectional,” “rotary,” “motor-driven,” and “propeller” are basic concepts, then the molecular machine known as the bacterial flagellum can be characterized as a 4-level concept of the form “bidirectional rotary motor-driven propeller.”
"Now, there are approximately N = 10^20 concepts of level 4 or less, which therefore constitute the specificational resources relevant to characterizing the bacterial flagellum."
"... Hitting large targets by chance is not a problem. Hitting small targets by chance can be."
"... putting the logarithm to the base 2... has the effect of changing scale and directionality, turning probabilities into number of bits and thereby making the specificity a measure of information... This logarithmic transformation therefore ensures that the simpler the patterns and the smaller the probability of the targets they constrain, the larger specificity."
"To see that the specificity so defined corresponds to our intuitions about specificity in general, think of the game of poker and consider the following three descriptions of poker hands: ... the probability of one pair far exceeds the probability of a full house which, in turn, far exceeds the probability of a royal flush. Indeed, there are only 4 distinct royal-flush hands but 3744 distinct full-house hands and 1,098,240 distinct single-pair hands... when we take the negative logarithm to the base 2, the specificity associated with the full house pattern will be about 10 less than the specificity associated with the royal flush pattern. Likewise, the specificity of the single pair pattern will be about 10 less than that of the full house pattern."
"... consider the following description of a poker hand: “four aces and the king of diamonds.” ...this description is about as simple as it can be made. Since there is precisely one poker hand that conforms to this description, its probability will be one-fourth the probability of getting a royal flush."
"... specificity... includes not just absolute specificity but also the cost of describing the pattern in question. Once this cost is included, the specificity of “royal flush” exceeds than the specificity of “four aces and the king of diamonds.”
7. Specified Complexity
"... the following example from Dan Brown’s... The Da Vinci Code. The heroes, Robert Langdon and Sophie Neveu, find themselves in an ultra-secure, completely automated portion of a Swiss bank (“the Depository Bank of Zurich”). Sophie’s grandfather, before dying, had revealed the following ten digits separated by hyphens: 13-3-2-21-1-1-8-5
... Sophie said, frowning. “Looks like we only get one try.” Standard ATM machines allowed users three attempts to type in a PIN before confiscating their bank card. This was obviously no ordinary cash machine..."[The digits that Sophie’ grandfather made sure she received posthumously, namely, 13-3-2-21-1-1-8-5, can be rearranged as 1-1-2-3-5-8-13-21, which are the first eight numbers in the famous Fibonacci sequence. In this sequence, numbers are formed by adding the two immediately preceding numbers. The Fibonacci sequence has some interesting mathematical properties and even has applications to biology.
She finished typing the entry and gave a sly smile. “Something that appeared random but was not.”
Ref. 30 "For the mathematics of the Fibonacci sequence, see G. H. Hardy and E. M. Wright, An Introduction to the Theory of Numbers, 5th ed. (Oxford: Clarendon Press, 1979), 148–153. For an application of this sequence to biology, see Ian Stewart, Life’s Other Secret: The New Mathematics of the Living World (New York: Wiley, 1998), 122–132."]
"Robert and Sophie punch in the Fibonacci sequence 1123581321 and retrieve the crucial information they are seeking."
"This sequence, if produced at random (i.e., with respect to the uniform probability distribution denoted by H), would have probability 10^-10, or 1 in 10 billion."
"... for typical ATM cards, there are usually sixteen digits, and so the probability is typically on the order of 10^-15 (not 10^-16 because the first digit is usually fixed; for instance, Visa cards all begin with the digit “4”)."
"This bank wants only customers with specific information about their accounts to be able to access those accounts. It does not want accounts to be accessed by chance."
"... If, for instance, account numbers were limited to three digits, there would be at most 1,000 different account numbers, and so, with millions of users, it would be routine that accounts would be accessed accidentally"
"Theoretical computer scientist Seth Lloyd has shown that 10^120 constitutes the maximal number of bit operations that the known, observable universe could have performed... This number sets an upper limit on the number of agents that can be embodied in the universe and the number of events that, in principle, they can observe."
Accordingly, for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M·N will be bounded above by 10^120. We thus define the specified complexity of T given H (minus the tilde and context sensitivity) as [From Note 46, “Lloyd’s approach [10^120] is more elegant and employs deeper insights into physics. In consequence, his approach yields a more precise estimate for the universal probability bound.”]
"... it is enough to note two things:
(1) there is never any need to consider replicational resources M·N that exceed 10^120 (say, by invoking inflationary cosmologies or quantum many-worlds) because to do so leads to a wholesale breakdown in statistical reasoning, and that’s something no one in his saner moments is prepared to do (for the details about the fallacy of inflating one’s replicational resources beyond the limits of the known, observable universe, see my article (in PDF) “The Chance of the Gaps”[Dembski, W.A. “The Chance of the Gaps,” in Neil Manson, ed., God and Design: The Teleological Argument and Modern Science (London: Routledge, 2002), 251–274.])
(2) ... the elimination of chance only requires a single semiotic agent who has discovered the pattern in an event that unmasks its non-chance nature. Recall the Champernowne sequence discussed in sections 5 and 6 (i.e., (pseudo-R)). It doesn’t matter if you are the only semiotic agent in the entire universe who has discovered its binary-numerical structure.
"That discovery is itself an objective fact about the world... that sequence would not rightly be attributed to chance precisely because you were the one person in the universe to appreciate its structure."
"... Since specifications are those patterns that are supposed to underwrite a design inference, they need, minimally, to entitle us to eliminate chance. Since to do so, it must be the case that
“… we therefore define specifications as any patterns T that satisfy this inequality... specifications are those patterns whose specified complexity is strictly greater than 1."
"Note that this definition automatically implies a parallel definition for context-dependent specifications...”
“Such context-dependent specifications are widely employed in adjudicating between chance and design (cf. the Da Vinci Code example). Yet, to be secure in eliminating chance and inferring design on the scale of the universe, we need the context-independent form of specification.”
“As an example of specification and specified complexity in their context-independent form, let us return to the bacterial flagellum… “bidirectional rotary motor-driven propeller.” This description corresponds to a pattern T... given a natural language (English) lexicon with 100,000 (= 10^5) basic concepts... we estimated the complexity of this pattern at approximately… 10^20"
“These preliminary indicators point to T’s specified complexity being greater than 1 and to T in fact constituting a specification.”
8. Design Detection
“Having defined specification, I want next to show how this concept works in eliminating chance and inferring design."
"Inferring design by eliminating chance is an old problem."
"Almost 300-years ago, the mathematician Abraham de Moivre addressed it as follows:
“… We may imagine Chance and Design to be, as it were, in Competition with each other, for the production of some sorts of Events, and may calculate what Probability there is, that those Events should be rather owing to one than to the other. To give a familiar Instance of this, Let us suppose that two Packs of Piquet-Cards being sent for, it should be perceived that there is, from Top to Bottom, the same Disposition of the Cards in both packs; let us likewise suppose that, some doubt arising about this Disposition of the Cards, it should be questioned whether it ought to be attributed to Chance, or to the Maker’s Design: In this Case the Doctrine of Combinations decides the Question; since it may be proved by its Rules, that there are the odds of above 263130830000 Millions of Millions of Millions of Millions to One, that the Cards were designedly set in the Order in which they were found” [Abraham de Moivre, The Doctrine of Chances (1718; reprinted New York: Chelsea, 1967), v.]“... de Moivre requires a highly improbable prespecified event in which one ordered deck of cards rules out the chance reoccurrence of another with the same order. In particular, he places chance and design in competition so that the defeat of chance on the basis of improbability leads to the victory of design."
"Resolving this competition between chance and design is the whole point of specification.”
“... specified complexity is an adequate tool for eliminating individual chance hypotheses.”
“Nor is it reason to be skeptical of a design inference based on specified complexity.”
“... in the Miller-Urey experiment, various compounds were placed in an apparatus, zapped with sparks to simulate lightning, and then the product was collected in a trap. Lo and behold, biologically significant chemical compounds were discovered, notably certain amino acids."
"In the 1950s, when this experiment was performed, it was touted as showing that a purely chemical solution to the origin of life was just around the corner. Since then, this enthusiasm has waned because such experiments merely yield certain rudimentary building blocks for life. No experiments since then have shown how these building blocks could, by purely chemical means (and thus apart from design), be built up into complex biomolecular systems needed for life (like proteins and multiprotein assemblages, to say nothing of fully functioning cells)...”
“... if large probabilities vindicate chance and defeat design, why shouldn’t small probabilities do the opposite — vindicate design and defeat chance? Indeed, in many special sciences, everything from forensics to archeology to SETI (the Search for Extraterrestrial Intelligence), small probabilities do just that."
"Objections only get raised against inferring design on the basis of such small probability, chance elimination arguments when the designers implicated by them are unacceptable to a materialistic worldview, as happens at the origin of life, whose designer could not be an intelligence that evolved through purely materialistic processes."
"Parity of reasoning demands that if large probabilities vindicate chance and defeat design, then small probabilities should vindicate design and defeat chance."
"The job of specified complexity is to marshal these small probabilities in a way that convincingly defeats chance and vindicates design."
“At this point, critics of specified complexity raise two objections. First, they contend that because we can never know all the chance hypotheses responsible for a given outcome, to infer design because specified complexity eliminates a limited set of chance hypotheses constitutes an argument from ignorance. But this criticism is misconceived. The argument from ignorance, also known as the appeal to ignorance or by the Latin argumentum ad ignorantiam, is
“... the fallacy of arguing that something must be true because nobody can prove it false or, alternatively, that something must be false because nobody can prove it true. Such arguments involve the illogical notion that one can view the lack of evidence about a proposition as being positive evidence for it or against it. But lack of evidence is lack of evidence, and supports no conclusion. An example of an appeal to ignorance: Styrofoam cups must be safe; after all, no studies have implicated them in cancer. The argument is fallacious because it is possible that no studies have been done on those cups or that what studies have been done have not focused on cancer (as opposed to other diseases).”

“In eliminating chance and inferring design, specified complexity is not party to an argument from ignorance. Rather, it is underwriting an eliminative induction.”
“Eliminative inductions argue for the truth of a proposition by actively refuting its competitors (and not, as in arguments from ignorance, by noting that the proposition has yet to be refuted). Provided that the proposition along with its competitors form a mutually exclusive and exhaustive class, eliminating all the competitors entails that the proposition is true. (Recall Sherlock Holmes’s famous dictum:
“When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”)

“This is the ideal case, in which eliminative inductions in fact become deductions."
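[The logical form of this ideal case can be stated compactly; the notation below is a standard rendering, not notation taken from the paper. If the hypotheses H_1, ..., H_n are mutually exclusive and jointly exhaustive, then refuting all but one of them deductively yields the survivor.]

```latex
% Ideal (deductive) case of eliminative induction:
% H_1, ..., H_n mutually exclusive and jointly exhaustive.
\[
  \bigl(H_1 \lor H_2 \lor \cdots \lor H_n\bigr)
  \;\land\; \neg H_2 \land \cdots \land \neg H_n
  \;\vdash\; H_1.
\]
```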
"But eliminative inductions can be convincing without knocking down every conceivable alternative, a point John Earman has argued effectively. Earman has shown that eliminative inductions are not just widely employed in the sciences but also indispensable to science..."
"... the other objection, namely, that we must know something about a designer’s nature, purposes, propensities, causal powers, and methods of implementing design before we can legitimately determine whether an object is designed. I refer to the requirement that we must have this independent knowledge of designers as the independent knowledge requirement. This requirement, so we are told, can be met for materially embodied intelligences but can never be met for intelligences that cannot be reduced to matter, energy, and their interactions."
"By contrast, to employ specified complexity to infer design is to take the view that objects, even if nothing is known about how they arose, can exhibit features that reliably signal the action of an intelligent cause."
“To see that the independent knowledge requirement, as a principle for deciding whether something is designed, is fundamentally misguided, consider the following admission by Elliott Sober, who otherwise happens to embrace this requirement:
“To infer watchmaker from watch, you needn’t know exactly what the watchmaker had in mind; indeed, you don’t even have to know that the watch is a device for measuring time. Archaeologists sometimes unearth tools of unknown function, but still reasonably draw the inference that these things are, in fact, tools.”

"Sober’s remark suggests that design inferences may look strictly to features of designed objects and thus presuppose no knowledge about the characteristics of the designer.”
"...what if the designer actually responsible for the object brought it about by means unfathomable to us (e.g., by some undreamt of technologies)? This is the problem of multiple realizability, and it undercuts the independent knowledge requirement because it points up that what leads us to infer design is not knowledge of designers and their capabilities but knowledge of the patterns exhibited by designed objects (a point that specified complexity captures precisely).”
"This last point underscores another problem with the independent knowledge requirement, namely, what I call the problem of inductive regress."
"Suppose... one wants to argue that independent knowledge of designers is the key to inferring design… Consider now some archeologists in the field who stumble across an arrowhead. How do they know that it is indeed an arrowhead and thus the product of design? What sort of archeological background knowledge had to go into their design hypothesis? Certainly, the archeologists would need past experience with arrowheads. But how did they recognize that the arrowheads in their past experience were designed? Did they see humans actually manufacture those arrowheads? …"
“Our ability to recognize design must therefore arise independently of induction and therefore independently of any independent knowledge requirement about the capacities of designers. In fact, it arises directly from the patterns in the world that signal intelligence, to wit, from specifications.”
“Another problem with the independent knowledge requirement is that it hinders us from inferring design that outstrips our intellectual and technological sophistication. I call this the problem of dummied down design: the independent knowledge requirement limits our ability to detect design to the limits we impose on designers. But such limits are artificial.”
“Suppose, for instance, that the molecular biology of the cell is in fact intelligently designed. If so, it represents nanotechnology of a sophistication far beyond anything that human engineers are presently capable of or may ever be capable of.”
“By the independent knowledge requirement, we have no direct experience of designers capable of such design work. Thus, even if system after molecular biological system exhibited high specified complexity, the independent evidence requirement would prevent us from recognizing their design and keep us wedded to wholly inadequate materialistic explanations of these systems.”
“… Should we now think that life at key moments in its history was designed?”
“... it is a necessary condition, if a design inference is to hold, that all relevant chance hypotheses be eliminated.”
“... unknown chance hypotheses have no epistemic significance in defeating design.”
"... specified complexity has rendered all relevant chance alternatives inviable, chance as such is eliminated and design can no longer be denied.”
ACKNOWLEDGMENT:
I want to thank Jay Richards for organizing a symposium on design reasoning at Calvin College back in May, 2001 as well as for encouraging me to write up my thoughts about specification that emerged from this symposium… I also want to thank Rob Koons, Robin Collins, Tim and Lydia McGrew, and Del Ratzsch for their useful feedback at this symposium. Stephen Meyer and Paul Nelson were also at this symposium. They have now tracked my thoughts on specification for almost fifteen years and have been my best conversation partners on this topic. Finally, I wish to thank Richard Dawkins, whose remarks about specification and complexity in The Blind Watchmaker first got me thinking (back in 1989) that here was the key to eliminating chance and inferring design. As I’ve remarked to him in our correspondence (a correspondence that he initiated), “Thanks for all you continue to do to advance the work of intelligent design. You are an instrument in the hands of Providence however much you rail against it.”

Addendum 1: Note to Readers of TDI & NFL
“By separating off prespecifications from specifications, the account of specifications becomes much more straightforward. With specifications, the key to overturning chance is to keep the descriptive complexity of patterns low [see Note 24]”
“…descriptive complexity immediately confers conditional independence… [see Note 24]”
[Note 24: There is a well-established theory of descriptive complexity, which takes as its point of departure the Chaitin-Kolmogorov-Solomonoff theory of bit-string compressibility, namely, the theory of Minimal Description Length (MDL). The fundamental idea behind MDL is that order in data “can be used to compress the data, i.e., to describe it using fewer symbols than needed to describe the data literally.” See http:/// (last accessed June 17, 2005)]
“…prespecifications need not be descriptively simple."
"Think of a coin that’s flipped 1,000 times. The pattern it exhibits will (in all likelihood) be unique in the history of coin tossing and will not be identifiable apart from the actual event of flipping that coin 1,000 times. Such a pattern, if a prespecification, will be tractable but it will not be descriptively simple.”
“On the other hand, a Champernowne sequence of length 1,000 can be readily constructed on the basis of a simple number-theoretic scheme. The underlying pattern here, in virtue of its descriptive simplicity, is therefore tractable with respect to information that is conditionally independent of any actual coin tossing event.”
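[A minimal sketch of the contrast; the generation rule below is the standard binary Champernowne construction, assumed here to be the sort of "simple number-theoretic scheme" the passage has in view. The first 1,000 bits of the sequence are fully specified by a few lines of code, i.e., a short description, whereas a typical record of 1,000 actual coin flips admits no comparably short generating rule and must in effect be written out verbatim.]

```python
def champernowne_bits(n):
    """First n bits of the binary Champernowne sequence: the binary
    expansions of 1, 2, 3, ... concatenated in order."""
    out, k = "", 1
    while len(out) < n:
        out += format(k, "b")
        k += 1
    return out[:n]

# The entire 1,000-bit pattern is pinned down by this short rule --
# low descriptive complexity in the MDL sense sketched in Note 24.
pattern = champernowne_bits(1000)
print(pattern[:37])   # 1101110010111011110001001101010111100
```

[The point of the contrast is description length: the rule above is itself the short description of the whole pattern, while no analogous rule is available for a typical run of real coin flips.]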
“… in the treatment of specification given here, we have a universal probability bound of 10^-120 …probability bounds are better (i.e., more useful in scientific applications) the bigger they are — provided, of course, that they truly are universal [see next equation]”
“… instead of a static universal probability bound of 10^-150, we now have a dynamic one [the previous equation]… that varies with the specificational resources… and thus with the descriptive complexity of T [see Note 24, above]”
“For many design inferences that come up in practice, it seems safe to assume that [the denominator of the previous equation] will not exceed 10^30 (for instance, in section 7 a very generous estimate for the descriptive complexity of the bacterial flagellum came out to 10^20) [see Note 24, above]”
From page 17: “… as a rule of thumb, 10^-120 / 10^30 = 10^-150 can still be taken as a reasonable (static) universal probability bound… the burden is on the design critic to show either that the chance hypothesis H is not applicable or that [the denominator of the previous equation] is much greater than previously suspected… for practical purposes, taking 10^-150 as a universal probability bound still works…”
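[The bracketed equations are not reproduced in this excerpt, but the arithmetic being invoked can be reconstructed from the quoted figures. Writing φ_S(T) for the quantity the next excerpts call "the denominator of the previous equation" (the paper's notation as reconstructed here), the dynamic bound is 10^-120 divided by φ_S(T), and as long as φ_S(T) stays at or below 10^30, that quotient never falls below the familiar static bound of 10^-150.]

```latex
% Editorial reconstruction from the quoted figures (not copied from the
% paper's display): the dynamic universal probability bound, with
% \varphi_S(T) the number of patterns at least as simple to describe as T.
\[
  \frac{10^{-120}}{\varphi_S(T)}
  \;\ge\;
  \frac{10^{-120}}{10^{30}}
  \;=\;
  10^{-150}
  \qquad\text{whenever }\varphi_S(T)\le 10^{30}.
\]
```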
“Each S can therefore rank order these patterns in an ascending order of descriptive complexity, the simpler coming before the more complex, and those of identical complexity being ordered arbitrarily. Given such a rank ordering, it is then convenient to define [the denominator of the previous equation] as follows:
the number of patterns for which S’s semiotic description of them is at least as simple as S’s semiotic description of T [see Note 26].”
“…Think of specificational resources as enumerating the number of tickets sold in a lottery: the more lottery tickets sold, the more likely someone is to win the lottery by chance.”
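[A toy sketch of the rank-ordering and counting rule just quoted; every name and number below is a hypothetical illustration, not a value from the paper. Assign each pattern the length of the agent S's shortest description of it, then count the patterns whose description is at least as simple as the description of T.]

```python
# Hypothetical descriptive complexities: length (in basic concepts) of
# semiotic agent S's shortest description of each pattern.
descriptions = {
    "bidirectional rotary motor-driven propeller": 4,   # the target pattern T
    "outboard rotary motor": 3,
    "helical propeller": 2,
    "random 1000-flip record": 1000,
}

def specificational_resources(target, table):
    """Count the patterns whose description is at least as simple as the
    target's (the counting rule quoted above; ties count as well)."""
    return sum(1 for c in table.values() if c <= table[target])

phi = specificational_resources("bidirectional rotary motor-driven propeller",
                                descriptions)
print(phi)   # 3 -- the target plus the two simpler descriptions
```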
[Note 26: In characterizing how we eliminate chance and infer design, I’m providing what philosophers call a “rational reconstruction.” That is, I’m providing a theoretical framework for something we humans do without conscious attention to theoretical or formal principles. As I see it, semiotic agents like us tacitly assess the descriptive complexity of patterns on the basis of their own personal background knowledge. It is an interesting question, however, whether the complexity of patterns need not be relativized to such agents. Robert Koons considers this possibility by developing a notion of ontological complexity. See Robert Koons, “Are Probabilities Indispensable to the Design Inference,” Progress in Complexity, Information, and Design 1(1) (2002): available online at http://www.iscid.org/papers/Koons_AreProbabilities_112701.pdf (last accessed June 21, 2005)]
“… In my present treatment, specified complexity… is now not merely a property but an actual number calculated by a precise formula [the first equation posted here]”
“This number can be negative, zero, or positive. When the number is greater than 1, it indicates that we are dealing with a specification [the second equation posted here].”
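[The equations referenced but not reproduced in this excerpt appear, from the figures quoted throughout (the 10^120 factor, the role of the specificational resources, and the "greater than 1" threshold), to be of the form chi = -log2[10^120 * phi_S(T) * P(T|H)]. The sketch below is a reconstruction under that assumption, not a verbatim transcription of the paper's formula.]

```python
import math

def specified_complexity(phi_s_t, p_t_given_h):
    """Reconstructed specified-complexity measure:
    chi = -log2(10^120 * phi_S(T) * P(T|H)).
    phi_s_t      -- specificational resources for the pattern T
    p_t_given_h  -- probability of matching T under the chance hypothesis H
    """
    return -math.log2(1e120 * phi_s_t * p_t_given_h)

# Hypothetical numbers in the spirit of the flagellum discussion above:
# phi_S(T) of about 10^20, and two candidate chance probabilities.
print(specified_complexity(1e20, 1e-150))   # about  33.2 -> greater than 1: specification
print(specified_complexity(1e20, 1e-130))   # about -33.2 -> not a specification
```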
Addendum 2: Bayesian Methods
“The approach to design detection that I propose eliminates chance hypotheses when the probability of matching a suitable pattern (i.e., specification) on the basis of these chance hypotheses is small and yet an event matching the pattern still happens (i.e., the arrow lands in a small target). This eliminative approach to statistical rationality, as epitomized in Fisher’s approach to significance testing, is the one most widely employed in scientific applications.”
“Nevertheless, there is an alternative approach to statistical rationality that is at odds with this eliminative approach. This is the Bayesian approach, which is essentially comparative rather than eliminative, comparing the probability of an event conditional on a chance hypothesis to its probability conditional on a design hypothesis, and preferring the hypothesis that confers the greater probability. I’ve argued at length elsewhere that Bayesian methods are inadequate for drawing design inferences.”
“Among the reasons I’ve given is the need to assess prior probabilities in employing these methods, the concomitant problem of rationally grounding these priors, and the lack of empirical grounding in estimating probabilities conditional on design hypotheses.”
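[For reference, the comparative form at issue is the standard posterior-odds version of Bayes's theorem (standard notation, not notation drawn from the paper). It makes visible the two ingredients the passage objects to: the prior odds P(D)/P(C) and the design-conditioned likelihood P(E|D).]

```latex
% Standard posterior-odds form of the Bayesian comparison between a
% design hypothesis D and a chance hypothesis C, given evidence E.
\[
  \frac{P(D \mid E)}{P(C \mid E)}
  \;=\;
  \frac{P(D)}{P(C)} \cdot \frac{P(E \mid D)}{P(E \mid C)}.
\]
```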
“… the most damning problem facing the Bayesian approach to design detection, namely, that it tacitly presupposes the very account of specification that it was meant to preclude.”
“Bayesian theorists see specification as an incongruous and dispensable feature of design inferences. For instance, Timothy and Lydia McGrew, at a symposium on design reasoning, dismissed specification as having no “epistemic relevance.”
“… the Bayesian approach to statistical rationality is parasitic on the Fisherian approach and can properly adjudicate only among competing hypotheses that the Fisherian approach has thus far failed to eliminate.”
“In particular, the Bayesian approach offers no account of how it arrives at the composite events (qua targets qua patterns qua specifications) on which it performs a Bayesian analysis. The selection of such events is highly intentional and, in the case of Bayesian design inferences, presupposes an account of specification."
"Specification’s role in detecting design, far from being refuted by the Bayesian approach, is therefore implicit throughout Bayesian design inferences.”