to follow advances in both biological and technological evolution, I believe that this observation is not precisely
correct. But let's first examine what complexity means.
Not surprisingly, the concept of complexity is complex. One concept of complexity is the minimum amount of
information required to represent a process. Let's say you have a design for a system (for example, a computer
program or a computer-assisted design file for a computer), which can be described by a data file containing one
million bits. We could say your design has a complexity of one million bits. But suppose we notice
that the one million
bits actually consist of a pattern of one thousand bits that is repeated one thousand times. We could note the
repetitions, remove the repeated patterns, and express the entire design in just over one thousand bits, thereby reducing
the size of the file by a factor of about one thousand.
The most popular data-compression techniques use similar methods of finding redundancy within information.[3]
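
As a concrete illustration (my own sketch, not part of the original text), the following Python fragment uses the standard zlib module, whose DEFLATE algorithm finds repeated patterns in much the way described above. The pattern length is chosen only to mirror the thousand-bit example; DEFLATE will not reach the full thousand-fold reduction, but the contrast with an unpatterned file of the same size makes the point.

    import os
    import zlib

    # A "million-bit design file" that is really a 1,000-bit pattern repeated 1,000 times.
    pattern = os.urandom(125)            # 125 bytes = 1,000 bits of arbitrary content
    redundant_file = pattern * 1_000     # 125,000 bytes = 1,000,000 bits

    compressed = zlib.compress(redundant_file, level=9)
    print(len(redundant_file), "->", len(compressed), "bytes")
    # DEFLATE notices the repetition and shrinks the file by roughly two orders of
    # magnitude; it falls short of the ideal ~1,000x because its back-references
    # carry encoding overhead of their own.

    # The same compressor gains essentially nothing on a million unpatterned bits.
    unpatterned_file = os.urandom(125_000)
    print(len(unpatterned_file), "->", len(zlib.compress(unpatterned_file, level=9)), "bytes")
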
But after you've compressed a data file in this way, can you be absolutely certain that
there are no other rules or
methods that might be discovered that would enable you to express the file in even more compact terms? For example,
suppose my file was simply "pi" (3.1415...) expressed to one million bits of precision. Most data-compression
programs would fail to recognize this sequence and would not compress the million bits at all, since the bits in a binary
expression of pi are effectively random and thus have no repeated pattern according to all tests of randomness.
But if we can determine that the file (or a portion of the file) in fact represents pi, we can easily express it (or that
portion of it) very compactly as "pi to one million bits of accuracy." Since we can never be sure that we have not
overlooked some even more compact representation of
an information sequence, any amount of compression sets only
an upper bound for the complexity of the information. Murray Gell-Mann provides one definition of complexity along
these lines. He defines the "algorithmic information content" (AIC) of a set of information as "the length of the shortest
program that will cause a standard universal computer to print out the string of bits and then halt."[4]
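
To make this definition concrete, here is a minimal sketch of my own in Python; Machin's arctangent formula is my choice of algorithm and is not mentioned in the text. A program of roughly a kilobyte of source prints pi to any requested precision, so the algorithmic information content of "pi to one million bits" is tiny, even though a general-purpose compressor such as zlib finds nothing to squeeze out of the resulting bits.

    import zlib

    def arctan_recip(x: int, scale: int) -> int:
        """Integer approximation of arctan(1/x) * scale via the alternating Taylor series."""
        total = term = scale // x
        divisor = 1
        while term:
            term //= x * x
            divisor += 2
            total += term // divisor if divisor % 4 == 1 else -(term // divisor)
        return total

    def pi_digits(n: int) -> int:
        """Return floor(pi * 10**n) using Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
        guard = 10                              # extra digits absorb floor-division error
        scale = 10 ** (n + guard)
        pi_scaled = 16 * arctan_recip(5, scale) - 4 * arctan_recip(239, scale)
        return pi_scaled // 10 ** guard

    digits = 2_000                              # modest size here; the idea scales to a million bits
    pi_int = pi_digits(digits)
    print(str(pi_int)[:12])                     # 314159265358...

    # The bits of pi look random to a general-purpose compressor: zlib cannot shrink them.
    raw = pi_int.to_bytes((pi_int.bit_length() + 7) // 8, "big")
    print(len(raw), "->", len(zlib.compress(raw, level=9)), "bytes")

    # Yet this entire script is about a kilobyte, so the *shortest program* that prints
    # these bits -- Gell-Mann's AIC -- is minuscule compared with the bits themselves.
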
However, Gell-Mann's concept is not fully adequate. If we have a file with random information, it cannot be
compressed. That observation is, in fact, a key criterion for determining if a sequence of numbers is truly random.
However, if any random sequence will
do for a particular design, then this information can be characterized by a
simple instruction, such as "put random sequence of numbers here." So the random sequence, whether it's ten bits or
one billion bits, does not represent a significant amount of complexity, because it is characterized by a simple
instruction. This is the difference between a random sequence and an unpredictable sequence of
information that has
purpose.
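
A brief sketch (again mine, not the author's) captures the point: when a design only says "put a random sequence of numbers here," that one instruction is the entire description, no matter how many bits it ends up producing.

    import os

    def realize_design(num_bits: int) -> bytes:
        # The whole specification is this one instruction: "put a random sequence here."
        # Its descriptive length stays constant whether it fills ten bits or a billion.
        return os.urandom(num_bits // 8)

    small_fill = realize_design(80)         # ten bytes of arbitrary content
    large_fill = realize_design(8_000_000)  # a megabyte of arbitrary content, same short spec
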
To gain some further insight into the nature of complexity, consider the complexity of a rock. If we were to
characterize all of the properties (precise location, angular momentum, spin, velocity, and so on) of
every atom in the
rock, we would have a vast amount of information. A one-kilogram (2.2-pound) rock has 10²⁵ atoms which, as I will
discuss in the next chapter, can hold up to 10²⁷ bits of information. That's one hundred million billion times more
information than the genetic code of a human (even without compressing the genetic code).[5]
But for most common
purposes, the bulk of this information is largely random and of little consequence. So we can characterize the rock for
most purposes with far less information just by specifying its shape and the type of material of which it is made. Thus,
it is reasonable to consider the complexity of an ordinary rock to be far less than that of a
human even though the rock
theoretically contains vast amounts of information.[6]