more intelligent results. Wolfram says that the class 4 automata and an evolutionary algorithm are
"computationally equivalent." But that is true only on what I consider the "hardware" level. On the software
level, the patterns produced are clearly different and of a different order of complexity and usefulness.
An evolutionary algorithm can start with randomly generated potential solutions to a problem, which are
encoded in a digital genetic code. We then have the solutions compete with one another in a simulated
evolutionary battle. The better solutions survive and procreate in a simulated sexual reproduction in which
offspring solutions are created, drawing their genetic code (encoded solutions) from two parents. We can
also introduce a rate of genetic mutation. Various high-level parameters
of this process, such as the rate of
mutation, the rate of offspring, and so on, are appropriately called "God parameters," and it is the job of the
engineer designing the evolutionary algorithm to set them to reasonably optimal values. The process is run
for many thousands of generations of simulated evolution, and at the end of the process one is likely to find
solutions that are of a distinctly higher order than the starting ones.
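To make the process concrete, here is a minimal sketch in Python of the algorithm just described, applied to a toy problem (maximizing the number of 1-bits in a genome). The population size, mutation rate, and generation count stand in for the "God parameters"; the particular values below are illustrative guesses, not tuned settings from any specific system.

    # A minimal evolutionary algorithm: random initial solutions,
    # survival of the fitter half, simulated sexual reproduction,
    # and a rate of genetic mutation. Toy fitness: count of 1-bits.
    import random

    GENOME_LEN = 64
    POP_SIZE = 100          # "God parameter": number of competing solutions
    MUTATION_RATE = 0.01    # "God parameter": per-bit chance of mutation
    GENERATIONS = 200       # "God parameter": how long evolution runs

    def fitness(genome):
        # Stands in for "quality of the solution" on a real problem.
        return sum(genome)

    def crossover(a, b):
        # Offspring draw their genetic code from two parents,
        # split at a random point.
        point = random.randrange(GENOME_LEN)
        return a[:point] + b[point:]

    def mutate(genome):
        # Each bit flips independently with probability MUTATION_RATE.
        return [bit ^ 1 if random.random() < MUTATION_RATE else bit
                for bit in genome]

    # Start with randomly generated potential solutions.
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for _ in range(GENERATIONS):
        # The better solutions survive...
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        # ...and procreate to refill the population.
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(POP_SIZE - len(survivors))]
        population = survivors + children

    print("best fitness:", fitness(max(population, key=fitness)))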
The results of these evolutionary (sometimes called genetic)
algorithms can be elegant, beautiful, and
intelligent solutions to complex problems. They have been used, for example, to create artistic designs and
designs for artificial life-forms, as well as to execute a wide range of practical assignments such as
designing jet engines. Genetic algorithms are one approach to "narrow" artificial intelligence—that is,
creating systems that can perform particular functions that used to require the application of human
intelligence.
But something is still missing. Although genetic algorithms are a useful tool
in solving specific problems,
they have never achieved anything resembling "strong AI"—that is, aptitude resembling the broad, deep,
and subtle features of human intelligence, particularly its powers of pattern recognition and command of
language. Is the problem that we are not running the evolutionary algorithms long enough? After all, humans
evolved through a process that took billions of years. Perhaps we cannot re-create that process with just a
few days or weeks of computer simulation. This won't work, however, because conventional genetic
algorithms reach an asymptote
in their level of performance, so running them for a longer period of time
won't help.
A third level (beyond the ability of cellular processes to produce apparent randomness and genetic
algorithms to produce focused intelligent solutions) is to perform evolution on multiple levels. Conventional
genetic algorithms allow evolution only within the confines of a narrow problem and a single means of
evolution. The genetic code itself needs to evolve; the rules of evolution need to evolve. Nature did not stay
with a single chromosome, for example. There have been many levels of indirection
incorporated in the
natural evolutionary process. And we require a complex environment in which the evolution takes place.
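One simple, well-known way to let a rule of evolution itself evolve is "self-adaptation," borrowed here from evolution strategies (a sketch of my own, not a construction from Wolfram): give each individual its own mutation rate, and mutate that rate along with the genome, so that the machinery of variation is itself subject to selection.

    # Self-adaptive mutation: the individual carries both a solution
    # and one of the rules by which it evolves. Illustrative only.
    import random

    def mutate(individual):
        genome, rate = individual
        # First perturb the mutation rate itself...
        new_rate = min(0.5, max(0.001, rate * random.uniform(0.8, 1.25)))
        # ...then apply the (newly evolved) rate to the genome.
        new_genome = [bit ^ 1 if random.random() < new_rate else bit
                      for bit in genome]
        return (new_genome, new_rate)

    # An individual is now (solution, evolutionary-rule) together:
    parent = ([random.randint(0, 1) for _ in range(64)], 0.01)
    child = mutate(parent)
    print("parent rate %.4f -> child rate %.4f" % (parent[1], child[1]))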
To build strong AI we will have the opportunity to short-circuit this process, however, by reverse-
engineering the human brain, a project well under way, thereby benefiting from the evolutionary process that
has already taken place. We will be applying evolutionary algorithms within these solutions just as the
human brain does. For example, the fetal wiring is initially random within constraints
specified in the genome
in at least some regions. Recent research shows that areas having to do with learning undergo more
change, whereas structures having to do with sensory processing experience less change after birth.72
Wolfram makes the valid point that certain (indeed, most) computational processes are not predictable.
In other words, we cannot predict the future states of a process without running the whole process. I agree
with him that we can know the answer in advance only if somehow we can simulate a process at a faster speed. Given that
the universe runs at the fastest speed it can run, there is usually no way to short-circuit the process.
However, we have the benefits of the billions of years of evolution that have already taken place, which are
responsible for the greatly increased order of complexity in the natural world. We can
now benefit from it by
using our evolved tools to reverse engineer the products of biological evolution (most importantly, the human
brain).
Yes, it is true that some phenomena in nature that may appear complex at some level are merely the
results of simple underlying computational mechanisms that are essentially cellular automata at work. The
interesting pattern of triangles on a "tent olive" shell (cited extensively by Wolfram) or the intricate and varied
patterns of a snowflake are good examples. I don't think this is a new observation, in that we've always
regarded the design of snowflakes to derive from a simple molecular computation-like building process.
However, Wolfram does provide us with a compelling theoretical foundation for expressing these processes
and their resulting patterns. But there is more to biology than class 4 patterns.
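To see how little machinery such patterns require, here is a sketch of an elementary cellular automaton in Python, running Wolfram's rule 30, whose nested triangles resemble the cone-shell pattern; the grid width and step count are arbitrary choices.

    # An elementary cellular automaton: each new cell depends only on
    # its three-cell neighborhood, and the rule number's eight bits
    # encode all eight possible outcomes. Rule 30 yields the familiar
    # nested-triangle pattern.
    RULE = 30
    WIDTH, STEPS = 63, 32

    row = [0] * WIDTH
    row[WIDTH // 2] = 1          # single black cell in the middle

    for _ in range(STEPS):
        print("".join("#" if c else " " for c in row))
        row = [(RULE >> (row[(i - 1) % WIDTH] * 4 +
                         row[i] * 2 +
                         row[(i + 1) % WIDTH])) & 1
               for i in range(WIDTH)]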
Another important thesis of Wolfram's lies in his thorough treatment of
computation as a simple and
ubiquitous phenomenon. Of course, we've known for more than a century that computation is inherently
simple: we can build any possible level of complexity from a foundation of the simplest possible
manipulations of information.
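A standard textbook illustration of that claim (my example, not Wolfram's): a single primitive operation such as NAND suffices to build all Boolean logic, and from there adders, arithmetic units, and entire computers by nothing more than further composition.

    # All Boolean logic composed from one primitive operation.
    def nand(a, b):
        return 1 - (a & b)

    def not_(a):      return nand(a, a)
    def and_(a, b):   return not_(nand(a, b))
    def or_(a, b):    return nand(not_(a), not_(b))
    def xor_(a, b):   return and_(or_(a, b), nand(a, b))

    # A one-bit full adder built purely from the NAND-derived gates.
    def full_adder(a, b, carry_in):
        s = xor_(xor_(a, b), carry_in)
        carry_out = or_(and_(a, b), and_(carry_in, xor_(a, b)))
        return s, carry_out

    print(full_adder(1, 1, 1))   # -> (1, 1), i.e. 1 + 1 + 1 = binary 11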
For example, Charles Babbage's nineteenth-century mechanical computer (which never ran)
offered only a handful of operation codes, yet provided (within its memory capacity and speed) the same
kinds of transformations that modern computers do. The complexity of Babbage's invention stemmed only
from the details of its design, which indeed proved too difficult for Babbage to
implement using the
technology available to him.
The Turing machine, Alan Turing's theoretical conception of a universal computer in 1936, provides only
seven very basic commands, yet can be organized to perform any possible computation.73
The existence of
a "universal Turing machine," which can simulate any possible Turing machine that is described on its tape
memory, is a further demonstration of the universality and simplicity of information.74
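A sketch of how small such an interpreter can be: the following few lines of Python simulate any Turing machine supplied as a transition table. The unary-addition machine shown is a toy of my own choosing; the point is that the simulator itself stays tiny whatever machine it runs.

    # A minimal Turing machine simulator: read the symbol under the
    # head, look up (write, move, next state), repeat until "halt".
    def run(program, tape, state="start", blank=" "):
        tape = dict(enumerate(tape))
        head = 0
        while state != "halt":
            symbol = tape.get(head, blank)
            write, move, state = program[(state, symbol)]
            tape[head] = write
            head += 1 if move == "R" else -1
        return "".join(tape[i] for i in sorted(tape)).strip()

    # Unary addition: turn "111+11" into "11111" by overwriting the
    # '+' with a '1' and erasing the final '1'.
    program = {
        ("start", "1"): ("1", "R", "start"),
        ("start", "+"): ("1", "R", "seek_end"),
        ("seek_end", "1"): ("1", "R", "seek_end"),
        ("seek_end", " "): (" ", "L", "erase"),
        ("erase", "1"): (" ", "L", "halt"),
    }

    print(run(program, "111+11"))   # -> "11111"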