Despite its status as an unsolvable problem (and one of the most famous), we can determine the busy-beaver function for some n's. (Interestingly, it is also an unsolvable problem to separate those n's for which we can determine the busy beaver of n from those for which we cannot.) For example, the busy beaver of 6 is easily determined to be 35. With seven states, a Turing machine can multiply, so the busy beaver of 7 is much bigger: 22,961. With eight states, a Turing machine can compute exponentials, so the busy beaver of 8 is even bigger: approximately 10^43. We can see that this is an "intelligent" function, in that it requires greater intelligence to solve for larger n's.
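The figures quoted above follow the book's own formulation of the function. Under the standard convention (Tibor Radó's Σ function, which counts the 1s left on the tape by the best halting n-state, two-symbol machine), the values differ, but for very small n the function really can be found by exhaustive search, which is a useful way to see what "busy beaver" means. A minimal brute-force sketch, assuming a step cap large enough to catch every halting machine at these sizes:

```python
from itertools import product

SYMBOLS = (0, 1)
MOVES = (-1, 1)  # move head left or right

def run(machine, n_states, max_steps=100):
    """Simulate a machine on a blank tape; return the number of 1s
    written if it halts within max_steps, else None."""
    tape = {}          # sparse tape, default cell value 0
    head, state = 0, 0
    for _ in range(max_steps):
        write, move, nxt = machine[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        if nxt == n_states:        # index n_states is the halt state
            return sum(tape.values())
        state = nxt
    return None                    # did not halt within the cap

def busy_beaver(n_states, max_steps=100):
    """Brute-force Radó's Sigma(n): the most 1s any halting
    n-state, two-symbol machine leaves on the tape."""
    keys = [(s, sym) for s in range(n_states) for sym in SYMBOLS]
    # Each transition: (symbol to write, head move, next state or halt)
    actions = list(product(SYMBOLS, MOVES, range(n_states + 1)))
    best = 0
    for combo in product(actions, repeat=len(keys)):
        ones = run(dict(zip(keys, combo)), n_states, max_steps)
        if ones is not None and ones > best:
            best = ones
    return best

print(busy_beaver(1))  # 1
print(busy_beaver(2))  # 4
```

This exhaustive approach already enumerates over twenty thousand machines at n = 2; the combinatorial explosion (and the impossibility of bounding the step cap in general) is exactly why the function is uncomputable for large n.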
By the time we get to 10, a Turing machine can perform types of calculations that are impossible for a human to
follow (without help from a computer). So we were able to determine the busy beaver of 10 only with a computer's
assistance. The answer requires an exotic notation to write down, in which we have a stack of exponents, the height of
which is determined by another stack of exponents, the height of which is determined by another stack of exponents,
and so on. Because a computer can keep track of such complex numbers, whereas the human brain cannot, it appears
that computers will prove more capable of solving unsolvable problems than humans will.
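The "stack of exponents" described above is iterated exponentiation, sometimes called a power tower. A short sketch (an illustration, not the actual busy-beaver computation) shows how quickly even a tiny tower outgrows ordinary notation:

```python
def power_tower(base, height):
    """Evaluate base^(base^(...^base)) with `height` copies of base."""
    result = 1
    for _ in range(height):
        result = base ** result
    return result

# Digit counts of towers of 2s at heights 1..5: 1, 1, 2, 5, 19729.
# One more level (2^(2^65536)) already has far too many digits to
# store, let alone write down -- hence the need for stacked notation.
for h in range(1, 6):
    print(h, len(str(power_tower(2, h))))
```

A number whose exponent stack has a height given by *another* such stack is hopeless to write in positional notation at all, which is why the busy beaver of 10 can only be expressed this way.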
The Criticism from Failure Rates
Jaron Lanier, Thomas Ray, and other observers all cite high failure rates of technology as a barrier to its continued
exponential growth. For example, Ray writes:
The most complex of our creations are showing alarming failure rates. Orbiting satellites and telescopes, space shuttles, interplanetary probes, the Pentium chip, computer operating systems, all seem to be pushing the limits of what we can effectively design and build through conventional approaches.... Our most complex software (operating systems and telecommunications control systems) already contains tens of millions of lines of code. At present it seems unlikely that we can produce and manage software with hundreds of millions or billions of lines of code.32
First, we might ask what alarming failure rates Ray is referring to. As mentioned earlier, computerized systems of
significant sophistication routinely fly and land our airplanes automatically and monitor intensive care units in
hospitals, yet almost never malfunction. If alarming failure rates are of concern, they're more often attributable to
human error. Ray alludes to problems with Intel microprocessor chips, but these problems have been extremely subtle,
have caused almost no repercussions, and have quickly been rectified.
The complexity of computerized systems has indeed been scaling up, as we have seen, and moreover the cutting
edge of our efforts to emulate human intelligence will utilize the self-organizing paradigms that we find in the human
brain. As we continue our progress in reverse engineering the human brain, we will add new self-organizing methods
to our pattern recognition and AI toolkit. As I have discussed, self-organizing methods help to alleviate the need for
unmanageable levels of complexity. As I pointed out earlier, we will not need systems with "billions of lines of code"
to emulate human intelligence.
It is also important to point out that imperfection is an inherent feature of any complex process, and that certainly
includes human intelligence.