Runaway AI.
Once strong AI is achieved, it can readily be advanced and its powers multiplied, as that is the
fundamental nature of machine abilities. As one strong AI immediately begets many strong AIs, the latter access their
own design, understand and improve it, and thereby very rapidly evolve into a yet more capable, more intelligent AI,
with the cycle repeating itself indefinitely. Each cycle not only creates a more intelligent AI but takes less time than
the cycle before it, as is the nature of technological evolution (or any evolutionary process). The premise is that once
strong AI is achieved, it will immediately become a runaway phenomenon of rapidly escalating superintelligence.
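To see why shrinking cycle times imply a runaway rather than a gradual slowdown, here is a minimal arithmetic sketch (the constant-ratio assumption is added purely for illustration and is not claimed by the text): if the first improvement cycle takes time \(t_0\) and each later cycle takes a fixed fraction \(r < 1\) of the one before it, then cycle \(n\) takes \(t_0 r^{\,n}\), and the total time for all cycles is the convergent geometric series

\[ \sum_{n=0}^{\infty} t_0 r^{\,n} = \frac{t_0}{1-r}, \qquad 0 < r < 1, \]

so an unbounded number of improvement cycles, and hence unbounded escalation of capability, would fit within a finite span of time.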
My own view is only slightly different. The logic of runaway AI is valid, but we still need to consider the timing.
Achieving human levels in a machine will not immediately cause a runaway phenomenon. Consider that a human level
of intelligence has limitations. We have examples of this today—about six billion of them. Consider a scenario in
which you took one hundred humans from, say, a shopping mall. This group would constitute examples of reasonably
well-educated humans. Yet if this group were presented with the task of improving human intelligence, it wouldn't get
very far, even if provided with the templates of human intelligence. It would probably have a hard time creating a
simple computer. Speeding up the thinking and expanding the memory capacities of these one hundred humans would
not immediately solve this problem.
I pointed out above that machines will match (and quickly exceed) peak human skill in each area. So
instead, let's take one hundred scientists and engineers. A group of technically trained people with the right
backgrounds would be capable of improving accessible designs. If a machine attained equivalence to one hundred (and
eventually one thousand, then one million) technically trained humans, each operating much faster than a biological
human, a rapid acceleration of intelligence would ultimately follow.
However, this acceleration won't happen immediately when a computer passes the Turing test. The Turing test is
comparable to matching the capabilities of an average, educated human and thus is closer to the example of humans
from a shopping mall. It will take time for computers to master all of the requisite skills and to marry these skills with
all the necessary knowledge bases.
Once we've succeeded in creating a machine that can pass the Turing test (around 2029), the succeeding period
will be an era of consolidation in which nonbiological intelligence will make rapid gains. However, the extraordinary
expansion contemplated for the Singularity, in which human intelligence is multiplied by billions, won't take place
until the mid-2040s (as discussed in chapter 3).