Another example is the ability to send information over unconditioned phone lines, which improved from 300 bits per second to 56,000 bps in twelve years, a 55 percent annual increase.[11]
Some of this improvement came from better hardware design, but most of it is the result of algorithmic innovation.
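The quoted growth rate can be checked directly. The sketch below uses only the figures stated in the text (300 bps to 56,000 bps over twelve years) to compute the implied compound annual rate:

```python
# Compound annual growth rate implied by the modem figures in the text:
# 300 bps rising to 56,000 bps over twelve years.
start_bps = 300
end_bps = 56_000
years = 12

# CAGR formula: (end / start) ** (1 / years) - 1
cagr = (end_bps / start_bps) ** (1 / years) - 1
print(f"{cagr:.1%}")  # about 54.6%, i.e. roughly a 55 percent annual increase
```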
One of the key processing problems is converting a signal into its frequency components using Fourier
transforms, which express signals as sums of sine waves. This method is used in the front end of computerized speech
recognition and in many other applications. Human auditory perception also starts by breaking the speech signal into
frequency components in the cochlea. The 1965 "radix-2 Cooley-Tukey algorithm" for a "fast Fourier transform"
reduced the number of operations required for a 1,024-point Fourier transform by a factor of about two hundred.[12] An improved
"radix-4" method further boosted the factor to about eight hundred. Recently "wavelet" transforms have been
introduced, which are able to express arbitrary signals as sums of waveforms more complex than sine waves. These
methods provide further dramatic increases in the efficiency of breaking down a signal into its key components.
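As a concrete illustration of the reduction described above, here is a minimal recursive radix-2 Cooley-Tukey FFT (a sketch, not an optimized implementation), together with the standard operation-count comparison for a 1,024-point transform: a direct DFT needs on the order of n² complex multiplications, while the radix-2 FFT needs about (n/2)·log₂n, a ratio of roughly two hundred.

```python
import cmath
import math

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # transform of the even-indexed samples
    odd = fft(x[1::2])    # transform of the odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

n = 1024
direct_ops = n * n                        # direct DFT: ~n^2 multiplications
fft_ops = (n // 2) * int(math.log2(n))    # radix-2 FFT: (n/2) * log2(n)
print(direct_ops / fft_ops)               # 204.8: the "about two hundred" factor
```

The counts are the textbook asymptotic estimates, not measured instruction counts, but they reproduce the factor-of-two-hundred improvement the text cites.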
The examples above are not anomalies; most computationally intensive "core" algorithms have undergone
significant reductions in the number of operations required. Other examples include sorting, searching, autocorrelation
(and other statistical methods), and information compression and decompression. Progress has also been made in
parallelizing algorithms—that is, breaking a single method into multiple methods that can be performed
simultaneously. As I discussed earlier, parallel processing inherently runs at a lower temperature. The brain uses
massive parallel processing as one strategy to achieve more complex functions and faster reaction times, and we will
need to utilize this approach in our machines to achieve optimal computational densities.
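The decomposition idea can be sketched in a few lines. The example below is an illustration, not something from the text: it uses Python's standard concurrent.futures module, with threads standing in for parallel hardware, to split one summation into independent chunks whose partial results are computed simultaneously and then combined.

```python
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(chunk):
    """One independent work unit: sum a slice of the data."""
    return sum(chunk)

def parallel_sum(data, workers=4):
    """Break a single summation into pieces that can run simultaneously."""
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(chunk_sum, chunks))

print(parallel_sum(list(range(1_000))))  # 499500, same answer as sum(range(1_000))
```

Summation parallelizes cleanly because addition is associative; for CPU-bound work one would use processes rather than threads, but the decomposition pattern is the same.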
There is an inherent difference between the improvements in hardware price-performance and improvements in
software efficiencies. Hardware improvements have been remarkably consistent and predictable. As we master each
new level of speed and efficiency in hardware we gain powerful tools to continue to the next level of exponential
improvement. Software improvements, on the other hand, are less predictable. Richards and Shaw call them "wormholes in development time," because we can often achieve the equivalent of years of hardware improvement through a
single algorithmic improvement. Note that we do not rely on ongoing progress in software efficiency, since we can
count on the ongoing acceleration of hardware. Nonetheless, the benefits from algorithmic breakthroughs contribute
significantly to achieving the overall computational power to emulate human intelligence, and they are likely to
continue to accrue.