draft for the equities market, but not the tripling that you spoke about, Ray, due to the effect that George was
describing.
MOLLY 2004: Okay, I'm sorry I asked. I think I'll just hold on to the few shares I've got and not worry about it.
RAY: What have you invested in?
MOLLY 2004: Let's see, there's this new natural language-based search-engine company that hopes to take on Google. And I've also invested in a fuel-cell company. Also, a company building sensors that can travel in the bloodstream.
RAY: Sounds like a pretty high-risk, high-tech portfolio.
MOLLY 2004: I wouldn't call it a portfolio. I'm just dabbling with the technologies you're talking about.
RAY: Okay, but keep in mind that while the trends predicted by the law of accelerating returns are remarkably smooth, that doesn't mean we can readily predict which competitors will prevail.
MOLLY 2004: Right, that's why I'm spreading my bets.
CHAPTER THREE
Achieving the Computational Capacity of the Human Brain
As I discuss in Engines of Creation, if you can build genuine AI, there are reasons to believe that you can
build things like neurons that are a million times faster. That leads to the conclusion that you can make
systems that think a million times faster than a person. With AI, these systems could do engineering design.
Combining this with the capability of a system to build something that is better than it, you have the
possibility for a very abrupt transition. This situation may be more difficult to deal with even than
nanotechnology, but it is much more difficult to think about it constructively at this point. Thus, it hasn't been
the focus of things that I discuss, although I periodically point to it and say: "That's important too."
—ERIC DREXLER, 1989
The Sixth Paradigm of Computing Technology: Three-Dimensional
Molecular Computing and Emerging Computational Technologies
In the April 19, 1965, issue of Electronics, Gordon Moore wrote, "The future of integrated electronics is the
future of electronics itself. The advantages of integration will bring about a proliferation of electronics, pushing
this science into many new areas."1 With those modest words, Moore ushered in a revolution that is still
gaining momentum. To give his readers some idea of how profound this new science would be, Moore predicted that
"by 1975, economics may dictate squeezing as many as 65,000 components on a single silicon chip." Imagine that.
Moore's article described the repeated annual doubling of the number of transistors (used for computational
elements, or gates) that could be fitted onto an integrated circuit. His 1965 "Moore's Law" prediction was criticized at
the time because his logarithmic chart of the number of components on a chip had only five reference points (from
1959 through 1965), so projecting this nascent trend all the way out to 1975 was seen as premature. Moore's initial
estimate was incorrect, and he revised it downward a decade later. But the basic idea—the exponential growth of the
price-performance of electronics based on shrinking the size of transistors on an integrated circuit—was both valid and
prescient.2
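To make the arithmetic behind Moore's 1975 figure concrete, here is a minimal Python sketch (not from Moore's paper; the starting count of roughly 64 components per chip in 1965 is an assumption chosen for illustration) showing how annual doubling over a decade lands close to 65,000:

    # Illustrative only: annual doubling of components per integrated circuit.
    components_1965 = 64        # assumed 1965 starting point for this sketch
    years = 10                  # 1965 through 1975
    projection_1975 = components_1965 * 2 ** years
    print(projection_1975)      # 65536, i.e., "as many as 65,000 components"

The exact starting count matters less than the compounding: ten successive doublings multiply any baseline by about a thousand.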
Today, we talk about billions of components rather than thousands. In the most advanced chips of 2004, logic
gates are only fifty nanometers wide, already well within the realm of nanotechnology (which deals with
measurements of one hundred nanometers or less). The demise of Moore's Law has been predicted on a regular basis,
but the end of this remarkable paradigm keeps getting pushed out in time. Paolo Gargini, Intel Fellow, director of Intel
technology strategy, and chairman of the influential International Technology Roadmap for Semiconductors (ITRS),
recently stated, "We see that for at least the next 15 to 20 years, we can continue staying on Moore's Law. In fact, ...
nanotechnology offers many new knobs we can turn to continue improving the number of components on a die."3
The acceleration of computation has transformed everything from social and economic relations to political
institutions, as I will demonstrate throughout this book. But Moore did not point out in his papers that the strategy of
shrinking feature sizes was not, in fact, the first paradigm to bring exponential growth to computation and
communication. It was the fifth, and already, we can see the outlines of the next: computing at the molecular level and
in three dimensions. Even though we have more than a decade left of the fifth paradigm, there has already been
compelling progress in all of the enabling technologies required for the sixth paradigm. In the next section, I provide
an analysis of the amount of computation and memory required to achieve human levels of intelligence and why we
can be confident that these levels will be achieved in inexpensive computers within two decades. Even these very
powerful computers will be far from optimal, and in the last section of this chapter I'll review the limits of computation
according to the laws of physics as we understand them today. This will bring us to computers circa the late twenty-
first century.