Understanding the methods of the human brain will help us to design similar biologically inspired machines. Another
important application will be to actually interface our brains with computers, which I believe will become an
increasingly intimate merger in the decades ahead.
Already the Defense Advanced Research Projects Agency is spending $24 million per year on investigating direct
interfaces between brain and computer. As described above (see the section "The Visual System" on p. 185), Tomaso
Poggio and James DiCarlo at MIT, along with Christof Koch at the California Institute of Technology (Caltech), are
attempting to develop models of the recognition of visual objects and how this information is encoded. These could
eventually be used to transmit images directly into our brains.
Miguel Nicolelis and his colleagues at Duke University implanted sensors in the
brains of monkeys, enabling the
animals to control a robot through thought alone. The first step in the experiment involved teaching the monkeys to
control a cursor on a screen with a joystick. The scientists collected patterns of signals from the implanted sensors and
subsequently caused the cursor to respond to the appropriate patterns rather than to physical movements of the joystick.
The monkeys quickly learned that the joystick was no longer operative and that they could control the cursor just by
thinking. This "thought detection" system was then hooked up to a robot, and the monkeys were able to learn how to
control the robot's movements with their thoughts alone. By getting visual feedback on the robot's performance, the
monkeys were able to perfect their thought control over the robot. The goal of this research
is to provide a similar
system for paralyzed humans that will enable them to control their limbs and environment.
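The decoding step can be made concrete with a toy sketch: fit a linear map from recorded firing rates to cursor velocity while the joystick is active, then let neural activity alone drive the cursor through that map. This is only an illustration with arbitrary synthetic data; Nicolelis's group used more sophisticated population-decoding algorithms, and none of the names or numbers below come from the study itself.

    import numpy as np

    rng = np.random.default_rng(0)
    n_samples, n_neurons = 1000, 50

    # Phase 1: record firing rates while the joystick drives the cursor.
    firing_rates = rng.poisson(lam=5.0, size=(n_samples, n_neurons)).astype(float)
    hidden_map = rng.normal(size=(n_neurons, 2))  # unknown rates-to-velocity relation
    cursor_velocity = firing_rates @ hidden_map + rng.normal(scale=0.5, size=(n_samples, 2))

    # Fit a linear decoder by least squares (one weight column per velocity axis).
    weights, *_ = np.linalg.lstsq(firing_rates, cursor_velocity, rcond=None)

    # Phase 2: the joystick is disconnected; fresh neural activity alone
    # moves the cursor (or the robot) through the fitted decoder.
    new_rates = rng.poisson(lam=5.0, size=(1, n_neurons)).astype(float)
    predicted_velocity = new_rates @ weights

Once the animal sees the cursor respond to the decoder's output, visual feedback closes the loop, which is what allowed the monkeys to refine their control.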
A key challenge in connecting neural implants to biological neurons is the brain's glial response: glial cells
surround a "foreign" object in an attempt to protect the brain. Ted Berger and his colleagues are developing special
coatings that will appear to be biological and therefore attract rather than repel nearby neurons.
Another approach, being pursued at the Max Planck Institute for Biochemistry near Munich,
is directly interfacing nerves and electronic devices. A chip created by Infineon allows neurons to grow on a special
substrate that provides direct contact between nerves and electronic sensors and stimulators. Similar work on a
"neurochip" at Caltech
has demonstrated two-way, noninvasive communication between neurons and electronics.117
We have already learned how to interface surgically installed neural implants. In cochlear (inner-ear) implants it
has been found that the auditory nerve reorganizes itself to correctly interpret the multichannel signal from the
implant. A similar process appears to take place with the deep-brain stimulation implant used for Parkinson's patients.
The biological neurons in the vicinity of this FDA-approved brain implant receive signals from the electronic device
and respond just as if they had received signals from the biological neurons that were once functional. Recent versions
of the Parkinson's-disease implant provide the ability to download upgraded software directly to the implant from
outside the patient.
The Accelerating Pace of Reverse Engineering the Brain
Homo sapiens,
the first truly free species, is about to decommission natural selection, the force that made
us....[S]oon we must look deep within ourselves and decide what we wish to become.
—E. O. Wilson, Consilience: The Unity of Knowledge, 1998
We know what we are, but know not what we may be.
—William Shakespeare
The most important thing is this: To be able at any moment to sacrifice what we are for what we could
become.
—Charles DuBois
Some observers have expressed concern that as we develop models, simulations, and extensions to the human brain we
risk not really understanding what we are tinkering with and the delicate balances involved. Author W. French
Anderson writes:
We may be like the young boy who loves to take things apart. He is bright enough to disassemble a watch,
and maybe even bright enough to get it back together so that it works. But what if he tries to "improve" it? ...
The boy can understand what is visible, but he cannot understand the precise engineering
calculations that
determine exactly how strong each spring should be....Attempts on his part to improve the watch will
probably only harm it....I fear ... we, too, do not really understand what makes the [lives] we are tinkering
with tick.118
Anderson's concern, however, does not reflect the scope of the broad and painstaking effort by tens of thousands
of brain and computer scientists to methodically test out the limits and capabilities of models and simulations before
taking them to the next step. We are not attempting to disassemble and reconfigure the brain's trillions of parts without
a detailed analysis at each stage. The process of understanding the principles of operation of the brain is proceeding
through a series of increasingly sophisticated models derived from increasingly accurate and high-resolution data.
As the computational power to emulate the human brain approaches—we're almost there with supercomputers—
the efforts to scan and sense the human brain and to build working models and simulations of it are accelerating. As
with every other
projection in this book, it is critical to understand the exponential nature of progress in this field. I
frequently encounter colleagues who argue that it will be a century or longer before we can understand in detail the
methods of the brain. As with so many long-term scientific projections, this one is based on a linear view of the future
and ignores the inherent acceleration of progress, as well as the exponential growth of each underlying technology.
Such overly conservative views are also frequently based on an underestimation of the breadth of contemporary
accomplishments, even by practitioners in the field.
Scanning and sensing tools are doubling their overall spatial and temporal resolution each year. Scanning
bandwidth, price-performance, and image-reconstruction times are also seeing comparable exponential growth. These
trends hold true for all of the forms of scanning: fully noninvasive scanning, in vivo scanning with an
exposed skull,
and destructive scanning. Databases of brain-scanning information and model building are also doubling in size about
once per year.
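The arithmetic behind these doubling claims is worth making explicit, since it is the source of the linear-versus-exponential disagreement described above. A short illustrative calculation, assuming (as stated) one doubling per year:

    # Annual doubling versus linear improvement over the same period.
    for years in (5, 10, 20):
        exponential_gain = 2 ** years  # compounding: doubles every year
        linear_gain = years + 1        # the same first-year step, merely added
        print(f"{years:2d} years: {exponential_gain:>9,}x exponential vs {linear_gain}x linear")

After twenty years of annual doubling the improvement is 2^20, more than a millionfold, while a linear extrapolation of the same first-year step predicts only about twentyfold.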
We have demonstrated that our ability to build detailed models and working simulations of subcellular portions,
neurons, and extensive neural regions follows closely upon the availability of the requisite tools and data. The
performance of neurons and subcellular portions of neurons often involves substantial complexity and numerous
nonlinearities, but the performance of neural clusters and neuronal regions is often simpler than their constituent parts.
We have increasingly powerful mathematical tools, implemented in effective computer software, that are able to
accurately model these types of complex hierarchical, adaptive, semirandom, self-organizing, highly nonlinear
systems. Our success to date in effectively modeling several important regions of the brain shows the effectiveness of
this approach.
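One reason aggregates can be simpler than their parts is averaging: the mean activity of a large population can often be captured by a low-dimensional equation even when each neuron is noisy and strongly nonlinear. The sketch below is a generic one-population firing-rate model in the spirit of Wilson-Cowan dynamics, offered purely as an illustration of this modeling style; it is not any specific model referred to in the text, and all parameters are arbitrary.

    import numpy as np

    def sigmoid(x):
        # Saturating response function typical of population rate models.
        return 1.0 / (1.0 + np.exp(-x))

    # The entire cluster is summarized by one state variable r(t), the mean
    # firing rate, even though the underlying neurons are individually complex.
    tau, w_recurrent, external_input = 10.0, 5.0, 0.5  # arbitrary values
    dt, steps = 0.1, 2000

    r = 0.0
    for _ in range(steps):
        dr = (-r + sigmoid(w_recurrent * r + external_input)) / tau
        r += dt * dr
    print(f"steady-state population rate: {r:.3f}")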
The generation of scanning tools now emerging will for the first time provide spatial
and temporal resolution
capable of observing in real time the performance of individual dendrites, spines, and synapses. These tools will
quickly lead to a new generation of higher-resolution models and simulations.
Once the nanobot era arrives in the 2020s we will be able to observe all of the relevant features of neural
performance with very high resolution from inside the brain itself. Sending billions of nanobots through its capillaries
will enable us to noninvasively scan an entire working brain in real time. We have already created effective (although
still incomplete) models of extensive regions of the brain with today's relatively crude tools. Within twenty years, we
will have at least a millionfold increase in computational power and vastly improved scanning resolution and
bandwidth. So we can have confidence that we will have the data-gathering and computational tools needed by the
2020s to model
and simulate the entire brain, which will make it possible to combine the principles of operation of
human intelligence with the forms of intelligent information processing that we have derived from other AI research.
We will also benefit from the inherent strength of machines in storing, retrieving, and quickly sharing massive
amounts of information. We will then be in a position to implement these powerful hybrid systems on computational
platforms that greatly exceed the capabilities of the human brain's relatively fixed architecture.