Just look at the trendiest fields in computer science today. The very term “machine
learning” evokes imagery of replacement, and its boosters seem to believe that
computers can be taught to perform almost any task, so
long as we feed them enough
training data. Any user of Netflix or Amazon has experienced the results of machine
learning firsthand: both companies use algorithms to recommend products based on your
viewing and purchase history. Feed them more data and the recommendations get ever
better. Google Translate works the same way, providing
rough but serviceable
translations into any of the 80 languages it supports—not because the software
understands human language, but because it has extracted patterns through statistical
analysis of a huge corpus of text.
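To make the idea concrete, here is a minimal sketch, in Python, of the kind of pattern extraction a recommender might perform: count which items appear together in users' histories, then suggest an item's most frequent companions. The data and names are hypothetical, and the real systems at Netflix or Amazon are vastly more sophisticated; this only illustrates how raw co-occurrence statistics, with no understanding at all, can yield useful suggestions, and why more data sharpens them.

```python
# A toy sketch of pattern-based recommendation (illustrative only; not how
# Netflix or Amazon actually implement their systems).
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories: each inner list is one user's items.
histories = [
    ["kettle", "teapot", "mug"],
    ["kettle", "mug"],
    ["teapot", "mug", "tea"],
    ["kettle", "teapot"],
]

# Count how often each pair of items appears in the same history.
co_occurrence = Counter()
for items in histories:
    for a, b in combinations(sorted(set(items)), 2):
        co_occurrence[(a, b)] += 1

def recommend(item, top_n=3):
    """Suggest the items that most often co-occur with `item`."""
    scores = Counter()
    for (a, b), count in co_occurrence.items():
        if a == item:
            scores[b] += count
        elif b == item:
            scores[a] += count
    return [other for other, _ in scores.most_common(top_n)]

print(recommend("kettle"))  # e.g. ['mug', 'teapot']
```

Every additional history refines the counts, so the suggestions improve with more data, exactly as described above, and at no point does the program understand what a kettle is.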
The other buzzword that epitomizes a bias toward substitution is “big data.” Today’s
companies have an
insatiable appetite for data, mistakenly believing that more data
always creates more value. But big data is usually dumb data. Computers can find
patterns that elude humans, but they don’t know how to compare patterns from different
sources or how to interpret complex behaviors. Actionable insights can only come from a
human analyst (or the kind of generalized artificial intelligence that exists only in
science fiction).
We have let ourselves become enchanted by big data only because we exoticize
technology. We’re impressed with small feats accomplished by computers alone, but we
ignore big achievements from complementarity because
the human contribution makes
them less uncanny. Watson, Deep Blue, and ever-better machine learning algorithms are
cool. But the most valuable companies in the future won’t ask what problems can be
solved with computers alone. Instead, they’ll ask:
how can computers help humans solve
hard problems?
EVER-SMARTER COMPUTERS: FRIEND OR FOE?
The future of computing is necessarily full of unknowns. It’s become conventional to see
ever-smarter anthropomorphized robot intelligences like Siri and Watson as harbingers
of things to come; once computers can answer all our questions, perhaps they’ll ask why
they should remain subservient to us at all.
The logical endpoint to this substitutionist thinking is called “strong AI”: computers
that eclipse humans on every important dimension. Of course, the Luddites are terrified
by the possibility. It even makes the futurists a little uneasy; it’s not clear whether strong
AI would save humanity or doom it.
Technology is supposed to increase our mastery over nature and reduce the role of
chance in our lives; building smarter-than-human
computers could actually bring chance back with a vengeance. Strong AI is like a cosmic
lottery ticket: if we win, we get utopia; if we lose, Skynet substitutes us out of existence.
But even if strong AI is a real possibility rather than an imponderable mystery, it
won’t happen anytime soon: replacement by computers is a worry for the 22nd century.
Indefinite fears about the far future shouldn’t stop us from making definite plans today.
Luddites claim that we shouldn’t build the computers that might replace people
someday; crazed futurists argue that we should. These two positions are mutually
exclusive but they are not exhaustive: there is room in between for sane people to build a
vastly better world in the decades ahead. As we
find new ways to use computers, they
won’t just get better at the kinds of things people already do; they’ll help us to do what
was previously unimaginable.
SEEING GREEN
AT THE START of the 21st century, everyone agreed that the next big thing was clean
technology. It had to be: in Beijing, the smog had gotten so bad that people couldn’t see
from building to building—even breathing was a health risk. Bangladesh, with its
arsenic-laden water wells, was suffering what the New York Times called “the largest
mass poisoning of a population in history.”