We Are Bad Algorithms
Here’s one last way to look at the history of the world:
The difference between life and stuff is that life is stuff that self-replicates. Life is made out
of cells and DNA that spawn more and more copies of themselves.
Over the course of hundreds of millions of years, some of these primordial life forms
developed feedback mechanisms to better reproduce themselves. An early protozoon might
evolve little sensors on its membrane to better detect amino acids by which to replicate more
copies of itself, thus giving it an advantage over other single-cell organisms. But then maybe
some other single-cell organism develops a way to “trick” other little amoeba-like things’
sensors, thus interfering with their ability to find food, and giving itself an advantage.
Basically, there’s been a biological arms race going on since the beginning of forever. This
little single-cell thing develops a cool strategy to get more material to replicate itself than do
other single-cell organisms, and therefore it wins the resources and reproduces more. Then
another little single-cell thing evolves and has an even better strategy for getting food, and it
proliferates. This continues, on and on, for billions of years, and pretty soon you have lizards that
can camouflage their skin and monkeys that can fake animal sounds and awkward middle-aged
divorced men spending all their money on bright red Chevy Camaros even though they can’t
really afford them—all because it promotes their survival and ability to reproduce.
This is the story of evolution—survival of the fittest and all that.
But you could also look at it a different way. You could call it “survival of the best
information processing.”
Okay, not as catchy, perhaps, but it actually might be more accurate.
See, that amoeba that evolves sensors on its membrane to better detect amino acids—that is,
at its core, a form of information processing. It is better able than other organisms to detect the
facts of its environment. And because it developed a better way to process information than other
blobby cell-like things, it won the evolutionary game and spread its genes.
Similarly, the lizard that can camouflage its skin—that, too, has evolved a way to manipulate
visual information to trick predators into ignoring it. Same story with the monkeys faking animal
noises. Same deal with the desperate middle-aged dude and his Camaro (or maybe not).
Evolution rewards the most powerful creatures, and power is determined by the ability to
access, harness, and manipulate information effectively. A lion can hear its prey over a mile
away. A buzzard can see a rat from an altitude of three thousand feet. Whales develop their own
personal songs and can communicate with one another from up to a hundred miles away
underwater. These are all examples of exceptional information-processing capabilities, and that
ability to receive and process information is linked to these creatures’ ability to survive and
reproduce.
Physically, humans are pretty unexceptional. We are weak, slow, and frail, and we tire
easily.
But we are nature’s ultimate information processors. We are the only species that can
conceptualize the past and future, that can deduce long chains of cause and effect, that can plan
and strategize in abstract terms, that can build and create and problem-solve in perpetuity.
Out
of millions of years of evolution, the Thinking Brain (Kant’s sacred conscious mind) is what has,
in a few short millennia, dominated the entire planet and called into existence a vast, intricate
web of production, technology, and networks.
That’s because we are algorithms. Consciousness itself is a vast network of algorithms and
decision trees—algorithms based on values and knowledge and hope.
Our algorithms worked pretty well for the first few hundred thousand years. They worked
well on the savannah, when we were hunting bison and living in small nomadic communities and
never met more than thirty people in our entire lives.
But in a globally networked economy of billions of people, stocked with thousands of nukes
and Facebook privacy violations and holographic Michael Jackson concerts, our algorithms kind
of suck. They break down and enter us into ever-escalating cycles of conflict that, by the nature
of our algorithms, can produce no permanent satisfaction, no final peace.
It’s like that brutal advice you sometimes hear, that the only thing all your fucked-up
relationships have in common is you. Well, the only thing that all the biggest problems in the
world have in common is us. Nukes wouldn’t be a problem if there weren’t some dumb fuck
sitting there tempted to use them. Biochemical weapons, climate change, endangered species,
genocide—you name it, none of it was an issue until we came along.
Domestic violence, rape,
money laundering, fraud—it’s all us.
Life is fundamentally built on algorithms. We just happen to be the most sophisticated and
complex algorithms nature has yet produced, the zenith of about one billion years’ worth of
evolutionary forces. And now we are on the cusp of producing algorithms that are exponentially
better than we are.
Despite all our accomplishments, the human mind is still incredibly flawed. Our ability to
process information is hamstrung by our emotional need to validate ourselves. It is curved
inward by our perceptual biases. Our Thinking Brain is regularly hijacked and kidnapped by our
Feeling Brain’s incessant desires—stuffed in the trunk of the Consciousness Car and often
gagged or drugged into incapacitation.
And as we’ve seen, our moral compass too frequently gets swung off course by our
inevitable need to generate hope through conflict. As the moral psychologist Jonathan Haidt put
it, “morality binds and blinds.”
Our Feeling Brains are antiquated, outdated software. And
while our Thinking Brains are decent, they’re too slow and clunky to be of much use anymore.
Just ask Garry Kasparov.
We are a self-hating, self-destructive species.
That is not a moral statement; it’s simply a
fact. This internal tension we all feel, all the time? That’s what got us here. It’s what got us to
this point. It’s our arms race. And we’re about to hand over the evolutionary baton to the
defining information processors of the next epoch: the machines.
When Elon Musk was asked what the most imminent threats to humanity were, he quickly said
there were three: first, wide-scale nuclear war; second, climate change—and then, before naming
the third, he fell silent. His face became sullen. He looked down, deep in thought. When the
interviewer asked him, “What is the third?” he smiled and said, “I just hope the computers
decide to be nice to us.”
There is a lot of fear out there that AI will wipe away humanity. Some suspect this might
happen in a dramatic Terminator 2–type conflagration. Others worry that some machine will kill
us off by “accident,” that an AI designed to innovate better ways to make toothpicks will
somehow discover that harvesting human bodies is the best way.
Bill Gates, Stephen Hawking,
and Elon Musk are just a few of the leading thinkers and scientists who have crapped their pants
at how rapidly AI is developing and how underprepared we are as a species for its repercussions.
But I think this fear is a bit silly. For one, how do you prepare for something that is vastly
more intelligent than you are? It’s like training a dog to play chess against . . . well, Kasparov.
No matter how much the dog thinks and prepares, it’s not going to matter.
More important, the machines’ understanding of good and evil will likely surpass our own.
As I write this, five different genocides are taking place in the world.
Seven hundred ninety-five million people are starving or undernourished.
By the time you finish this chapter, more
than a hundred people, just in the United States, will be beaten, abused, or killed by a family
member, in their own home.
Are there potential dangers with AI? Sure. But morally speaking, we’re throwing rocks
inside a glass house here. What do we know about ethics and the humane treatment of animals,
the environment, and one another? That’s right: pretty much nothing. When it comes to moral
questions, humanity has historically flunked the test, over and over again. Superintelligent
machines will likely come to understand life and death, creation and destruction, on a much
higher level than we ever could on our own. And the idea that they will exterminate us for the
simple fact that we aren’t as productive as we used to be, or that sometimes we can be a
nuisance, I think, is just projecting the worst aspects of our own psychology onto something we
don’t understand and never will.
Or, here’s an idea: What if technology advances to such a degree that it renders individual
human consciousness arbitrary? What if consciousness can be replicated, expanded, and
contracted at will? What if removing all these clunky, inefficient biological prisons we call
“bodies,” or all these clunky, inefficient psychological prisons we call “individual identities,”
results in far more ethical and prosperous outcomes? What if the machines realize we’d be much
happier being freed from our cognitive prisons and having our perception of our own identities
expanded to include all perceivable reality? What if they think we’re just a bunch of drooling
idiots and keep us occupied with perfect virtual reality porn and amazing pizza until we all die
off of natural causes?
Who are we to know? And who are we to say?
Nietzsche wrote his books just a couple of decades after Darwin’s On the Origin of Species was
published in 1859. By the time Nietzsche came onto the scene, the world was reeling from
Darwin’s magnificent discoveries, trying to process and make sense of their implications.
And while the world was freaking out about whether humans really evolved from apes or
not, Nietzsche, as usual, looked in the opposite direction of everyone else. He took it as obvious
that we evolved from apes. After all, he said, why else would we be so horrible to one another?
Instead of asking what we evolved from, Nietzsche instead asked what we were evolving
toward.
Nietzsche said that man was a transition, suspended precariously on a rope between two
ledges, with beasts behind us and something greater in front of us. His life’s work was dedicated
to figuring out what that something greater might be and then pointing us toward it.
Nietzsche envisioned a humanity that transcended religious hopes, that extended itself
“beyond good and evil,” and rose above the petty quarrels of contradictory value systems. It is
these value systems that fail us and hurt us and keep us down in the emotional holes of our own
creation. The emotional algorithms that exalt life and make it soar in blistering joy are the same
forces that unravel us and destroy us, from the inside out.
So far, our technology has exploited the flawed algorithms of our Feeling Brain. Technology
has worked to make us less resilient and more addicted to frivolous diversions and pleasures,
because these diversions are incredibly profitable. And while technology has liberated much of
the planet from poverty and tyranny, it has produced a new kind of tyranny: a tyranny of empty,
meaningless variety, a never-ending stream of unnecessary options.
It has also armed us with weapons so devastating that we could torpedo this whole
“intelligent life” experiment ourselves if we’re not careful.
I believe artificial intelligence is Nietzsche’s “something greater.” It is the Final Religion, the
religion that lies beyond good and evil, the religion that will finally unite and bind us all, for
better or worse.
It is, then, simply our job not to blow ourselves up before we get there.
And the only way to do that is to adapt our technology for our flawed psychology rather than
to exploit it.
To create tools that promote greater character and maturity in our cultures rather than
diverting us from growth.
To enshrine the virtues of autonomy, liberty, privacy, and dignity not just in our legal
documents but also in our business models and our social lives.
To treat people not merely as means but also as ends, and more important, to do it at scale.
To encourage antifragility and self-imposed limitation in each of us, rather than protecting
everyone’s feelings.
To create tools to help our Thinking Brain better communicate and manage the Feeling
Brain, and to bring them into alignment, producing the illusion of greater self-control.
Look, it may be that you came to this book looking for some sort of hope, an assurance that
things will get better—do this, that, and the other thing, and everything will improve.
I am sorry. I don’t have that kind of answer for you. Nobody does. Because even if all the
problems of today get magically fixed, our minds will still perceive the inevitable fuckedness of
tomorrow.
So, instead of looking for hope, try this:
Don’t hope.
Don’t despair, either.
In fact, don’t deign to believe you know anything. It’s that assumption of knowing with such
blind, fervent, emotional certainty that gets us into these kinds of pickles in the first place.
Don’t hope for better. Just be better.
Be something better. Be more compassionate, more resilient, more humble, more disciplined.
Many people would also throw in there “Be more human,” but no—be a better human. And
maybe, if we’re lucky, one day we’ll get to be more than human.