destroy all human life many times over.21
Although they attract relatively little public discussion, the massive opposing
ICBM arsenals of the United States and Russia remain in place despite the apparent thawing of relations.
Nuclear proliferation and the widespread availability of nuclear materials and know-how are another grave concern,
although not an existential one for our civilization. (That is, only an all-out thermonuclear war involving the ICBM
arsenals poses a risk to the survival of all humans.) Nuclear proliferation and nuclear terrorism belong to the
"profound-local" category of risk, along with genocide. The concern is nonetheless severe, because the logic of mutual
assured destruction does not work in the context of suicide terrorists.
Debatably, we have now added another existential risk: the possibility of a bioengineered virus that spreads
easily, has a long incubation period, and delivers an ultimately deadly payload. Some viruses, such as the flu and the
common cold, are easily communicable. Others, such as HIV, are deadly. It is rare for a virus to combine both attributes.
Humans living today are descendants of those who developed natural immunities to most of the highly communicable
viruses. The ability of the species to survive viral outbreaks is one advantage of sexual reproduction, which tends to
ensure genetic diversity in the population, so that the response to specific viral agents is highly variable. Although
catastrophic, the bubonic plague did not kill everyone in Europe. Some pathogens, such as smallpox, combine both
negative characteristics (they are easily contagious and deadly) but have been around long enough that society has had
time to create a technological protection in the form of a vaccine. Genetic engineering, however, has the potential to
bypass these evolutionary protections by suddenly introducing new pathogens for which we have no protection,
natural or technological.
The prospect of adding genes for deadly toxins to easily transmitted common viruses such as the flu and the
common cold introduced another possible existential-risk scenario. It was this prospect that led to the Asilomar
conference, convened to consider how to deal with such a threat, and to the subsequent drafting of a set of safety and
ethics guidelines. Although these guidelines have worked thus far, the underlying technologies for genetic
manipulation are growing rapidly in sophistication.
In 2003 the world struggled, successfully, with the SARS virus. The emergence of SARS resulted from a
combination of an ancient practice (the virus is suspected of having jumped from exotic animals, possibly civet cats, to
humans living in close proximity) and a modern practice (the infection spread rapidly across the world by air travel).
SARS provided us with a dry run of a virus new to human civilization that combined easy transmission, the ability to
survive for extended periods of time outside the human body, and a high degree of mortality, with death rates
estimated at 14 to 20 percent. Again, the response combined ancient and modern techniques.
Our experience with SARS shows that most viruses, even if relatively easily transmitted and reasonably deadly,
represent grave but not necessarily existential risks. SARS, however, does not appear to have been engineered. It
spreads readily through externally transmitted bodily fluids but not easily through airborne particles. Its
incubation period is estimated to range from one day to two weeks; a longer incubation period would allow a
virus to spread through several exponentially growing generations before carriers are identified.22
SARS is deadly, but the majority of its victims do survive. It remains feasible for a virus to be
malevolently engineered so that it spreads more easily than SARS did, has an extended incubation period, and is deadly
to essentially all victims. Smallpox comes close to having these characteristics. Although we have a vaccine (albeit a
crude one), it would not be effective against genetically modified versions of the virus.
As I describe below, the window of malicious opportunity for bioengineered viruses, existential or otherwise, will
close in the 2020s, when we will have fully effective antiviral technologies based on nanobots.23 However, because
nanotechnology will be thousands of times stronger, faster, and more intelligent than biological entities, self-
replicating nanobots will present yet another, and greater, existential risk. The window for malevolent nanobots
will ultimately be closed by strong artificial intelligence, but, not surprisingly, "unfriendly" AI will itself present an
even more compelling existential risk, which I discuss below (see p. 420).