6. AFFECTIVE REACTIONS ARE IN THE RIGHT PLACE AT THE
RIGHT TIME IN THE BRAIN
Damasio’s studies of brain-damaged patients show that the
emotional areas of the brain are the right places to be looking for the
foundations of morality, because losing them interferes with moral
competence. The case would be even stronger if these areas were
active at the right times. Do they become more active just before
someone makes a moral judgment or decision?
In 1999, Joshua Greene, who was then a graduate student in
philosophy at Princeton, teamed up with leading neuroscientist
Jonathan Cohen to see what actually happens in the brain as people
make moral judgments. He studied moral dilemmas in which two
major ethical principles seem to push against each other. For
example, you’ve probably heard of the famous “trolley dilemma,”39
in which the only way you can stop a runaway trolley from killing
five people is by pushing one person off a bridge onto the track
below.
Philosophers have long disagreed about whether it’s acceptable to
harm one person in order to help or save several people.
Utilitarianism is the philosophical school that says you should
always aim to bring about the greatest total good, even if a few
people get hurt along the way, so if there’s really no other way to
save those five lives, go ahead and push. Other philosophers believe
that we have duties to respect the rights of individuals, and we must
not harm people in our pursuit of other goals, even moral goals such
as saving lives. This view is known as deontology (from the Greek
deon, meaning “duty”). Deontologists talk about high
moral principles derived and justified by careful reasoning; they
would never agree that these principles are merely post hoc
rationalizations of gut feelings. But Greene had a hunch that gut
feelings were what often drove people to make deontological
judgments, whereas utilitarian judgments were more cool and
calculating.
To test his hunch, Greene wrote twenty stories that, like the
trolley story, involved direct personal harm, usually done for a good
reason. For example, should you throw an injured person out of a
lifeboat to keep the boat from sinking and drowning the other
passengers? All of these stories were written to produce a strong
negative affective flash.
Greene also wrote twenty stories involving impersonal harm, such
as a version of the trolley dilemma in which you save the five
people by flipping a switch that diverts the trolley onto a side track,
where it will kill just one person. It’s the same objective trade-off of
one life for five, so some philosophers say that the two cases are
morally equivalent, but from an intuitionist perspective, there’s a
world of difference.40 Without that initial flash of horror (that
bare-handed push), the subject is free to examine both options and
choose the one that saves the most lives.
Greene brought eighteen subjects into an fMRI scanner and
presented each of his stories on the screen, one at a time. Each
person had to press one of two buttons to indicate whether or not it
was appropriate for a person to take the course of action described
—for example, to push the man or throw the switch.
The results were clear and compelling. When people read stories
involving personal harm, they showed greater activity in several
regions of the brain related to emotional processing. Across many
stories, the relative strength of these emotional reactions predicted
the average moral judgment.
Greene published this now famous study in 2001 in the journal
Science.41
Since then, many other labs have put people into fMRI
scanners and asked them to look at photographs of moral
violations, make charitable donations, assign punishments for
crimes, or play games with cheaters and cooperators.42 With few
exceptions, the results tell a consistent story: the areas of the brain
involved in emotional processing activate almost immediately, and
high activity in these areas correlates with the kinds of moral
judgments or decisions that people ultimately make.43
In an article titled “The Secret Joke of Kant’s Soul,” Greene
summed up what he and many others had found.44 Greene did not
know what E. O. Wilson had said about philosophers consulting
their “emotive centers” when he wrote the article, but his
conclusion was the same as Wilson’s:
We have strong feelings that tell us in clear and
uncertain terms that some things simply cannot be done
and that other things simply must be done. But it’s not
obvious how to make sense of these feelings, and so we,
with the help of some especially creative philosophers,
make up a rationally appealing story [about rights].
This is a stunning example of consilience. Wilson had prophesied
in 1975 that ethics would soon be “biologicized” and refounded as
the interpretation of the activity of the “emotive centers” of the
brain. When he made that prophecy he was going against the
dominant views of his time. Psychologists such as Kohlberg said that
the action in ethics was in reasoning, not emotion. And the political
climate was harsh for people such as Wilson who dared to suggest
that evolutionary thinking was a valid way to examine human
behavior.
Yet in the thirty-three years between the Wilson and Greene
quotes, everything changed. Scientists in many fields began
recognizing the power and intelligence of automatic processes,
including emotion.45 Evolutionary psychology became respectable,
not in all academic departments but at least among the
interdisciplinary community of scholars that now studies morality.46
In the last few years, the “new synthesis” that Wilson predicted back
in 1975 has arrived.