Attitudes towards Artificial Intelligence
A
Artificial intelligence (AI) can already predict the future. Police forces are using it to map when and where crime is likely to occur. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.
Many decisions in our lives require a good forecast, and AI is almost always better at forecasting than we are. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don't like relying on AI and prefer to trust human experts, even if these experts are wrong.
If we want AI to really benefit people, we need to find a way to get people to trust
it. To do that, we need to understand why people are so reluctant to trust AI in the
first place.
B
Take the case of Watson for Oncology, one of technology giant IBM's supercomputer programs. Their attempt to promote this program to cancer doctors was a PR disaster. The AI promised to deliver top-quality recommendations on the treatment of 12 cancers that accounted for 80% of the world's cases. But when doctors first interacted with Watson, they found themselves in a rather difficult situation. On the one hand, if Watson provided guidance about a treatment that coincided with their own opinions, physicians did not see much point in Watson's recommendations. The supercomputer was simply telling them what they already knew, and these recommendations did not change the actual treatment.
On the other hand, if Watson generated a recommendation that contradicted the experts' opinion, doctors would typically conclude that Watson wasn't competent. And the machine wouldn't be able to explain why its treatment was plausible because its machine-learning algorithms were simply too complex to be fully understood by humans. Consequently, this caused even more suspicion and disbelief, leading many doctors to ignore the seemingly outlandish AI recommendations and stick to their own expertise.
C
This is just one example of people's lack of confidence in AI and their reluctance to accept what AI has to offer. Trust in other people is often based on our understanding of how others think and having experience of their reliability. This helps create a psychological feeling of safety. AI, on the other hand, is still fairly new and unfamiliar to most people. Even if it can be technically explained (and that's not always the case), AI's decision-making process is usually too difficult for most people to comprehend. And interacting with something we don't understand can cause anxiety and give us a sense that we're losing control.
Many people are also simply not familiar with many instances of AI actually working, because it often happens in the background. Instead, they are acutely aware of instances where AI goes wrong. Embarrassing AI failures receive a disproportionate amount of media attention, emphasising the message that we cannot rely on technology. Machine learning is not foolproof, in part because the humans who design it aren't.
D
Feelings about AI run deep. In a recent experiment, people from a range of backgrounds were given various sci-fi films about AI to watch and then asked questions about automation in everyday life. It was found that, regardless of whether the film they watched depicted AI in a positive or negative light, simply watching a cinematic vision of our technological future polarised the participants' attitudes. Optimists became more extreme in their enthusiasm for AI and sceptics became even more guarded.
This suggests people use relevant evidence about AI in a biased manner to support their existing attitudes, a deep-rooted human tendency known as "confirmation bias". As AI is represented more and more in media and entertainment, it could lead to a society split between those who benefit from AI and those who reject it. More pertinently, refusing to accept the advantages offered by AI could place a large group of people at a serious disadvantage.
E
Fortunately, we already have some ideas about how to improve trust in AI. Simply having previous experience with AI can significantly improve people's opinions about the technology, as was found in the study mentioned above. Evidence also suggests the more you use other technologies such as the internet, the more you trust them.
Another solution may be to reveal more about the algorithms which AI uses and the purposes they serve. Several high-profile social media companies and online marketplaces already release transparency reports about government requests and surveillance disclosures. A similar practice for AI could help people have a better understanding of the way algorithmic decisions are made.
F
Research suggests that allowing people some control over AI decision-making could also improve trust and enable AI to learn from human experience. For example, one study showed that when people were allowed the freedom to slightly modify an algorithm, they felt more satisfied with its decisions, more likely to believe it was superior and more likely to use it in the future.
We d o n ’t need to understand the intricate inner w orkings o f A l systems, but if
people are given a degree o f responsibility for how they are implemented, they will
be m ore w illing to accept A l into their lives.