Test 4
Many people are also simply not familiar with many instances of AI actually working, because it often happens in the background. Instead, they are acutely aware of instances where AI goes wrong. Embarrassing AI failures receive a disproportionate amount of media attention, emphasising the message that we cannot rely on technology. Machine learning is not foolproof, in part because the humans who design it aren't.
D
Feelings about AI run deep. In a recent experiment, people from a range of backgrounds were given various sci-fi films about AI to watch and then asked questions about automation in everyday life. It was found that, regardless of whether the film they watched depicted AI in a positive or negative light, simply watching a cinematic vision of our technological future polarised the participants' attitudes. Optimists became more extreme in their enthusiasm for AI and sceptics became even more guarded.
This suggests people use relevant evidence about AI in a biased manner to support their existing attitudes, a deep-rooted human tendency known as "confirmation bias". As AI is represented more and more in media and entertainment, it could lead to a society split between those who benefit from AI and those who reject it. More pertinently, refusing to accept the advantages offered by AI could place a large group of people at a serious disadvantage.
E
Fortunately, we already have some ideas about how to improve trust in AI. Simply having previous experience with AI can significantly improve people's opinions about the technology, as was found in the study mentioned above. Evidence also suggests the more you use other technologies such as the internet, the more you trust them.
Another solution may be to reveal more about the algorithms which AI uses and the purposes they serve. Several high-profile social media companies and online marketplaces already release transparency reports about government requests and surveillance disclosures. A similar practice for AI could help people have a better understanding of the way algorithmic decisions are made.
F
Research suggests that allowing people some control over AI decision-making could also improve trust and enable AI to learn from human experience. For example, one study showed that when people were allowed the freedom to slightly modify an algorithm, they felt more satisfied with its decisions, more likely to believe it was superior and more likely to use it in the future.
We don't need to understand the intricate inner workings of AI systems, but if people are given a degree of responsibility for how they are implemented, they will be more willing to accept AI into their lives.