HUMAN RIGHTS IN THE AGE OF ARTIFICIAL INTELLIGENCE
IV. WHAT MAKES THE RISKS OF AI DIFFERENT?
Many of the problems and risks explored in this report are not new. So how is AI different from the technologies that have come before it? Because AI has evolved from existing technologies in both sophistication and scale, it may exacerbate existing problems and introduce new ones, with significant implications for accountability and reliability. To illustrate this, consider two recent tech trends: big data and the rise of algorithmic decision-making.
Today, algorithmic decision-making is largely digital. In many cases it employs statistical methods similar to those used to create the pen-and-paper sentencing algorithm discussed above. Before AI, algorithms were deterministic, that is, pre-programmed and unchanging. Because they are based on statistical modeling, these algorithms suffer from the same problems as traditional statistics, such as poorly sampled data, biased data, and measurement errors. But because they are pre-programmed, the recommendations they make can be traced.
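To make that contrast concrete, the sketch below shows what a deterministic, pre-programmed scoring rule looks like. The factors and weights are hypothetical, chosen only for illustration and not taken from the sentencing tool discussed in this report; the point is that every rule is fixed and explicit, so the output can be traced back to its inputs.

```python
# A minimal sketch of a deterministic scoring algorithm. The factors and
# weights here are hypothetical, not taken from any real sentencing tool;
# because the rules are fixed and explicit, every score can be traced
# back to the inputs that produced it.

def risk_score(prior_convictions, age, employed):
    """Return a risk score computed from hand-written, unchanging rules."""
    score = 0
    score += 2 * prior_convictions  # each prior conviction adds 2 points
    if age < 25:
        score += 3                  # being under 25 adds 3 points
    if not employed:
        score += 1                  # unemployment adds 1 point
    return score

# Because each rule is explicit, the output is fully traceable:
# 2 priors (4) + under 25 (3) + unemployed (1) = 8
print(risk_score(prior_convictions=2, age=22, employed=False))
```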
The use of AI in algorithmic decision-making has introduced a new set of challenges. Because machine 
learning algorithms use statistics, they also have the same problems with biased data and measurement 
error as their deterministic predecessors. However, ML systems differ in a few key ways. First, whereas traditional statistical modeling is about creating a simple model in the form of an equation, machine learning is far more granular: it captures a multitude of patterns that cannot be expressed in a single equation. Second, unlike deterministic algorithms, machine learning algorithms calibrate themselves. Because they identify so many patterns, they are too complex for humans to understand, and thus it is not possible to trace the decisions or recommendations they make.[29] In addition, many machine learning algorithms constantly re-calibrate themselves through feedback. An example of this is the e-mail spam filter, which continually learns and improves its spam detection capabilities as users mark messages as spam.
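As a rough illustration of self-calibration through feedback, the sketch below implements a toy, naive-Bayes-style spam filter. It is not any particular vendor's filter; it simply shows how each user report of "spam" or "not spam" shifts the statistics the classifier relies on, so the model's future decisions change without anyone re-programming it.

```python
# A toy, naive-Bayes-style spam filter (hypothetical, for illustration only).
# Every piece of user feedback updates the word statistics, and those
# statistics in turn determine how future messages are classified.

from collections import defaultdict
import math


class FeedbackSpamFilter:
    def __init__(self):
        # per-label word counts and message counts, updated continuously
        self.word_counts = {"spam": defaultdict(int), "ham": defaultdict(int)}
        self.message_counts = {"spam": 0, "ham": 0}

    def update(self, text, label):
        """Incorporate one piece of user feedback: label is 'spam' or 'ham'."""
        self.message_counts[label] += 1
        for word in text.lower().split():
            self.word_counts[label][word] += 1

    def spam_probability(self, text):
        """Score a message; higher values mean more spam-like."""
        total_messages = sum(self.message_counts.values())
        log_scores = {}
        for label in ("spam", "ham"):
            # class prior with simple smoothing
            log_p = math.log((self.message_counts[label] + 1) / (total_messages + 2))
            denom = sum(self.word_counts[label].values()) + 2
            for word in text.lower().split():
                # per-word likelihood with simple smoothing
                log_p += math.log((self.word_counts[label].get(word, 0) + 1) / denom)
            log_scores[label] = log_p
        # convert the two log scores into a probability of spam
        peak = max(log_scores.values())
        odds = {k: math.exp(v - peak) for k, v in log_scores.items()}
        return odds["spam"] / (odds["spam"] + odds["ham"])


flt = FeedbackSpamFilter()
flt.update("win a free prize now", "spam")    # a user marks this message as spam
flt.update("meeting agenda attached", "ham")  # a user marks this one as legitimate
print(flt.spam_probability("free prize inside"))  # rises as spam reports accumulate
```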
Another issue is the impact of error rates. Because of their statistical basis, all ML systems have error rates. Even though in many cases ML systems are far more accurate than human beings, there is danger in assuming that simply because a system's predictions are more accurate than a human's, the outcome is necessarily better. Even if the error rate is close to zero, in a tool with millions of users, thousands of people could still be affected by errors. Consider the example of Google Photos. In 2015, Google Photos' image recognition software was found to have a terribly prejudicial and offensive error: it was occasionally labeling photos of black people as gorillas. Because the system used a complex ML model, engineers were unable to figure out why this was happening. The only "solution" they could devise for this "racist" ML was a band-aid: they removed any monkey-related words from the list of image tags.[30]
Now, imagine a similar software system used by U.S. Customs and Border Protection that photographs every person who enters and exits the U.S. and cross-references each photo with a database of photos of known or suspected criminals and terrorists. In 2016, an estimated 75.9 million people arrived in the United States.[31] Even if the facial recognition system were 99.9% accurate, the 0.1% error rate would result in 75,900 people being misidentified. How many of these people would be falsely identified as wanted criminals and detained? And what would the impact be on their lives? Conversely, how many known criminals would get away? Even relatively narrow error rates in cases such as these can have severe consequences.
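The arithmetic behind that estimate is worth making explicit; the short snippet below simply applies the report's 0.1% error rate to the 75.9 million arrivals figure.

```python
# A quick check of the arithmetic in the example above.
arrivals = 75_900_000   # estimated 2016 arrivals cited in the report
error_rate = 0.001      # 99.9% accuracy leaves a 0.1% error rate

misidentified = int(arrivals * error_rate)
print(misidentified)    # 75900 people potentially misidentified
```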
29 “What Is The Difference Between Machine Learning & Statistical Modeling,” accessed May 12, 2018, https://www.analyticsvidhya.com/blog/2015/07/difference-machine-learning-statistical-modeling/.
30 “When It Comes to Gorillas, Google Photos Remains Blind,” WIRED, accessed May 13, 2018, https://www.wired.com/story/when-it-comes-to-gorillas-google-photos-remains-blind/.
31 U.S. Travel Association, https://www.ustravel.org/answersheet.

