HUMAN RIGHTS IN THE AGE OF ARTIFICIAL INTELLIGENCE
Right to privacy
Surveillance drones and other robots have long been used by the military, and they are now increasingly used by law enforcement and non-state actors as well. When equipped with AI-powered capabilities such as facial recognition, and made semi- or fully autonomous—for example, able to follow a particular group or person independently—such drones could deepen the impact of widespread and invasive surveillance that violates the “necessary and proportionate” principles that govern state surveillance.
Right to work
AI-powered robots can enable job automation, and thus they can threaten the right to work in the ways we
explore above.
Right to education
Although still in nascent stages, the use of robotics in education is an active research area. This includes
robots used for tasks like teaching second languages in primary schools, and for storytelling.
As with
more general AI, the risks posed by AI-powered robots concern outcomes that violate equal access. For example, where robots replace teachers in schools, students would receive a different kind of education from that of students taught by humans, and this disparity may constitute a violation of equal access.
VII. RECOMMENDATIONS: HOW TO ADDRESS AI-RELATED
HUMAN-RIGHTS HARMS
Swift action now to address human rights risks can help prevent the foreseeable detrimental impacts of AI, while providing space and a framework for addressing the problems we cannot yet predict. Because AI is such
a large and diverse field, any approach will need to be sector-specific to some extent. However, four broad
policy approaches could address many of the human rights risks posed by AI.
1. Comprehensive data protection legislation can anticipate and mitigate many of the human rights risks
posed by AI. However, because it is specific to data, additional measures are also necessary.
2. Government use of AI should be held to a high standard, including open procurement standards, human rights impact assessments, full transparency, and explainability and accountability processes.
3. Given the private sector’s duty to respect and uphold human rights, companies should go beyond
establishing internal ethics policies and develop transparency, explainability, and accountability processes.
4. Significantly more research should be conducted into the potential human rights harms of AI systems, and investment should be made in creating structures to respond to these risks.