Even systems procured and implemented transparently and with stakeholder input may still pose a risk of
significant human rights violations.120 To address this, there should be a process that allows the public
to contest the use of an AI system in its entirety.
Recommendations for Private-Sector and Non-State Use of AI
Private-sector actors also have a responsibility to respect human rights, independent of state obligations. To
meet their duty, private-sector actors must take ongoing steps to ensure they do not cause or contribute to
human rights abuses.121 The establishment of AI ethics policies by many of the large private-sector players
is laudable, but a human rights impact assessment should be integrated into larger ethics review processes.
Additionally, private-sector actors should pursue transparency and explainability measures, as well as
establish procedures to ensure accountability and access to remedy. Collectively, this human rights due
diligence, informed by expert stakeholders, assists companies in preventing and mitigating abuses. However,
firms must also meet their obligations to redress harms directly or indirectly resulting from their operations,
via rights-respecting processes developed in consultation with affected communities. We recognize that
not all AI uses have equal risk of human rights harms, and that actions required to prevent and respond to
human rights violations will depend on the context. Specifically, private-sector actors should:
1. Conduct human rights due diligence as per the UN Guiding Principles on Business and Human Rights,
consisting of the following three core steps:122
1. Identify potentially adverse outcomes for human rights. Private-sector actors should assess the risk that
an AI system may cause or contribute to human rights violations. In doing this, actors must:
• Identify both direct and indirect harm, as well as emotional, social, environmental, or other non-financial harm.
• Consult with relevant stakeholders in an inclusive manner, particularly any affected groups, human
rights organizations, and independent human rights and AI experts.
• If the system is intended for use by a government entity, both the public and private actors should
conduct an assessment.
2. Take effective action to prevent and mitigate the harms, as well as track the responses. After identifying
human rights risks, private-sector actors must mitigate risks and track them over time. This requires
private-sector actors to:
• Correct the system, including where risks stem from the training data, the design of the model, or the
impact of the system.
• Ensure diversity and inclusion of relevant expertise to prevent bias by design and inadvertent harms.
• Submit AI systems with significant risk of human rights abuses to independent third-party audits.
• Halt deployment of any AI system in a context where the risk of human rights violations is too high
or impossible to mitigate.
• Track steps taken to mitigate human rights harms and evaluate their efficacy. This includes regular
quality assurance checks and auditing throughout the system’s life cycle. This is particularly important
given the role of negative feedback loops that can exacerbate harmful outcomes.
120 The state of Pennsylvania has worked hard to implement algorithmic transparency in its use of automated decision-making systems. However,
the public comment process and open data standards have not stopped problematic systems from being used. See https://slate.com/technology/2018/07/
pennsylvania-commission-on-sentencing-is-trying-to-make-its-algorithm-transparent.html.
121 See the UN Guiding Principles on Business and Human Rights.
122 Adapted from the Toronto Declaration.