participate in the human rights assessment process
3. Ensure transparency and explainability.
Maximum possible transparency is necessary for any AI
system, including transparency regarding its purpose, how it is used, and how it works, which must continue
throughout a system’s life cycle. Non-disclosure agreements and other third-party contracts invoked under the
guise of protecting intellectual property violate this principle because they prevent public oversight
and accountability. Specifically, adequate transparency and explainability must include:
• Regular reporting of where and how governments use and manage AI systems
• Use of open data standards in both training data and code to the fullest extent possible, while
adhering to privacy standards 119
• Enabling independent audits of systems and data
• Clear and accessible reporting of the operation of any AI system. This means providing meaningful
information about how outputs are reached and what actions are taken to minimize rights-harming impacts
• Targeted notification when a government AI system makes a decision that impacts an individual’s rights
• Avoidance of “black box systems,” meaning avoidance of any AI system when a person cannot
meaningfully understand how it works
4. Establish accountability and procedures for remedy. The use of an AI system to perform a task previously done
by a human does not remove standard requirements for responsibility and accountability in government
decision-making processes. There should always be a human in the loop, and in high-risk areas, including
criminal justice, significant human oversight is necessary. Governments should set policies regarding
automation of processes, with an eye to human rights impacts. Additionally, individuals must have the right
to challenge the use of an AI system or appeal a decision informed or wholly made by an AI system. More
specifically, accountability and remedy require:
• Proper training for operators of an AI system. Government employees who use and manage an AI
system must understand how it works, the bounds of its use, and its potential for harm. Proper training
keeps humans in the loop in a meaningful way and increases the likelihood of spotting
harmful outcomes.
• Establishing responsibility for the outputs of an AI system. Although states often rely on third parties
to design and implement AI systems, ultimate responsibility for human rights interferences must
lie with states. To protect against abuse, government entities must acquire the technical expertise
necessary to thoroughly vet a given system.
• Establishing mechanisms for appealing any given use or specific determination of an AI system.
118 See, e.g., International Principles on the Application of Human Rights to Communications Surveillance, last accessed June 15, 2018, available at
https://necessaryandproportionate.org/.
119 See https://www.opengovpartnership.org/sites/default/files/open-gov-guide_summary_all-topics.pdf for more information on open data standards
for government data.