In his guide Understanding Artificial Intelligence Ethics and Safety: A Guide for the Responsible Design and Implementation of AI Systems in the Public Sector, supported by The Alan Turing Institute's Public Policy Programme, Dr. David Leslie writes:
"When humans do things that require intelligence, we hold them responsible for the accuracy, reliability, and soundness of their judgments. Moreover, we demand of them that their actions and decisions be supported by good reasons, and we hold them accountable for the fairness, equity, and reasonableness of how they treat others."
According to Marvin Minsky, the American cognitive scientist, AI pioneer, and co-founder of the MIT Artificial Intelligence Laboratory, Artificial Intelligence is "the science of making computers do things that require intelligence when done by humans."
It is this standard definition that offers a clue to the motivation behind the development of applied ethics for Artificial Intelligence as a field.
According to Dr. Leslie, principles tailored to the design and use of AI systems are needed because the emergence of these systems, and their expanding power to do things that require intelligence, has heralded the shift of a wide array of cognitive functions into algorithmic processes, which themselves can be held neither directly responsible nor immediately accountable for the consequences of their behavior.
Program-based machinery, such as AI systems, cannot be considered morally accountable agents. This reality created room for a discipline that could address the ethical breach in the applied science of Artificial Intelligence.
This is precisely the gap that frameworks for AI ethics are now trying to fill. Principles such as fairness, accountability, sustainability, and transparency are meant to bridge the distance between the new smart agency of machines and their fundamental lack of moral responsibility.
Humans, on the other hand, are held responsible when they do things that require intelligence. In other words, at the level at which Artificial Intelligence currently operates, humans alone bear responsibility for their program-based creations.
Those who design and implement AI systems must therefore be held accountable. Perhaps in the future, artificial general intelligence may become a moral agent with attributed moral responsibility.
However, for now, engineers and designers of AI systems must assume responsibility and be held accountable for what they create, design, and program.