HUMAN RIGHTS IN THE AGE OF ARTIFICIAL INTELLIGENCE
The bottom line: The scale, proliferation, and real-life impact of AI demand attention.
The proliferation of AI in data analytics has come with the rise of big data. In her 2016 book Weapons of Math Destruction, data scientist Cathy O’Neil documented how algorithmic decision-making is now ubiquitous
in the West, from assigning credit scores, to identifying the best candidates for a job position, to ranking
students for college admissions. Today, these algorithmic decision-making systems are increasingly
employing machine learning, and they are spreading rapidly. They have many of the same problems as
traditional statistical analysis. However, the scale and reach of AI systems, the trend toward rapid and careless deployment, the immediate impact these systems have on many people’s lives, and the danger that societies will view their outputs as impartial together pose a series of new problems.
V. HELPFUL AND HARMFUL AI
Every major technological innovation brings the potential to advance or damage society. The data processing and
analysis capabilities of AI can help alleviate some of the world’s most pressing problems, from enabling
advancements in diagnosis and treatment of disease, to revolutionizing transportation and urban living,
to mitigating the effects of climate change. Yet these same capabilities can also enable surveillance on a
scale never seen before, can identify and discriminate against the most vulnerable, and may revolutionize the economy so quickly that no job retraining program can keep up. And despite major strides in the development of AI, the so-called “artificial intelligence revolution” is only a decade old, meaning much of what is to come remains unknown.
Below we identify some of the ways AI is being used to help or harm societies. It is important to note that
even the “helpful” uses of AI have potentially negative implications. For example, many applications of AI in
healthcare pose serious threats to privacy and risk discriminating against underserved communities and
concentrating data ownership within large companies. At the same time, the use of AI to mitigate harm may
not solve underlying problems and should not be treated as a cure for societal ailments. For example, while
AI may alleviate the need for medical professionals in underserved areas, it does not provide the resources or incentives those professionals would need to relocate. Similarly, some of the use cases categorized as “harmful” arose from good intentions, yet are causing significant harm.