3. Artificial Intelligence (AI)
due to the fact that the chain of responsibility for AI-based decision-making ends at an algorithm, not an individual. Advice systems providing no-obligation support are one example of such a core area. Here too, however, complex legal questions can arise very quickly if the people affected miss deadlines or encounter other problems as a result of recommendations made to them. Public authorities can also be active in areas that involve objects (rather than individuals). AI applications in automated transport are one potential area of focus in this context, as is the issue of efficiency savings in public services. The whole field of the “smart city” is thus also often cited as an area with great potential for AI. Dedicated applications here might include traffic forecasts and other predictions as well as route optimisation, but also ways to optimise energy consumption in buildings or predictive urban development.
3.8 Ethics and AI
Although the use of AI can benefit both individuals and society as a whole, it can also bring major risks and significant consequences – the latter being hard to predict and quantify. In particular, the EU Commission’s Expert Group on Artificial Intelligence¹⁸⁷ has identified risks to democracy, the rule of law, distributive justice and the human mind.
The use of personal data harbours the risk of asymmetries of power and information being magnified and abused. These asymmetries can be found in all areas of life, such as between teachers and students, between companies and consumers and between employers and employees. Children and young people need particular protection in this regard. Data used for AI purposes must therefore be prepared transparently at all times so that users always know what data are being stored and why they are being used. Two key questions thus arise: who are the stakeholders providing AI systems, and how do they treat the data generated? The danger here is that individual multinational companies and platforms gain increasing influence over fundamental areas of our lives such as education and healthcare through their hardware and software.

187 See High-Level Expert Group on Artificial Intelligence (2019, 2).
188 See Federal Ministry of Science, Research and Economy (BMWFW) and Federal Ministry for Transport, Innovation and Technology (BMVIT) (2016).
189 See High-Level Expert Group on Artificial Intelligence (2019).
Indeed, some AI processes do not allow users or even the programmers themselves to see what factors are determining the AI’s interaction with its environment. This is because, although the underlying algorithms were created by programmers, they draw their own conclusions via self-learning (“black box”). This situation can result in a lack of transparency. Open-source technologies, by contrast, offer the benefit of greater transparency. Austria’s Open Innovation Strategy explicitly mentions the anchoring of open science, i.e. striving towards an open, collaborative approach by researchers working closely with stakeholders and civil society.¹⁸⁸

As algorithms continuously evolve based on user behaviour in order to adapt their own behaviour, to a certain extent they reproduce the racism and sexism inherent in the underlying data structure. In its Ethics Guidelines, therefore, the European Commission’s Expert Group on Artificial Intelligence recommends that users should always “be given the knowledge and tools to comprehend and interact with AI systems to a satisfactory degree and, where possible, be enabled to reasonably self-assess or challenge the system.”¹⁸⁹
The principle of user autonomy has to underpin the workings of an AI system. The primacy of human agency and human oversight over AI is thus one of the principal ethical guidelines. For this reason, Article 22 of the European General Data Protection Reg-