AI systems: Non-transparent, unexplainable, or unjustifiable outcomes
In some cases, machine learning models may generate their results by operating on high-dimensional correlations that are beyond the interpretive capabilities of human reasoning.
In such cases, the rationale behind algorithmically produced outcomes that directly affect decision subjects may remain opaque to those subjects. For some use cases, this lack of explainability causes little trouble.
However, in applications where the processed data could harbor traces of discrimination, bias, inequity, or unfairness, the opacity of the model may be deeply problematic.
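To make this opacity concrete, the following minimal sketch (with entirely hypothetical features and fixed random weights standing in for a trained model, not any real decision system) shows how an outcome can depend on thousands of interacting parameters, so that no single weight or feature offers a human-readable rationale:

```python
import random

# Toy "black-box" scorer: a dense nonlinear combination of many features.
# All names, sizes, and values here are hypothetical, for illustration only.
random.seed(0)
N_FEATURES = 100

# Two layers of fixed random weights stand in for a trained model:
# 20 * 100 + 20 = 2,020 interacting parameters.
W1 = [[random.gauss(0, 1) for _ in range(N_FEATURES)] for _ in range(20)]
W2 = [random.gauss(0, 1) for _ in range(20)]

def score(applicant):
    """Return an opaque score for a 100-feature applicant vector."""
    hidden = [max(0.0, sum(w * x for w, x in zip(row, applicant)))  # ReLU
              for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))

a = [random.gauss(0, 1) for _ in range(N_FEATURES)]
b = list(a)
b[42] += 0.05  # perturb a single feature slightly

# The two scores differ, but nothing in W1 or W2 "explains" why in terms
# a decision subject could inspect: the rationale is distributed across
# high-dimensional correlations among all the parameters at once.
print(score(a), score(b))
```

The point of the sketch is that even full access to the weights does not yield a justification: the decision boundary is an emergent property of the whole parameter set, which is precisely why opacity becomes an ethical problem when the underlying data encode bias.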