FAQ

Aren’t some people marginalised by AI applications?

Data bias

Inherent bias in the data sets used to train AI algorithms can result in some groups or individuals being treated unjustly, often with no means of redress by a human arbiter.

Getting left behind

There are also risks associated with all-pervasive AI adoption that offers no viable non-digital alternatives or options to opt out. The categories of risk as they currently stand do not include the risk of people being excluded from society, or of exclusionary practices that leave certain people or groups behind. We must protect the vulnerable and marginalised, recognising that individual categorisation and optimisation (not just social credit scoring) may cause wider injustice to a whole group of people.

