Will a bill of digital rights protect us?
Careful framing
A bill of digital rights could go a long way towards protecting humanity, but only if it is framed carefully, with a full understanding of what it means to be a human being and a clearly defined set of harms that AI applications can cause human beings.
These could include: the right not to be manipulated; the right not to be exploited on the basis of any attribute which might make a person vulnerable, or otherwise to have unfair advantage taken of a vulnerability (whether known to the person or not); the right not to suffer exclusion or detrimental or unfavourable treatment as a result of an inference or prediction of personal or personality traits, without consent or a lawful, equitable and justified reason; and the right to seek human review and redress.
Not too narrow
Any bill of digital rights that seeks only to ensure consistency with the EU Charter of Fundamental Rights and the existing secondary Union legislation on data protection, consumer protection, non-discrimination and gender equality will be too narrow in scope with regard to the impacts of AI on humanity, since it will not pick up harms that fall outside these concerns.
European Evangelical Alliance, response to EU white paper (2021)
What concerns?
Moral Agency
Delegating moral agency to an artefact harms humanity, because moral agency is uniquely human.
Relationships
AI can devalue, de-humanise, and undermine human-to-human relationships. There is also a potential risk of dependency (both emotional and physical) on humanoid, embodied or personified AI robotics, causing concerns over human-to-machine relationships.
Cognitive Skills
AI applications can slowly erode and phase out human cognitive skills such as critical thinking.
Dignity
The right to human dignity lies at the very heart of fundamental rights, and that respect for human dignity should be followed through into the areas where AI can be used: for human augmentation; for nurturing or caring for the elderly, sick and infirm; or, after death, in representations of a deceased person. There is also dignity in work, and a bill of digital rights should cover the need to balance the efficiency that AI software and robotics can provide with preserving human work.
Limiting life's opportunities
Whilst it is acknowledged that both private and public sector organisations need to manage day-to-day operational risks associated with people's actions and behaviours (such as criminal or fraudulent acts, or poor management of finances and credit repayment), a digital bill of rights should afford protection against unfair and/or biased outcomes which limit life opportunities because a person has been profiled on the sum of their acts and/or behaviours, or, more particularly, on the sum of the acts and behaviours of people with similar features and attributes. This affects not just education but employability, creditworthiness, law enforcement, migration and the administration of justice.
Digital Exclusion
There are risks associated with all-pervasive AI adoption without viable non-digital alternatives or options to opt out. A bill of digital rights should address the risk of people being excluded from society and/or of exclusionary practices which mean certain people or people groups get left behind. We must protect the vulnerable and marginalised, recognising that individual categorisation and optimisation (not just social credit scoring) may cause wider injustice to a people group.
Environmental Impact
AI can reduce negative impacts on the environment, but can also contribute to them. This has a second-order impact on the ability of humans to flourish, in particular where global warming affects food and water supplies.
Individual vs Group
As fundamental rights are first and foremost individual rights, this lens could easily overlook the impact of AI on humanity, on human beings in the aggregate, and on society as a whole. Society is more than a mere banding together of individuals and individual rights. Any protections in a bill of digital rights should have due regard to this nuance, which matters most where profiling, optimisation and categorisation of people groups occur. Group harms are therefore likely to occur more regularly than individual harms, yet (as it stands) there is no effective means for those within a group who are impacted and/or influenced by AI and have suffered collective harm to appreciate that harm, or to be aware of others who have been impacted and/or influenced in the same way.
Harms may take time to show
It is important to stress here that harm will not always be tangible and/or visible, but harm will nonetheless have occurred. It may take some time for that harm to become apparent, particularly where it affects groups of people who are unknown to one another. Harm can be psychological, physical and economic; however, quantifying harm done to a society, spread out over time, does not fall easily into these categories, which have a much more individualistic feel to them.