When things go awry, finding humans to take responsibility can be difficult. In the UK this month, a black former Uber driver whose account was deactivated after automated facial scanning software repeatedly failed to recognise him launched a claim at an employment tribunal.
The first task of an AI Bill of Rights, then, is to strengthen existing protections for an AI world. It should apply to algorithmic decision-making in legal or life-changing areas. And it should extend to data and privacy, enshrining individuals' rights to know what data are held on them, how the information is being used, and to transfer it between providers.
AI decisions should not emerge from an unfathomable black box, but be "explainable". A bill ought to guarantee an individual's right to know when an algorithm is taking decisions about them, how it works, and what data are being used. The right to challenge decisions and obtain remedies should also be guaranteed. And some human or corporate responsibility needs to be maintained, with managers as accountable for errors or flawed decisions by the systems they oversee as for those by human staff.
But AI gives unscrupulous governments new capabilities to snoop on, control and potentially coerce their citizens. A bill should set out what technologies are permissible or not, and ground rules for their use.
America's Bill of Rights initiative lags behind what Europe is doing. The EU's General Data Protection Regulation already grants citizens a right not to be subject, without consent, to decisions "based solely on automated processing", though this right is not widely enforced. A proposed AI Act outlines a risk-based hierarchy, with technologies subject to safeguards proportionate to their risk. Some, such as "social scoring" — a nod to China's social credit system, which aims to assess behaviour and trustworthiness — would be banned outright.
The Biden administration should take up the EU's invitation to work together on AI issues. But just as the UN's 1948 Universal Declaration of Human Rights set out fundamental human rights to be universally protected, so a global AI charter is merited. Some countries would choose to go further; others, like China, might decline to sign up. But as in the cold war, superior protection for human rights, now against intrusive AI, could become a point of moral differentiation, and leverage, for democracies.
- Financial Times