Self-determination

AI Now: governments should not use black box AI

Governments should not use black box algorithms until the decisions made by these AI systems are well understood, according to a group of researchers. “Core public agencies, such as those responsible for criminal justice, healthcare, welfare, and education should no longer use ‘black box’ AI and algorithmic systems”, research group AI Now writes.

“The use of such systems by public agencies raises serious due process concerns, and at a minimum they should be available for public auditing, testing, and review, and subject to accountability standards”, the AI Now researchers from New York University, Google Open Research and Microsoft Research state.

AI systems are already being used in areas such as law, finance, policing, and the workplace, where they predict everything from our likelihood of committing a crime to our fitness for a job. In the field of labor, “new systems make promises of flexibility and efficiency, but they also intensify the surveillance of workers, who often do not know when and how they are being tracked and evaluated, or why they are hired or fired”.

Another example of current AI deployment is police body camera footage, which is being used to train machine vision algorithms for law enforcement, raising privacy and accountability concerns. According to the researchers, “training data, algorithms, and other design choices that shape AI systems may reflect and amplify existing cultural assumptions and inequalities”.
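The mechanism the researchers describe can be seen in a toy setting: if historical decisions encoded in a training set favored one group independently of merit, a model fit to those labels reproduces the skew. Below is a minimal, hypothetical sketch in Python; all data, feature names, and numbers are invented for illustration and do not represent AI Now's analysis or any real system:

```python
# Hypothetical sketch: skewed training labels are reflected in a
# model's decisions. Everything here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "historical" data: a binary group attribute and a skill score.
group = rng.integers(0, 2, size=n)
skill = rng.normal(0, 1, size=n)

# Assume past hiring favored group 0 regardless of skill, encoding a
# historical inequality into the labels the model will learn from.
hired = (skill + 0.8 * (group == 0) + rng.normal(0, 1, size=n)) > 0.5

# A model trained on these labels learns the skew rather than correcting it.
X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g in (0, 1):
    print(f"predicted positive rate, group {g}: {pred[group == g].mean():.2f}")
```

Running this prints a noticeably higher positive rate for group 0, even though skill was drawn from the same distribution for both groups: the model treats the historical bias as signal.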

The AI Now researchers conclude: “Strong standards for auditing and understanding the use of AI systems ‘in the wild’ are urgently needed”.
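To make the call for auditing concrete, here is a hypothetical sketch of one statistic an external review of a system's decision log could compute. The function, the example data, and the threshold are assumptions for illustration, not standards proposed in the report; the 0.8 cutoff borrows the “four-fifths rule” used as a rough screen in US employment discrimination guidance:

```python
# Hypothetical audit check: compare positive-decision rates across groups
# in a logged set of automated decisions (a demographic parity screen).
import numpy as np

def disparate_impact_ratio(decisions: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the lowest to the highest positive-decision rate across groups."""
    rates = [decisions[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

# Example: an invented log of 8 decisions from two groups.
decisions = np.array([1, 1, 0, 1, 0, 0, 0, 1])
group     = np.array([0, 0, 0, 0, 1, 1, 1, 1])

ratio = disparate_impact_ratio(decisions, group)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.33 for this example
if ratio < 0.8:  # illustrative threshold, after the "four-fifths rule"
    print("flag for review: decision rates differ substantially across groups")
```

A check like this only screens outcomes; it says nothing about why the rates differ, which is precisely why the researchers argue that the systems themselves must be open to testing and review rather than remaining black boxes.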