Google fails on its AI ethics promise
Google has failed to keep an ethics promise on artificial intelligence. The technology company agreed to set up an ethics and safety board when it acquired AI company DeepMind three years ago. The board was to ensure the AI technology would not be abused.
DeepMind is a prominent AI player. Its AI system AlphaGo hit the headlines last year when it beat top player Lee Sedol at the ancient Chinese game of Go. Go is considered one of the most complex games in the world.
One condition DeepMind’s founders set for the acquisition was that Google create an AI ethics board. As part of the deal they also stipulated that “no technology coming out of DeepMind will be used for military or intelligence purposes.”
In recent years DeepMind has publicly confirmed setting up the ethics and safety board, arguing that it will be one of the ways in which the company tries to lead on ethical issues in AI.
Still, DeepMind refuses to say who sits on the board. British newspaper the Guardian reports it “has asked DeepMind and Google multiple times since the acquisition on 26 January 2014 for transparency around the board”.
In January last year the paper received its only answer from DeepMind: “There hasn’t really been anything major yet that would warrant announcing in any way. But in the future we may well talk about those things more publicly,” the Guardian quotes DeepMind chief executive Demis Hassabis as saying.
The ethics board is needed for more than one reason. Earlier this month DeepMind revealed that AlphaGo had secretly been taking on more of the world’s best Go players, and beating them. This kind of secrecy does not build trust in artificial intelligence.