
Wide support for new set of AI principles

At the beginning of February, researchers and technology leaders agreed on a set of 23 principles to guide the development of artificial intelligence in an ethical and safe direction. Since then, the principles have been endorsed by over two thousand experts in the field.

The new guidelines, dubbed the Asilomar AI Principles, were developed after the Future of Life Institute brought dozens of experts together. Attendees of the conference included engineers, programmers, roboticists, legal scholars, physicists, economists, philosophers and ethicists.

Signatories include a wide range of experts: Demis Hassabis, CEO of DeepMind; Ray Kurzweil, Research Director at Google; Elon Musk, CEO of Tesla; Stuart Russell, Professor of Computer Science; Toby Walsh, Professor of AI; and British scientist Stephen Hawking.

Beneficial to humans
The principles are an important step towards ensuring AI remains beneficial to human beings and stays aligned with human values. A separate effort to draft AI guidelines was launched late last year by the IEEE Standards Association. The first Asilomar AI Principle holds that the “goal of AI research should be to create not undirected intelligence, but beneficial intelligence”.

The principles also state that “any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority” and that “humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives”.

Other principles on the list state that any autonomous AI system capable of self-improvement must be strictly monitored. The Future of Life Institute stresses that the principles are only a first step towards safe AI. “The Principles represent the beginning of a conversation, and now that the conversation is underway, we need to follow up with broad discussion about each individual principle,” the institute states.


Image: FLI