MIT toolbox to make AI less biased

Researchers at MIT say they have found a way to overcome biases in AI systems. The scientists have built a toolbox that helps machine-learning engineers ask the right questions about their data in order to diagnose why their systems make unfair decisions.

For the research, the scientists examined an AI income-prediction system and found that it was twice as likely to misclassify female employees as low-income and male employees as high-income.
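A disparity like this is typically found by comparing error rates conditioned on a group attribute. The article does not give the study's code, so the following is a minimal, hypothetical sketch with made-up toy data showing how such a per-group comparison works:

```python
# Hypothetical sketch: measuring per-group misclassification rates for a
# binary income classifier. The labels, predictions, and group values below
# are invented for illustration; they are not the MIT study's data.

def misclassification_rate(y_true, y_pred, groups, group):
    """Fraction of examples belonging to `group` that the model got wrong."""
    errors = total = 0
    for t, p, g in zip(y_true, y_pred, groups):
        if g != group:
            continue
        total += 1
        if t != p:
            errors += 1
    return errors / total if total else 0.0

# Toy labels: 1 = high income, 0 = low income
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [0, 0, 0, 0, 1, 1, 1, 0]  # the model's (toy) guesses
groups = ["F", "F", "F", "F", "M", "M", "M", "M"]

rate_f = misclassification_rate(y_true, y_pred, groups, "F")  # 0.5
rate_m = misclassification_rate(y_true, y_pred, groups, "M")  # 0.25
```

In this toy example the error rate for the "F" group is exactly twice that of the "M" group, mirroring the kind of disparity the researchers reported.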

The researchers were able to identify potential causes of the bias and quantify each factor's individual impact on the data used to train the AI system. By collecting more data from underrepresented groups, the scientists were able to reduce the bias.
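The article does not detail how the extra data was gathered; in practice, the researchers' point is that rebalancing group representation in the training set reduces the disparity. A hypothetical sketch that simulates this by oversampling the minority group's rows (real work would collect genuinely new examples rather than duplicate existing ones):

```python
# Hypothetical sketch of the mitigation the article describes: giving
# underrepresented groups more weight in training. Here we simulate
# "collecting more data" by resampling the minority group's rows until
# all groups are equally represented; data and field names are illustrative.
import random

def oversample_minority(rows, group_key):
    """Return a training set in which every group matches the largest group's size."""
    by_group = {}
    for row in rows:
        by_group.setdefault(row[group_key], []).append(row)
    target = max(len(members) for members in by_group.values())
    rng = random.Random(0)  # fixed seed so the sketch is reproducible
    balanced = []
    for members in by_group.values():
        balanced.extend(members)
        # top up smaller groups with resampled copies of their own rows
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

# Toy training set: 2 rows from group "F", 8 rows from group "M"
train = [{"group": "F", "income": 1}] * 2 + [{"group": "M", "income": 1}] * 8
balanced = oversample_minority(train, "group")
counts = {g: sum(r["group"] == g for r in balanced) for g in ("F", "M")}  # both 8
```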

Recently, several AI systems have displayed serious instances of bias, discriminating against minority groups in the justice system, in banking, and in insurance. MIT's approach is the latest attempt to tackle such bias.

Two months ago IBM launched a bias-detection cloud service for AI, and Google has introduced a tool for visualizing AI bias, though neither guarantees completely bias-free AI.