Policy-makers and experts need to tackle AI bias
Policy-makers must engage with experts in machine learning to tackle bias in artificial intelligence, researchers at the Cornell Institute for Public Affairs (CIPA) say.
For an AI system “that determines eligibility for food stamps or advises on criminal sentences, the cost of bias is severe and likely worth the expense of encoding values”, the researchers write.
“Machine learning complicates blame, legal liability and other post hoc remedies because these systems don’t make decisions like traditional software”, the scientists state.
“If policy-makers do not engage in this area, then they are implicitly adopting a utilitarian approach to the results these systems produce.” That disengagement, they argue, is what allows bias to take hold.
One example of such bias is job advertising delivered by AI. Google's ad system was found to show women ads for lower-paying jobs than the openings shown to men.