Self-determination

Stanford: AI is raising ethical questions for doctors

Researchers from the Stanford University School of Medicine say artificial intelligence (AI) is raising ethical questions that healthcare providers must address before they begin working with the technology.

“Remaining ignorant about the construction of machine-learning systems or allowing them to be constructed as black boxes could lead to ethically problematic outcomes,” the scientists write in the New England Journal of Medicine.

“What if the algorithm is designed around the goal of saving money?” they ask. “What if different treatment decisions about patients are made depending on insurance status or their ability to pay?”

Medical AI is mostly used to help doctors reach sound diagnoses and treatment plans. The Stanford scientists warn, however, that biases could inadvertently be introduced into the algorithms doctors rely on, as has already happened in other sectors.