AI can inexplicably detect race when we don’t want it to

What’s happening? AI models used to examine x-rays can identify a patient’s race, but how the algorithms do so cannot be determined, according to a multi-institution study. Researchers tested the models on five types of x-ray images used in radiology research, including mammograms. Although racial identity is not a biological category, the algorithms detected patients’ racial groups with an accuracy of between 80% and 99%. Anatomical and visual features, age, sex, and specific diagnoses were ruled out as classifying factors. Further research into the issue is required, as is educating patients, the authors said. Previous studies have highlighted medical algorithm bias in care delivery, among other areas. (Wired)

But… how? The system discussed above is a machine learning model which – in a similar way to humans – ingests data and forms an understanding of the connections between different pieces of information.

Somehow – in a way we can’t yet see or explain – the x-rays these models are fed contain information that allows the AI to make connections and accurately guess patients’ race.
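To make this concrete, the sketch below shows, in Python, the kind of probing experiment the study describes: an off-the-shelf image classifier trained to predict self-reported race labels from x-ray images, with nothing race-specific in its design. The folder layout, architecture and training settings are illustrative assumptions, not the study’s actual pipeline.

```python
# Illustrative probe: can a generic image classifier predict self-reported
# race from x-rays? The "xrays/train" and "xrays/val" folders (one
# subfolder per label) are hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # x-rays are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("xrays/train", transform=transform)
val_set = datasets.ImageFolder("xrays/val", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32)

# A stock ResNet with a fresh classification head -- nothing race-specific.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):  # a short run is enough for a probe
    model.train()
    for images, labels in train_loader:
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimiser.step()

# Held-out accuracy far above chance would mean the images themselves
# carry the signal -- which is what the study reports.
model.eval()
correct = total = 0
with torch.no_grad():
    for images, labels in val_loader:
        correct += (model(images).argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
print(f"validation accuracy: {correct / total:.1%}")
```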

Is that really such a bad thing? On its own, the ability to identify race through machine learning wouldn’t be particularly noteworthy. However, this model wasn’t built to determine race; it is used to identify potentially dangerous health issues (that have nothing to do with race!). What makes this story so concerning is the question of what happens next: now that the model has this racial information, how might that knowledge begin to distort its diagnoses?

If a model begins to notice race where it isn’t necessary, it might begin to make recommendations based on the past cases it has learned from – cases in which racial bias has very much played a part in how people were diagnosed. Allowing it to do that would not only perpetuate old biases and disparities but could actually make them more pronounced.
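A toy simulation makes the mechanism visible. In the hedged sketch below, two groups have identical true illness rates, but the historical records under-diagnose one group; a model trained on those records, with group membership detectable from its inputs (as race apparently is from x-rays), reproduces the gap. Every number here is invented purely for illustration.

```python
# Toy simulation, not the study's model: biased historical labels produce
# a model that under-flags one group despite identical true illness rates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)       # demographic group, visible to the model
severity = rng.normal(size=n)       # genuine clinical signal
truly_ill = severity > 0.5          # same threshold for both groups

# Biased records: group 1's illness was written down only 60% of the time.
recorded_ill = truly_ill & (rng.random(n) < np.where(group == 1, 0.6, 1.0))

X = np.column_stack([severity, group])
model = LogisticRegression().fit(X, recorded_ill)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: truly ill {truly_ill[group == g].mean():.1%}, "
          f"flagged by model {pred[group == g].mean():.1%}")
# The model flags far fewer group-1 patients, echoing the biased records.
```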

Machine learning algorithms have already been shown to be fallible in this area. In 2019, an algorithm widely used to prioritise care for seriously ill patients was shown to disadvantage Black patients, while in 2020 an algorithm consistently assigned lower risk scores to Black patients with kidney disease, downplaying the seriousness of their condition. Another, trained to flag pneumonia and other chest conditions, performed differently for people of different sexes, ages, races and types of medical insurance.

Lateral thought – Health care and medicine aren’t the only areas where we need to be vigilant about machine learning making connections to irrelevant demographic data. AI models are increasingly replacing human judgement in areas including insurance pricing, credit checks, criminal sentencing and risk assessment. If such models are picking up on personal data without being asked to, it could compromise analyses that are meant to be objective.

For example, an algorithm used to run credit checks for a loan might determine that an applicant is Black and decide, based on a long history of banks denying Black people financial services, to deny them that loan. If a model were set to assess climate risk for a neighbourhood, would it first try to determine whether its residents were high income or low income, and let that inform its response? Would an algorithm directing medical services prioritise wealthy white people because it worked out they were likely to live longer?
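One practical response is to audit such systems from the outside. Below is a minimal sketch of a common check: comparing favourable-outcome rates across demographic groups and computing their ratio. The decision data are made up, and the 0.8 flag threshold is one widely used convention (the “four-fifths rule”), not a universal standard.

```python
# Minimal fairness audit: ratio of favourable-outcome rates across groups.
import numpy as np

def disparate_impact(decisions: np.ndarray, group: np.ndarray) -> float:
    """Lowest group approval rate divided by the highest."""
    rates = [decisions[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

# Hypothetical loan decisions (1 = approved) for two groups of applicants.
decisions = np.array([1, 1, 0, 1, 0, 1, 0, 0, 0, 1])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

ratio = disparate_impact(decisions, group)
flag = "  (below the common 0.8 threshold)" if ratio < 0.8 else ""
print(f"disparate impact ratio: {ratio:.2f}{flag}")
```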
