AI can inexplicably detect race when we don’t want it to

What’s happening? AI models used to examine x-rays can identify a patient’s race, but how the algorithms do so cannot be determined, according to a multi-institution study. Researchers tested the models on five types of x-ray images used in radiology research, including mammograms. Despite racial identity not being a biological category, the algorithms detected racial groups with an accuracy of between 80% and 99%. Anatomical or visual features, age, sex, and specific diagnoses were ruled out as classifying factors. Further research into the issue is required, as is educating patients, the authors said. Previous studies have highlighted bias in medical algorithms in care delivery, among other areas. (Wired)

But… how? The systems discussed above are machine learning models which, in a similar way to humans, ingest data and form an understanding of the connections between different pieces of information.

Somehow, in a way we can’t quite see or understand, the x-rays these models are fed carry information that allows the AI to make connections and accurately guess the race of patients.
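To make this concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn and entirely synthetic stand-in data, not the study’s actual method or images) of the kind of probe researchers run: train a simple classifier to predict a protected attribute directly from image pixels and check whether it beats chance.

```python
# Hypothetical audit sketch: can a protected attribute be read off the images?
# Data is synthetic; real x-rays presumably carry far subtler, distributed cues.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "x-rays": 1,000 flattened 32x32 images plus a faint, diffuse pattern
# correlated with a hidden attribute -- a stand-in for whatever cues real scans carry.
n, pixels = 1000, 32 * 32
attribute = rng.integers(0, 2, size=n)      # hidden attribute (0/1), not a diagnostic label
signal = rng.normal(0, 0.1, size=pixels)    # weak pattern spread across all pixels
images = rng.normal(0, 1, size=(n, pixels)) + np.outer(attribute, signal)

X_train, X_test, y_train, y_test = train_test_split(
    images, attribute, test_size=0.3, random_state=0
)

# If even a plain linear model recovers the attribute well above chance (AUC ~0.5),
# the images carry that information, whether or not we can point to where it lives.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, probe.predict_proba(X_test)[:, 1])
print(f"Protected attribute recoverable from pixels, AUC = {auc:.2f}")
```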

Is that really such a bad thing? On its own, the ability to identify race through machine learning wouldn’t be particularly noteworthy. However, this model wasn’t built to determine race; it is used to identify potentially dangerous health issues (that have nothing to do with race!). What makes this story so concerning is what happens next: now that the model has this racial information, how could that knowledge begin to distort its diagnoses?

If a model begins to notice race (where it isn’t necessary), it may base its recommendations on the past cases it has learned from, cases in which racial bias has very much played a part in how people were diagnosed. Allowing it to do that would not only perpetuate old biases and disparities, it might actually make them more pronounced.
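The sketch below is a deliberately simplified, hypothetical illustration of that feedback loop (synthetic data, made-up variable names, no real system): a model is trained on historical “priority care” decisions that applied a harsher threshold to one group, and it then scores two new patients with identical clinical need differently purely because of group membership.

```python
# Hypothetical sketch of bias propagation: biased historical decisions in, biased scores out.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

group = rng.integers(0, 2, size=n)   # demographic group (0 or 1)
need = rng.normal(0, 1, size=n)      # true clinical need, identically distributed in both groups

# Historical labels: care was prioritised by need, but group 1 faced a higher
# (biased) threshold, so equally sick group-1 patients were prioritised less often.
threshold = np.where(group == 1, 0.5, 0.0)
prioritised = (need > threshold).astype(int)

# The training features happen to encode group membership (a proxy nobody asked for).
X = np.column_stack([need, group])
model = LogisticRegression(max_iter=1000).fit(X, prioritised)

# Score two new patients with identical need who differ only in group.
same_need = 0.3
for g in (0, 1):
    p = model.predict_proba([[same_need, g]])[0, 1]
    print(f"group {g}, identical clinical need -> probability of being prioritised: {p:.2f}")
```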

Machine learning algorithms have already been shown to be fallible in this area. In 2019, an algorithm widely used to prioritise care for seriously ill patients was shown to disadvantage Black patients, while in 2020 an algorithm consistently assigned lower risk scores to Black patients with kidney disease, downplaying the seriousness of their condition. Another, trained to flag pneumonia and other chest conditions, performed differently for people of different sexes, ages, races, and types of medical insurance.
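Gaps like these are usually surfaced by breaking a model’s error rates out by group rather than quoting a single overall accuracy figure. The short sketch below illustrates that kind of audit with simulated predictions, not data from any of the studies cited above.

```python
# Hypothetical subgroup audit: compare how often true cases are missed in each group.
import numpy as np

rng = np.random.default_rng(2)
n = 2000
group = rng.choice(["A", "B"], size=n)
has_condition = rng.integers(0, 2, size=n)

# Simulate a flagging model that misses the condition more often in group B.
miss_rate = np.where(group == "B", 0.30, 0.10)
flagged = np.where(
    has_condition == 1,
    rng.random(n) > miss_rate,    # true cases: flagged unless missed
    rng.random(n) < 0.05,         # healthy cases: small false-positive rate
).astype(int)

# Under-diagnosis rate (missed true cases), reported per group.
for g in ("A", "B"):
    mask = (group == g) & (has_condition == 1)
    missed = 1 - flagged[mask].mean()
    print(f"group {g}: missed {missed:.0%} of true cases")
```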

Lateral thought – Health care and medicine aren’t the only areas where we need to be vigilant about machine learning making connections to irrelevant demographic data. AI models are increasingly replacing human judgement in industries including insurance pricing, credit checks, prison sentencing, risk assessment and many more. If such models are picking up on personal data without being asked to, it could compromise analyses that are meant to be objective.

For example, an algorithm used to run credit checks on loan applicants might determine that an applicant is Black and decide (based on a long history of banks denying Black people financial services) to deny them the loan. If a model were set to assess climate risk for a neighbourhood, would it first try to determine whether its residents were high or low income, and let that inform its assessment? Would an algorithm directing medical services prioritise wealthy white people because it worked out they were likely to live longer?
