Autonomous police units could deepen racial inequality

What’s happening? The New York Police Department (NYPD) has cancelled a contract worth around $94,000 with Boston Dynamics for its Spot robot, which the department nicknamed Digidog, after public criticism over the high cost and fears of police militarisation. The 32 kg robot was unveiled in December to “save lives, protect people, and protect officers”, according to the NYPD. NY Mayor Bill de Blasio dubbed it “creepy”, while Congresswoman Alexandria Ocasio-Cortez asked why such “world class” technology wasn’t funnelled toward education, housing or health care. The contract with Boston Dynamics was terminated in April, four months ahead of schedule. (NPR)

Why does this matter? Issues surrounding US police conduct have been thrust into the spotlight after public outcry following the deaths of George Floyd, Breonna Taylor and several other Black Americans at the hands of police officers. Derek Chauvin was found guilty of the murder of George Floyd, prompting the launch of a US Justice Department investigation into Minneapolis policing.

With confidence in the police at a record low in the US, autonomous units could be seen as a potential solution because they remove human involvement. The use of biased datasets to train policing algorithms, however, shows that the human element remains, producing racially skewed predictive outputs that could have severe social repercussions if left unchecked.

Well-documented issue – Prominent figures within AI ethics have voiced their concerns about the application of AI by large organisations. Timnit Gebru was fired from Google after suggesting the company was willing to ignore the full scope of AI’s social flaws. Despite the evidence, real-world applications of policing AI are already in use and producing biased results. To address this issue, alternative algorithmic training has been implemented in some cases, using victim reports instead of arrest data. Bias, however, still remained even when choosing this approach.

Policing examples – Facial-recognition technology in New Jersey misidentified a Black man, leading to his false imprisonment for 10 days. Police, however, continue using AI and have integrated facial recognition with autonomous robots in California to detect blacklisted passersby. The robots’ effectiveness at mundane policing tasks has also been questionable, with one citizen reporting that when she tried to alert a robot to a brawl, it told her to “step away” before launching into song.

Allocation of funds – NYPD signed a $94,200 contract for the robot, prompting debate about the use of public money. The unit was originally conceived for work on industrial sites and oil fields, not police work. Deploying such an expensive device to monitor low-income communities while they are struggling the most during the Covid-19 pandemic will likely fuel calls to defund the police and redirect money into communities.

This article first appeared in our weekly newsletter, Sustt.
