Facebook keeps bungling its efforts to stamp out hate speech – why?

What’s happening? Facebook executives rejected a recommendation from internal researchers to overhaul its content moderation software, over concerns that it could protect some vulnerable groups above others and cause a backlash from “conservative partners”, according to company documents. Facebook has previously reported that its AI moderation technology removes around 80% of hate content, though closer inspection has shown this figure largely reflects the AI’s proficiency at detecting hate speech against white people, rather than hate directed at the five minority groups researchers identified as most threatened on the platform. (Washington Post)

Why does this matter? Ineffective content moderation has plagued Facebook for many years, yet efforts to address it continually falter.

In 2018, the company was forced to acknowledge the “determining role” it had played in exacerbating violence in Myanmar after the situation was condemned by the UN. A key shortcoming highlighted at the time was that, despite Facebook’s commitments to removing hate speech, it had only a handful of Burmese-speaking moderators reviewing content. Three years on, the company is still grappling with similar issues.

Facebook was also said to have been instrumental in brewing the misinformation and hateful rhetoric that spurred thousands to march on the Capitol building in the 2021 riots. Documents leaked by former Facebook employee Frances Haugen in October show a clear internal awareness of the rising danger these users posed, yet the decision was taken to leave the content on the platform and hold back mitigation tactics – despite the content being in clear violation of the platform’s guidelines.

The importance of tackling hate speech – While words themselves may not pose a physical threat, the people saying or writing them can. There are a few ways hate speech can translate into material danger:

  1. It can isolate individuals who feel targeted, potentially causing serious damage to mental health.
  2. It can incite violence – as it did in the Capitol Hill riots, and as it is now doing in Ethiopia’s ongoing conflict.
  3. It can be strategically deployed to silence minority groups – as Myanmar’s military was able to do while committing genocide against Rohingya Muslims.
  4. It can manipulate readers with false information, fostering more extremist beliefs. Terrorist groups, anti-vaxxers and climate deniers are all known to use social media to attract followers.

Why has this problem persisted? In short, content moderation is hard work, and those doing it often aren’t supported. Earlier this year, some of Facebook’s moderators – contractors to whom the company outsources the work – broke non-disclosure agreements to flag the impossible job they face, saying they were charged with watching violent videos and reading graphic content without emotional support or training. Human moderation has also proved incapable of keeping pace with the billions of posts made to the platform daily.

Additionally, large technology platforms often escape checks and balances on their development because regulators lack the technical expertise to scrutinise them. While some countries, such as Australia, Singapore and Canada, have attempted to curb their influence, the effect has been minimal.

What’s next? Lina Khan, the chairperson of the US Federal Trade Commission, is expected to mount a crackdown on social media firms, pushing for greater accountability and challenging their monopoly power.

Additionally, in November, Carnegie UK Trust released a series of recommendations for companies to ensure best practice and equity for all of their users, including algorithm safety tests, non-English language moderators and protections for minorities. Another important step is for companies to listen to their employees when they voice concerns, and work with them to develop strategy on ethical issues informed by diverse perspectives.
