Curation ESG
December 3, 2021
Sara Trett
What’s happening? Facebook executives rejected a recommendation from internal researchers to overhaul its content moderation software over concerns that it could protect some vulnerable groups above others and provoke a backlash from “conservative partners”, according to company documents. Facebook has previously reported that its AI moderation technology removes around 80% of hate content, though closer inspection has shown that this figure largely reflects the AI’s proficiency at detecting hate speech against white people, rather than hate directed towards the five minority groups researchers identified as most threatened on the platform. (Washington Post)
Why does this matter? The lack of effective content moderation on Facebook has plagued the platform for many years. Despite this, efforts to address it continue to falter.
In 2018, the company was forced to acknowledge the “determining role” it played in exacerbating violence in Myanmar after the situation was condemned by the UN. A key shortcoming highlighted at the time was that, despite Facebook’s commitments to removing hate speech, it only had a few individuals who spoke Burmese moderating comments. Three years on, the company is still grappling with similar issues.
Facebook was also said to have played an instrumental role in fomenting the misinformation and hate speech that spurred thousands to march on the Capitol building in January 2021. Documents leaked by former Facebook employee Frances Haugen in October showed clear internal awareness of the rising danger these users posed, yet the company decided to leave the content on the platform and not deploy mitigation tactics, despite it being in clear violation of the platform’s guidelines.
The importance of tackling hate speech – While words themselves may not pose a physical threat, the people saying or writing them can. Hate speech can translate into material danger, as the events in Myanmar and at the Capitol illustrate.
Why has this problem persisted? In short, content moderation is hard work, and those doing it often aren’t supported. Earlier this year, some of Facebook’s moderators – contractors the company outsources the work to – broke non-disclosure agreements to flag the impossible job they face, saying they were charged with watching violent videos and reading graphic content without emotional support or training. Human moderators alone have also proved unable to keep pace with the billions of posts made to the platform daily.
Additionally, large technology platforms often escape checks and balances on their development because regulators lack the technical expertise and understanding to impose them. While some countries, such as Australia, Singapore and Canada, have attempted to curb the platforms’ influence, the effect has been minimal.
What’s next? Lina Khan, chair of the US Federal Trade Commission, is expected to mount a crackdown on social media firms aimed at increasing accountability and curbing monopolistic behaviour.
Additionally, in November, Carnegie UK Trust released a series of recommendations for companies to ensure best practice and equity for all of their users, including algorithm safety tests, non-English language moderators and protections for minorities. Another important step is for companies to listen to their employees when they voice concerns, and work with them to develop strategy on ethical issues informed by diverse perspectives.