Facebook has patted itself on the back for nuking almost all “hate speech” that supposedly violated its rules. But not only was most content deleted before anyone could flag it, users weren’t even allowed to appeal most deletions.
Unveiling its Community Standards Enforcement Report for the fourth quarter of 2020 on Thursday, Facebook bragged that its expanded use of artificial intelligence had helped it delete almost twice as much “bullying and harassment” content as in the previous quarter – just one of several categories in which removals skyrocketed. Its Instagram subsidiary, meanwhile, dramatically expanded its ability to catch suicide and self-injury related content.
Facebook axed 6.3 million bullying items, nearly double last quarter’s 3.5 million, assisted in large part by its AI technology. Expanded translation ability helped it remove 26.9 million pieces of “hate speech” content, up from 22.1 million in the third quarter. And Instagram nabbed 6.6 million pieces of hate speech while more than doubling the amount of suicide and self-harm content it removed – from 1.3 million to 3.4 million this quarter.
Despite the numbers growing across the board, there’s no proof the increased deletions translated to a better user experience. The vast majority of content removal was performed automatically, by the platform’s systems, before a single user could set eyes on it – let alone report it as violating some rule.
Nevertheless, the social media behemoth bragged that 97 percent of “hate speech” removed from the platform was deleted by AI before a single human could see and/or be offended by it – a three-percentage-point increase over the previous quarter and a whopping 16.5-point jump since the last quarter of 2019.
It’s hard to tell how users felt about the platform’s stepped-up content-babysitting. While Facebook normally allows users to appeal its decisions to delete content, “a temporary reduction in [its] review capacity as a result of Covid-19” meant users’ appeal options were severely curtailed, the platform admitted.
Nevertheless, Facebook Chief Technology Officer Mike Schroepfer could scarcely contain his excitement over the rate of “improvement” of the company’s AI hate speech detection tools, noting that in the last quarter of 2017, just 24 percent of such content had been removed without human intervention, and likening Facebook’s anti-bullying tools to life-saving advances in science and technology.
“When you look at the times when new technology has helped address the hardest problems facing our world, from curing diseases to producing safer cars, progress happened incrementally over decades, as technologies were refined and improved,” Schroepfer gushed, boasting that he hears “the same story of steady, continuous improvements these days when I talk to the engineers building AI systems that can prevent hate speech and other unwanted content from spreading across the internet.”
However, as the many dissatisfied Facebook users who have either been booted off the platform or migrated of their own free will have made clear, “hate speech” is in the eye of the beholder. The platform lost two million users in North America alone in the third quarter of 2020, dropping a further million for the fourth quarter.
That didn’t hurt its earnings, but the tech giant has made a number of user-alienating decisions recently that are driving even long-time users to its competitors. The recent ultimatum to users of encrypted messenger subsidiary WhatsApp, essentially demanding users share their private data with Facebook or delete the app, is believed to have swollen the ranks of encrypted messenger app Telegram, which rose in popularity to become the most-downloaded non-gaming app for all smartphones in January.