Facebook's Transparency around Content Moderation is Deceptive
Facebook (like every other platform) loves to tout its content moderation and content removal numbers. Here's just a sample from the last few months:
- September 13, 2020: Facebook says it will remove false claims about who started wildfires in Oregon
- August 19, 2020: Facebook Removes 790 QAnon Groups to Fight Conspiracy Theory
- August 11, 2020: Facebook says it has taken down 7 million posts for spreading coronavirus misinformation
- August 11, 2020: Facebook Pulls 22.5 Million Hate Speech Posts in Quarter
- June 30, 2020: The company removed 220 accounts, 95 Instagram accounts, 28 pages and 106 groups in the boogaloo network
- May 12, 2020: Facebook removes 10 million posts for hate speech as tensions rise on the social network
In theory, these numbers sound great. Who wouldn't want millions of hate speech posts removed? Who would be opposed to removing dangerous terrorists from the platform?
However, this is deception and manipulation. This is Facebook PR telling you that only Facebook can solve the problem of content moderation because of its scale (without acknowledging that the scale is what created these problems in the first place).
What never gets answered in these press releases is what reach these posts had before they were taken down. You may have heard that Facebook removed the Plandemic propaganda video, but what you may not have heard is that the removal only happened after the video had been viewed millions of times. In other words, the damage was already done.
Facebook wants you to know that it took down millions of posts spreading coronavirus misinformation. But what it won’t tell you is how many people saw those posts, and how many people believed them enough to share them. Facebook won’t tell you how many people were harmed in real life due to its inaction.
Facebook wants you to know that it removed hundreds of white nationalist terrorists from its platform. But it won't tell you how many other people had already been manipulated by those accounts. It won't tell you what their "reach" was. It won't tell you how many people were brainwashed into taking up arms before those accounts were banned.
The other thing Facebook refuses to do is correct the record after taking punitive action against its users. If you see a false coronavirus post that Facebook later deletes, you won't see a correction to the post you already saw. The net effect is that the platform lets misinformation spread farther and faster than it could anywhere else, eventually takes it down (sometimes), and never corrects the record. The damage is done! [1] [2]
So what's the solution? Facebook will have you believe that its elusive AI-based content moderation will solve the problem of policing the platform. The fact of the matter is that no algorithm is going to work. Even with a 99% accuracy rate, which may never happen, we're talking about millions of dangerous users and posts being left up for hours, or days, or forever.
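To put that 99% figure in perspective, here is a rough back-of-the-envelope sketch. The only input taken from Facebook is the 22.5 million quarterly hate speech removals cited above; treating that number as what a hypothetical 99%-accurate classifier catches is an illustrative assumption, not anything Facebook has published.

```python
# Back-of-envelope: what a "99% accurate" moderation system still misses.
# Illustrative assumptions, not Facebook-published figures:
#   - the 22.5M quarterly hate-speech removals cited above are treated as
#     the true positives of a hypothetical classifier with 99% recall.

removed_per_quarter = 22_500_000   # hate-speech posts removed in one quarter
recall = 0.99                      # hypothetical "99% accuracy", read as recall

# If 22.5M is 99% of the violating posts, estimate the total and the misses.
total_violating = removed_per_quarter / recall
missed_per_quarter = total_violating - removed_per_quarter
missed_per_day = missed_per_quarter / 90  # roughly 90 days in a quarter

print(f"Estimated violating posts per quarter: {total_violating:,.0f}")    # ~22,727,000
print(f"Left up per quarter:                   {missed_per_quarter:,.0f}")  # ~227,000
print(f"Left up per day (hate speech alone):   {missed_per_day:,.0f}")      # ~2,500
```

Even under that generous assumption, roughly a quarter of a million violating posts slip through every quarter in a single policy category, before counting coronavirus misinformation, QAnon, boogaloo, and everything else. Summed across categories and across a year, that is easily millions of posts left up.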
No content moderation system is going to be perfect. Humans won't be perfect, and AI won't be perfect. Short of shutting the platform down entirely (which could be a net positive for humanity), what may work is limiting the reach of all content, and especially dangerous content, across the network.
Facebook's scale is the problem. It's much easier for a teacher to manage a classroom of a dozen rowdy kids than a lecture hall of a thousand. Facebook has far too much power over far too much of the world's population, and a serious rethinking of the antitrust framework may be what breaks that hold.
[1]: Facebook will show fact-check notices next to some content, but the fact-check policies themselves are influenced by the company's business interests.
[2]: Facebook must send notifications correcting the record to users who have engaged in any form with misinformation or disinformation.