Possible Reasons Why YouTube Has Given Up Trying To Police 2020 Election Misinfo
Judging by the number of very angry press releases that landed in my inbox this past Friday, you’d think that YouTube had decided to personally burn down democracy. You see, that day the company announced an update to its approach to moderating election misinformation, effectively saying that it would no longer try to police most such misinformation regarding the legitimacy of the 2020 election:
We first instituted a provision of our elections misinformation policy focused on the integrity of past US Presidential elections in December 2020, once the states’ safe harbor date for certification had passed. Two years, tens of thousands of video removals, and one election cycle later, we recognized it was time to reevaluate the effects of this policy in today’s changed landscape. In the current environment, we find that while removing this content does curb some misinformation, it could also have the unintended effect of curtailing political speech without meaningfully reducing the risk of violence or other real-world harm. With that in mind, and with 2024 campaigns well underway, we will stop removing content that advances false claims that widespread fraud, errors, or glitches occurred in the 2020 and other past US Presidential elections. This goes into effect today, Friday, June 2. As with any update to our policies, we carefully deliberated this change.
The company insists that its overall election misinfo policies remain in place, including the more direct forms of dealing with misinformation, such as removing content that tries to trick people into not voting:
All of our election misinformation policies remain in place, including those that disallow content aiming to mislead voters about the time, place, means, or eligibility requirements for voting; false claims that could materially discourage voting, including those disputing the validity of voting by mail; and content that encourages others to interfere with democratic processes.
The company seems to be trying to walk a fine line here, and it's unclear whether it will work. But in talking this over with a few people, I came up with a few reasons why YouTube may have gone down this path, and it seemed worth discussing those possibilities:
- Realizing the moderation had gone too far. Basically, a version of what the company was saying publicly. They realized that trying to enforce a ban against 2020 election misinfo was, in fact, catching too much legitimate debate. While many are dismissing this, it seems like a very real possibility. Remember, content moderation at scale is impossible to do well, and it frequently involves mistakes. And those mistakes seem even more likely to occur with video, where legitimate political discourse can be mistaken for disinformation and removed. This could include things like legitimate discussions of the problems with electronic voting machines, or questions about building more resilient election systems, which could be accidentally flagged as disinfo.
- Realizing that removing false claims wasn’t making a difference. This is something of a corollary to the first item, and is hinted at in the statement above. Unfortunately, this remains a very under-studied area of content moderation (there are some studies, but much more research is needed): how effective bans and removals actually are at stopping the spread of malicious disinformation. As we’ve discussed in a somewhat different context, it’s really unclear that online disinformation is actually as powerful as some make it out to be. And if removing that information is not having much of an impact, then it may not be worth the overall effort.
- The world has moved on. To me, this seems like the most likely actual reason. Most folks in the US have basically decided to believe what they believe. That could be that (as all of the actual evidence shows) the 2020 election was perfectly fair and Joe Biden was the rightful winner, or (as no actual evidence supports) that the whole thing was “rigged” and Trump should have won. No one’s changing their mind at this point, and no YouTube video is going to convince people one way or the other. And, at this point, this particular issue is so far in the rearview mirror that the cost of continuing to monitor for this bit of misinfo just isn’t worth it, given the lack of any benefit or movement in people’s beliefs.
- YouTube is worried about a Republican government in 2025. This is the cynical take. Since 2020 election denialism is now a key plank of the GOP platform, the company may be trying to “play nice” with the disinformation-peddling part of the GOP (which has moved from the fringe to the mainstream), having decided that this is a more defensible position ahead of the inevitable hearings/bad legislation/etc.
In the end, it’s likely to be some combination of all four of those, and even the people within YouTube may not agree on which one is the “real” reason for doing this.
But it does strike me that the out-and-out freakout among some, claiming that this proves the world is ending, may not be accurate. I’m all for companies deciding they don’t want to host certain content because they don’t want to be associated with it, but we’re still learning whether or not bans are the most effective tool for dealing with blatant misinformation and disinformation, and it’s quite possible that leaving certain claims alone is actually a reasonable policy in some cases.
It would be nice if YouTube actually shared some of the underlying data on this, rather than just asking people to trust them, but I guess that’s really too much to ask these days.
